International Journal of Innovative Research in Computer and Communication Engineering (IJIRCCE)
https://myresearchjournals.com/index.php/IJIRCCE
The International Journal of Innovative Research in Computer and Communication Engineering is a high-impact-factor, open-access, international, peer-reviewed monthly journal that publishes original research articles in the fields of Computer Science, Information Technology, and Communication Engineering. Its editorial board comprises proficient academicians and researchers from across the world. The core vision of IJIRCCE is to disseminate innovative information and technology in support of academic and research professionals in Computer Science, Information Technology, and Communication Engineering. The journal also invites clearly written reviews, short communications, and notes dealing with the numerous disciplines covered by these fields, and it accepts extended versions of papers previously published in conferences and/or journals.
Publisher: Ess & Ess Research Publications. Language: en-US. ISSN: 2320-9798. All articles in this issue are published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0).

An Efficient Way for Detecting Bad Customer Reviews Using NLP
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14865
In today's world of digitization and widespread internet availability, reviews given by customers play a very important role in shaping the next buying pattern of a customer. Companies such as Amazon and Flipkart provide platforms for their customers to share their real experience of a product, both to boost sales and to identify prospective customers. Natural Language Processing (NLP) plays an important role in characterizing customers on the basis of their positive or negative reviews. In this paper we deploy NLP techniques to identify positive or negative sentiment using sentiment analysis.
Authors: Vishal Paranjape, Saurabh Sharma. Published 2024-03-25, Vol. 12, Special Issue, pp. 1-7. DOI: 10.15680/IJIRCCE.2024.1203501
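The abstract does not name a specific classifier, so the following is a minimal sketch of one common way such a review-sentiment detector is built: TF-IDF features feeding a scikit-learn logistic regression. The toy reviews, labels, and model choice are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sentiment-classification sketch (not the authors' exact pipeline):
# TF-IDF features + logistic regression labelling reviews positive/negative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled reviews (1 = positive, 0 = negative); a real study would use
# a scraped or public review corpus instead.
reviews = [
    "Great product, totally worth the price",
    "Battery died in two days, very disappointed",
    "Excellent build quality and fast delivery",
    "Stopped working after a week, bad experience",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["worst purchase ever", "really happy with this item"]))
```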
Boosting Security in the Age of Artificial Intelligence: An In-depth Analysis of Sophisticated Watermarking Methods and Their Challenges
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14866
In the present era of widespread artificial intelligence (AI) and digital content, the importance of watermarking techniques has increased. This survey, "Survey on Watermarking Methods in the AI Domain and Beyond," provides a detailed review of the applications and developments of watermarking techniques in various fields. It highlights the latest watermarking technologies for digital media, software, and especially AI models, and shows how these technologies help protect intellectual property, data integrity, and authentication. It also discusses the challenges and future directions of watermarking, including the requirements of robustness, invisibility, and resistance to attacks. The survey aims to give researchers, developers, and industry experts a better understanding of current progress and upcoming opportunities in this field.
Authors: Saurabh Verma, Mukta Bhatele, Akhilesh A. Waoo. Published 2024-03-25, Vol. 12, Special Issue, pp. 8-12. DOI: 10.15680/IJIRCCE.2024.1203502
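As a concrete, deliberately simple illustration of the embedding idea the survey deals with, the sketch below hides and recovers a bit string in the least significant bits of an image array using NumPy. LSB embedding is a classical baseline chosen here only for brevity; it is not one of the robust schemes the survey analyzes.

```python
# Minimal least-significant-bit (LSB) image watermarking sketch with NumPy.
# A generic classical technique used only to illustrate invisible embedding;
# robust media and AI-model watermarking schemes are far more involved.
import numpy as np

def embed_watermark(cover: np.ndarray, mark_bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the LSB of each of the first len(mark_bits) pixels."""
    stego = cover.copy().ravel()
    stego[: mark_bits.size] = (stego[: mark_bits.size] & 0xFE) | mark_bits
    return stego.reshape(cover.shape)

def extract_watermark(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark bits back out of the LSBs."""
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in 8-bit image
mark = np.random.randint(0, 2, size=128, dtype=np.uint8)          # 128-bit watermark
stego = embed_watermark(cover, mark)
assert np.array_equal(extract_watermark(stego, mark.size), mark)  # watermark survives
```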
The Influence of Artificial Intelligence on Cybersecurity
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14867
As the digital landscape continues to evolve, the integration of Artificial Intelligence (AI) into various sectors has become increasingly prevalent, and one area where AI is exerting a profound impact is data system security. This paper explores the multifaceted influence of AI on the security measures employed to safeguard sensitive data within diverse technological environments, covering its applications in threat detection, anomaly recognition, and decision-making processes. The first section explains how AI technologies, particularly machine learning algorithms, enable security systems to adapt dynamically to emerging threats: by analyzing vast datasets in real time, AI-driven security solutions can identify anomalous patterns and potential vulnerabilities, enabling proactive threat mitigation and rapid response. The paper then discusses the role of AI in automating routine security tasks, which relieves the burden on human operators and reduces the likelihood of human error; through intelligent automation, AI streamlines security operations, improves efficiency, and lets organizations allocate resources more effectively against sophisticated cyber threats. The use of AI for predictive analytics is also explored, highlighting its ability to forecast potential security breaches from historical data and emerging trends so that organizations can fortify their defenses before attacks occur and minimize potential damage. The paper further examines the ethical considerations inherent in AI-driven security systems, emphasizing transparency, accountability, and bias mitigation: while AI holds immense potential for bolstering data system security, ethical dilemmas such as privacy infringement and discriminatory practices demand careful scrutiny and regulatory oversight. Finally, the paper outlines future directions and challenges in leveraging AI for data system security, including the need for continued research and development to enhance the robustness and reliability of AI algorithms, and for collaboration between industry stakeholders, policymakers, and cybersecurity experts to navigate the evolving threat landscape effectively. In conclusion, the paper underscores the transformative impact of AI on data system security, illuminating its capacity to revolutionize threat detection, mitigation, and response strategies; by harnessing AI-driven technologies responsibly and ethically, organizations can fortify their defenses against an increasingly sophisticated array of cyber threats, safeguarding critical data assets and preserving trust in digital ecosystems.
Authors: Suman Kashyap. Published 2024-03-25, Vol. 12, Special Issue, pp. 13-22. DOI: 10.15680/IJIRCCE.2024.1203503

QoS in Wireless Sensor Networks: Fault Tolerance and Efficient Bandwidth Allocation
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14868
Wireless sensor networks (WSNs) are an integral part of Industrial Revolution 4.0. The traffic efficiency of any network depends on link failures and the available bandwidth. This paper discusses route discovery in the case of a link failure and a control-system approach to efficient bandwidth allocation; the control-system approach is used to address the twin issues of link failure and bandwidth allocation. The main function of a WSN is to gather information about various parameters, notably process variables in industry, air-quality variables in ambient air monitoring, weather forecasting, cyclone alerts, and congestion and other parameters in traffic routing. The outputs from the WSN are fed into plant systems either to display or to control these variables, so a continuous supply of information from the WSN to the plant systems is essential for efficient and reliable display and control. Improving quality-of-service parameters such as signal-to-noise ratio, fault tolerance, bandwidth allocation, stability, and load balancing is therefore vital when designing a WSN. We discuss two QoS parameters in WSNs, namely fault tolerance in the case of link failures and efficient bandwidth allocation; the links here are wireless in nature.
Authors: Deepa Naik. Published 2024-03-25, Vol. 12, Special Issue, pp. 23-29. DOI: 10.15680/IJIRCCE.2024.1203504

Attrition Analytics: Unveiling the Best Model for Predicting Employee Retention
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14869
Organizations in all sectors are deeply concerned about employee attrition, as it affects output, morale, and overall performance; keeping a stable and productive workforce requires anticipating and controlling attrition. In this work we investigate how well different machine learning models predict attrition among employees. We evaluate Decision Tree, Random Forest, Logistic Regression, and Naive Bayes classifiers on a dataset that includes performance, job-related, and demographic characteristics of employees, and we use GridSearchCV and other hyperparameter tuning approaches to maximize model performance. Our findings provide important new insight into how well various machine learning methods predict attrition. The study's conclusions extend the field of attrition analytics and offer practical advice to businesses looking to improve retention tactics and reduce employee attrition.
Authors: Divya Pandey, Zeba Vishwakarma, Mallika Dwivedi. Published 2024-03-25, Vol. 12, Special Issue, pp. 30-37. DOI: 10.15680/IJIRCCE.2024.1203505
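A minimal sketch of the comparison described above, assuming a tabular attrition dataset: the four named classifiers are cross-validated and the random forest is tuned with GridSearchCV. The file name, column names, and parameter grid are hypothetical, not the authors' actual setup.

```python
# Hedged sketch: compare four classifiers on an employee-attrition table,
# then tune the random forest with GridSearchCV.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("employee_attrition.csv")           # hypothetical file
X = df.drop(columns=["Attrition"])                    # numeric features assumed
y = df["Attrition"]

models = {
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())

# Hyperparameter tuning for the strongest candidate (illustrative grid).
grid = GridSearchCV(
    RandomForestClassifier(),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```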
Enhancing Cloud Data Security through the Integration of Machine Learning Model & Homomorphic Encryption Technology
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14870
In recent decades cloud computing has grown rapidly, driven by its adoption among large and small organizations in every field; its advantages and flexible nature have led numerous organizations to adopt it. The main benefit for customers is that they do not have to spend huge amounts on the systems needed to run their workloads, because the cloud provides resources such as software, hardware, and data storage. The significant challenge facing both cloud providers and customers is the security of data stored on third-party cloud servers: customers trust providers to keep their data safe even while it is being accessed or travelling over the internet or the network, and maintaining that trust is essential to the service provider's reputation. Here we review a trained machine learning model that analyzes access patterns and data-usage frequency. By monitoring these patterns and the frequency of data usage, the model predicts security threats and applies homomorphic encryption to the data before it is transmitted over the network, which enhances privacy and security.
Authors: Pankaj Pali, Jaya Choubey, Abhishek Patel. Published 2024-03-25, Vol. 12, Special Issue, pp. 38-42. DOI: 10.15680/IJIRCCE.2024.1203506
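The paper does not name an encryption library, so the sketch below only illustrates the general idea: a stubbed usage-pattern check decides when to apply homomorphic encryption before data leaves the client. The python-paillier (`phe`) package and the additively homomorphic Paillier scheme are assumptions standing in for whatever homomorphic technology the authors use.

```python
# Hedged sketch: a stand-in "access pattern" model flags risky transfers, and
# flagged values are Paillier-encrypted (additively homomorphic) before upload.
from phe import paillier   # python-paillier; an assumed library choice

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def looks_risky(accesses_last_hour: int) -> bool:
    # Stand-in for the trained model that analyzes access patterns and usage frequency.
    return accesses_last_hour > 50

reading = 42.7                                      # sensitive numeric value to upload
if looks_risky(accesses_last_hour=73):
    payload = public_key.encrypt(reading)           # ciphertext sent to the cloud
    # Paillier is additively homomorphic: the server can add encrypted values
    # without ever decrypting them.
    aggregated = payload + public_key.encrypt(10.0)
    print(private_key.decrypt(aggregated))          # 52.7, recovered only by the data owner
else:
    payload = reading                               # low-risk data sent in the clear
```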
Framework for Detecting Distributed Denial of Services Attack in Cloud Environment
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14871
Distributed Denial of Service (DDoS) remains one of the most significant threats to online applications: attackers can launch a DDoS attack with fairly simple steps yet high effectiveness, slowing or blocking consumers' access to services. To detect and mitigate such attacks, machine learning algorithms such as Naive Bayes, decision tree, k-nearest neighbours (k-NN), and random forest are used. Detection proceeds in three steps, information gathering, preprocessing, and feature extraction, before a "normal or DDoS" classification is made on the NSL-KDD dataset. The same algorithms behave differently depending on the features selected, so their DDoS-detection performance is compared to indicate the best algorithm. DDoS attempts have been among the most powerful attacks of recent years, and a network intrusion detection system (NIDS) must be designed to counter the attackers' latest strategies and trends. In this paper we propose an NIDS capable of detecting current DDoS attacks as well as new forms. The main characteristic of our NIDS is the combination of various classifiers in an ensemble model, the idea being that each classifier can target different aspects or types of intrusion, giving a more effective mechanism for protecting against new intrusions. Additionally, we perform a detailed study of DDoS attacks and verify that a reduced feature set [27, 28] is essential to enhance accuracy. Evaluated on the NSL-KDD dataset with this reduced feature set, the proposed NIDS detects 99.1 per cent of active DDoS attacks, and we compare our results with existing methods. Our NIDS is able to learn and keep pace with existing and evolving DDoS attack trends.
Authors: Manish Kumar Rajak, Ravindra Tiwari. Published 2024-03-25, Vol. 12, Special Issue, pp. 43-48. DOI: 10.15680/IJIRCCE.2024.1203507
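A hedged sketch of the ensemble NIDS idea, assuming an NSL-KDD-style table with a reduced numeric feature set: the four named classifiers are combined with soft voting. The file name and preprocessing are illustrative, not the authors' exact framework.

```python
# Hedged sketch: Naive Bayes, decision tree, k-NN and random forest combined
# in a soft-voting ensemble for "normal vs DDoS" classification.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("nsl_kdd_reduced.csv")             # hypothetical preprocessed dump
X = df.drop(columns=["label"])                       # reduced numeric feature set assumed
y = (df["label"] != "normal").astype(int)            # 1 = attack, 0 = normal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier()),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="soft",   # each learner can target different aspects of intrusion
)
ensemble.fit(X_train, y_train)
print("detection accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```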
Optimizing Hyperparameters for Advanced Deep Neural Networks to Predict Solar Still Efficiency
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14872
As the demand for sustainable water treatment solutions grows, passive solar distillation emerges as a promising technology for the efficient desalination of brackish water. This study explores the optimization of solar distillation systems for use in residential, agricultural, and industrial settings. The performance of these systems is influenced by several factors, including solar radiation, ambient conditions, wind speed, and the design of the system itself. Adopting cutting-edge machine learning techniques, this research presents a novel method employing deep neural networks, with a focus on the Multilayer Perceptron (MLP) model, to enhance the accuracy of yield predictions. Through a detailed comparison of hyperparameter optimization methods, the integration of Particle Swarm Optimization (PSO) with the MLP model was identified as the most effective approach. This combined PSO-MLP model, particularly when applied to a specific solar-collector design, achieved remarkable results, with a coefficient of determination (COD) of 0.98167 and a mean squared error (MSE) of 0.00006. The study illustrates the profound potential of advanced computational techniques for improving the efficiency of solar distillation systems, contributing valuable insights to the field of sustainable water purification.
Authors: Halimi Soufiane Benmoussa Ahmed Hamidatou Taha Anfal Benrezkallah Soualah Mondir Amira Mohammed Toufik Babahammou Hammou Ridha. Published 2024-03-25, Vol. 12, Special Issue, pp. 49-57. DOI: 10.15680/IJIRCCE.2024.1203508
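A compact illustration of PSO-driven hyperparameter tuning for an MLP, assuming synthetic regression data in place of the solar-still measurements. The hand-rolled swarm, the two tuned hyperparameters (hidden-layer width and learning rate), and all swarm constants are assumptions for demonstration only, not the paper's configuration.

```python
# Hedged sketch: a tiny particle swarm searches MLP hyperparameters by
# minimizing cross-validated mean squared error.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=300, n_features=6, noise=0.1, random_state=0)

def mse_of(position):
    """Cross-validated MSE of an MLP with the given (hidden units, learning rate)."""
    hidden, lr = int(position[0]), float(position[1])
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         max_iter=400, random_state=0)
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

rng = np.random.default_rng(0)
low, high = np.array([4.0, 1e-4]), np.array([64.0, 1e-1])
pos = rng.uniform(low, high, size=(6, 2))            # 6 particles, 2 hyperparameters
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([mse_of(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]

for _ in range(8):                                    # short PSO run
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    cost = np.array([mse_of(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[pbest_cost.argmin()]

print("best (hidden units, learning rate):", gbest, "with MSE", pbest_cost.min())
```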
Comparative Analysis of Supervised and Unsupervised Learning Methods for Pattern Classification
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14873
This article compares and contrasts supervised and unsupervised learning approaches to determine which is more effective for classifying patterns; classification is among the most significant uses of machine learning algorithms. Our research shows that while the supervised error-backpropagation algorithm handles a great deal of nonlinear real-time work, the unsupervised Kohonen Self-Organizing Map (KSOM) also performs very well on the classification tasks in our study.
Authors: Ranu Sahu, Khushboo Choubey. Published 2024-03-25, Vol. 12, Special Issue, pp. 58-63. DOI: 10.15680/IJIRCCE.2024.1203509

Investigating Optimization Methods and Loss Functions for Training Neural Networks: A Comparative Study
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14874
This study examines the efficacy of several optimization techniques and loss functions for training neural networks for plant disease prediction. Using a comparative research design, various combinations of optimizers and loss functions are investigated to assess their effect on model performance. The study uses a dataset of images of both healthy and diseased plants, together with labels indicating whether or not each plant is diseased. Plant disease prediction models can be made more accurate by employing the best optimization strategies and loss functions, which are identified through extensive testing and research. The results further our knowledge of best practices for neural network training in agricultural applications, especially with regard to disease control and plant health monitoring.
Authors: Mallika Dwivedi, Divya Pandey, Jaya Choubey. Published 2024-03-25, Vol. 12, Special Issue, pp. 64-68. DOI: 10.15680/IJIRCCE.2024.1203510
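A small sketch of the comparative protocol just described, assuming placeholder image tensors instead of the plant-leaf dataset: a tiny Keras CNN is retrained for every (optimizer, loss) pair and ranked by validation accuracy. The architecture and the grid are illustrative assumptions, not the study's actual configuration.

```python
# Hedged sketch: train the same small CNN under each (optimizer, loss) pair
# and compare validation accuracy. Random arrays stand in for leaf images.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 64, 64, 3).astype("float32")            # stand-in leaf images
y = np.random.randint(0, 2, size=(200, 1)).astype("float32")    # 1 = diseased, 0 = healthy

def build_cnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

results = {}
for opt in ["adam", "sgd", "rmsprop"]:
    for loss in ["binary_crossentropy", "mse"]:
        model = build_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["binary_accuracy"])
        history = model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
        results[(opt, loss)] = history.history["val_binary_accuracy"][-1]

for combo, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(combo, f"val_accuracy={acc:.3f}")
```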
Artificial Intelligence and Machine Learning in Sports Medicine
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14875
Orthopedic sports medicine is starting to feel the impact of machine learning (ML), which is transforming healthcare procedures. By utilizing machine learning algorithms, orthopedic sports medicine professionals can now analyze enormous volumes of patient data to obtain insights that were previously unreachable through traditional approaches. Large datasets can be examined more easily with machine learning to find complex relationships between input and output variables; these relationships may be more intricate than what conventional statistical techniques can capture, allowing for precise output predictions. For healthcare data, supervised learning is the most popular machine learning technique, and supervised learning algorithms have been applied in recent research to forecast individual patient outcomes after surgery, such as hip arthroscopy. By utilizing extensive volumes of patient data, these algorithms have the potential to improve preoperative planning, optimize surgical procedures, and improve postoperative care, ultimately improving patient outcomes in orthopedic surgery.
Authors: Kuldeep Soni, Nidhi Pateria, Gulafsha Anjum. Published 2024-03-25, Vol. 12, Special Issue, pp. 69-73. DOI: 10.15680/IJIRCCE.2024.1203511

Combating Loneliness Using AI
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14876
In today's pervasive loneliness pandemic, affecting all age groups, innovative technologies such as replicas, chatbots, and robots emerge as potent antidotes. Beyond age barriers, from the young to the elderly, the epidemic of isolation looms large. These advanced technologies, with their empathetic interfaces, offer a transformative solution, mitigating loneliness by fostering meaningful connections. This abstract envisions a future where artificial companionship becomes a pivotal force in elevating well-being, providing a respite from the prevailing solitude. Through replicative technologies, society may find solace in an interconnected world, where the quest for companionship is met with the reassuring presence of artificial entities.
Authors: Roopali Kachhi. Published 2024-03-25, Vol. 12, Special Issue, pp. 74-77. DOI: 10.15680/IJIRCCE.2024.1203512

Predicting Pizza Costs: An Evaluation of Random Forest and TPOT AutoML
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14877
In recent years pizza has become more and more popular in India. Pizza has been available in the US for a while, but in recent years its craze in India has really taken off, so major pizza-related businesses are seeing great investment and development opportunities in this market. Pizza is quickly rising in popularity as a preferred food for many Indian residents, and the number of restaurants serving it is expanding. To anticipate pizza prices, this study assesses how effective the two machine learning techniques Random Forest and TPOT (Tree-based Pipeline Optimization Tool) AutoML are. A dataset comprising several pizza variables, such as diameter, toppings, and extra ingredients, was used for model training and assessment. Metrics such as mean squared error and R-squared were used to evaluate the prediction accuracy of both models after they had been trained and tested according to standard protocols. The findings suggest that while TPOT AutoML performs somewhat better in some circumstances, both Random Forest and TPOT AutoML exhibit encouraging performance in forecasting pizza costs. These results demonstrate how well machine learning methods can forecast intricate pricing schemes in the food sector.
Authors: Abhishek Singh, Zohaib Hasan, Nirdesh Jai. Published 2024-03-25, Vol. 12, Special Issue, pp. 78-85. DOI: 10.15680/IJIRCCE.2024.1203513
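A hedged sketch of the head-to-head evaluation above, assuming a hypothetical pizza-price CSV: a plain random forest is scored against a short TPOT search using MSE and R-squared, and the winning TPOT pipeline is exported for inspection. The file name, columns, and TPOT budget are illustrative, not the authors' exact setup.

```python
# Hedged sketch: Random Forest versus a short TPOT AutoML search on a
# pizza-price table, compared by MSE and R-squared.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor

df = pd.read_csv("pizza_prices.csv")                         # hypothetical dataset
X = pd.get_dummies(df.drop(columns=["price"]))               # diameter, toppings, extras...
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_train, y_train)
print("RF   MSE:", mean_squared_error(y_test, rf.predict(X_test)),
      "R2:", r2_score(y_test, rf.predict(X_test)))

tpot = TPOTRegressor(generations=5, population_size=20, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)
print("TPOT MSE:", mean_squared_error(y_test, tpot.predict(X_test)),
      "R2:", r2_score(y_test, tpot.predict(X_test)))
tpot.export("best_pizza_pipeline.py")                        # inspect the winning pipeline
```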
A Review Paper on Machine Learning in Pattern Recognition
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14878
Supervised or unsupervised classification is the main goal of pattern recognition. The statistical approach is the most popular of the many frameworks in which pattern recognition was initially formulated, and more attention has recently been paid to neural network techniques and to methods derived from statistical learning theory. This requires careful attention to the design of the recognition system, and various issues arise in designing identification systems: the definition of pattern classes, the detection and extraction environment, representation and feature selection, cluster analysis, classifier design, learning, and the selection of training and test samples. There is no general solution to the problem of recognizing complex, arbitrary patterns. Emerging applications in data mining, web search, and multimedia retrieval require appropriate and effective techniques. The main goal of this article is to provide a detailed description of the different methods that can be used at the different stages of a pattern recognition system, and to highlight open research topics and applications in this challenging field.
Authors: Nidhi Pateriya, Neha Thakre, Gulafsha Anjum. Published 2024-03-25, Vol. 12, Special Issue, pp. 86-90. DOI: 10.15680/IJIRCCE.2024.1203514

Machine Learning in Healthcare: A Deep Dive into Classification, Limitations, Prospects, and Hurdles
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14879
Recently, a range of advanced techniques, such as artificial intelligence and machine learning, have been used to analyse health-related data. Machine learning applications are helping medical practitioners become more proficient at diagnosing and treating patients, and numerous academics have used medical data to find trends and diagnose illnesses. However, few papers in the current literature discuss using machine learning algorithms to increase the efficiency and accuracy of handling healthcare data. We investigated how well the accuracy and efficiency of time-series healthcare parameters for heart-rate data transfer can be enhanced using machine learning methods. In this research we explored a number of machine learning techniques for use in healthcare applications, following a thorough introduction to and analysis of supervised and unsupervised machine learning algorithms.
Authors: Gulafsha Anjum, Neha Thakre, Kuldeep Soni. Published 2024-03-25, Vol. 12, Special Issue, pp. 91-96. DOI: 10.15680/IJIRCCE.2024.1203515

Comparative Study of Neural Network Models for Prediction of Chaotic System
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14880
Research into forecasting the internal dynamics of chaotic systems is vital, because the chaotic character of climatic systems is a major barrier to accurate climate prediction. Numerous studies have addressed numerical simulation approaches for the simulation and prediction of chaotic systems, yet such systems remain difficult to anticipate with numerical simulation owing to issues such as sensitivity to initial values, error accumulation, and inappropriate parameterization of physical processes. Here we investigated neural network architectures. The present literature study provides confidence in the efficacy of artificial neural networks as a powerful tool for predicting the internal dynamics of climatological data. Research demonstrates that ANNs hold potential for climatological prediction and analysis, since they are ideally adapted to problems involving complex nonlinear interactions.
Authors: Ratnesh Kumar Namdeo, Om Prakash Chandrakar. Published 2024-03-25, Vol. 12, Special Issue, pp. 97-102. DOI: 10.15680/IJIRCCE.2024.1203516

Comparison of Sentiment Analysis using VADER and RoBERTa
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14881
Analysing human emotions about a product or an incident is called sentiment analysis. In the current world, where there are countless platforms on which people freely express their feelings, there is an overload of sentiment-bearing data, and it has become a challenge to gather this data and extract useful information from it. Sentiment analysis was an important domain even before today's sophisticated machine learning and deep learning techniques became available. By analyzing social media, product reviews, and customer support interactions, companies can promptly respond to emerging trends, manage their brand reputation, and enhance customer satisfaction. This paper analyzes people's sentiments about a product on Amazon. It first uses a basic rule-based model called VADER (Valence Aware Dictionary and sEntiment Reasoner) and then a more sophisticated deep learning approach called RoBERTa (Robustly Optimized Bidirectional Encoder Representations from Transformers approach).
Authors: Zohaib Hasan, Abhishek Singh, Rajender Kumar Yadav. Published 2024-03-25, Vol. 12, Special Issue, pp. 103-109. DOI: 10.15680/IJIRCCE.2024.1203517
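A minimal sketch of the two approaches compared in the paper above: VADER's rule-based polarity scores next to a RoBERTa sentiment model served through the Hugging Face transformers pipeline. The specific checkpoint name is an assumption; any RoBERTa model fine-tuned for sentiment could be substituted.

```python
# Hedged sketch: score the same review with VADER (lexicon rules) and a
# RoBERTa sentiment model (transformer fine-tuned for sentiment).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from transformers import pipeline

review = "The headphones sound amazing but the battery life is disappointing."

vader = SentimentIntensityAnalyzer()
print("VADER:", vader.polarity_scores(review))      # neg/neu/pos plus compound score

roberta = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",  # assumed checkpoint
)
print("RoBERTa:", roberta(review))                  # label (negative/neutral/positive) + confidence
```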
Sentiment Analysis View for Brand Reputation Monitoring on Social Media
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14882
This paper offers a comprehensive examination of brand management and reputation in the digital era, highlighting the significance of creating a distinct identity for companies or products so that they stand out in a crowded market. The emphasis on brand credibility as a key factor in consumer choice underscores the necessity for companies to establish and maintain a positive image. The role of online reputation management, particularly on social media platforms, is rightly spotlighted as a critical aspect of modern brand strategy; this approach involves not only monitoring and influencing how a brand is perceived across various online channels but also leveraging the power of social media influencers and internal stakeholders to enhance brand image and credibility. The paper's exploration of how companies co-create and manage their brand image and reputation through strategic engagement with internal stakeholders and social media influencers, together with the use of sentiment analysis for brand reputation analysis, offers valuable insights into effective brand management strategies in the digital age. This comprehensive approach to understanding and influencing brand perception online is crucial for companies looking to maintain a competitive edge and foster positive relationships with their consumers.
Authors: Pallavi Suryavanshi, Sharad Gangele. Published 2024-03-25, Vol. 12, Special Issue, pp. 110-115. DOI: 10.15680/IJIRCCE.2024.1203518

Applying Machine Learning Frameworks to Analyze Large Amounts of Data from IoT Networks in Order to Improve the Efficiency of Cloud Computing Applications: A Review
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14883
Next-generation cloud computing designs emerged as a response to the shortcomings of traditional cloud computing concepts. Shallow learning algorithms are unable to handle the massive volumes of data produced by the developing cloud computing infrastructures, and researchers have recently begun to pay close attention to deep learning algorithms because of their capacity to handle large-scale datasets and to address issues in newly developed cloud computing systems. However, no thorough literature study is currently available on the application of deep learning architectures to cloud computing system development for such challenging problems. To close this gap, we carried out a broad literature review of the uses of deep learning architectures in cutting-edge cloud computing systems. The new cloud computing systems also have ramifications for data generation: the massive amounts of data produced by the new paradigms are known as big data, and there is a risk that the data generated will never be examined to uncover fresh information.
Authors: Shivam Tiwari, Abhishek Vishwakarma. Published 2024-03-25, Vol. 12, Special Issue, pp. 116-118. DOI: 10.15680/IJIRCCE.2024.1203519

Study On Evolution in Image Processing Using Deep Learning
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14884
The amount of data we create and use is going to be huge, around 180 zettabytes by 2025, posing big challenges for companies and society. The data is not only getting bigger but also more complicated, making it harder to understand and process. In the last 20 years, data science tools for studying this data have become very popular because they are good at handling complex information with high accuracy. Dealing with pictures is even tougher, because as pictures get better there is more data to handle. While traditional machine learning approaches are still widely used, researchers are excited about newer approaches using artificial intelligence (AI), especially neural networks, to understand and work with images. Our study looked at how this AI is getting better and how we can make it work even more efficiently with pictures. Even though progress has been made, many challenges remain; we discuss the recent improvements and suggest what future research could look like in this rapidly changing field.
Authors: Neha Thakre, Nidhi Pateriya, Kuldeep Soni. Published 2024-03-25, Vol. 12, Special Issue, pp. 119-124. DOI: 10.15680/IJIRCCE.2024.1203520

Predicting Cardiac Arrest Using Machine Learning with Zero Coding
https://myresearchjournals.com/index.php/IJIRCCE/article/view/14885
Heart disease cases are rising rapidly, yet predicting such an illness in advance is not an easy task: the results of several medical and pathological tests have to be combined in a complicated manner to diagnose cardiac disease. Because of the daunting complexity of the problem and the outstanding improvements in machine learning technology, doctors, researchers, and academicians are extremely interested in how Artificial Intelligence and Machine Learning can be used to anticipate heart illnesses; ML has previously generated forecasts that are fast, precise, and efficient. However, the main obstacle to using machine learning in this field appears to be a shortage of coding skills, and here Alteryx Predictive Modeling offers a solution. In this research project a cardiac-disease patient dataset is analyzed carefully and interactively, with Alteryx used for automated data processing. The dataset contains the parameters diagnosed as major medical conditions contributing to cardiac arrest. The machine learning model is then trained and predictions are made with an XGBoost classifier. We utilize Predictive Modeling with Alteryx, which works like a semi-automated machine learning process: certain parameters can be chosen manually while many machine learning tasks are handled automatically, which greatly speeds up model creation, and no code needs to be written at all.
Authors: Zeba Vishwakarma, Abhishek Singh. Published 2024-03-25, Vol. 12, Special Issue, pp. 125-129. DOI: 10.15680/IJIRCCE.2024.1203521
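The study itself relies on Alteryx's no-code tooling, so the snippet below is only a rough Python equivalent of the same idea: an XGBoost classifier trained on a tabular heart-disease dataset. The file name and target column are hypothetical.

```python
# Hedged sketch: XGBoost classifier on a heart-disease table, as a code-level
# analogue of the no-code Alteryx workflow described above.
import pandas as pd
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("heart_disease.csv")                # hypothetical diagnostic dataset
X = df.drop(columns=["target"])                       # age, cholesterol, blood pressure, ...
y = df["target"]                                      # 1 = cardiac condition present

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    stratify=y, random_state=7)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```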