Preface
Page: ii-iii (2)
Author: Prasad Lokulwar, Basant Verma, N. Thillaiarasu, Kailash Kumar, Mahip Bartere and Dharam Singh
DOI: 10.2174/9879815079180122010002
Cutting Edge Techniques of Adaptive Machine Learning for Image Processing and Computer Vision
Page: 1-18 (18)
Author: P. Sasikumar* and T. Saravanan
DOI: 10.2174/9879815079180122010004
Abstract
Computers, systems, applications, and technology in general are becoming more widely used, advanced, scalable, and thus effective in modern times. Because of this widespread use, they undergo advancements on a regular basis. Modern life is also fast-paced, and this way of life necessitates that our systems behave similarly. Adaptive Machine Learning (AML) can do things that conventional machine learning cannot: it readily adjusts to new information and determines the significance of that information. Adaptive machine learning uses a variety of data collection, grouping, and analysis methods within its single-channeled structure. It gathers, analyses, and learns from the information; that is why it is adaptive: as long as new data is presented, the system can learn and update. This single-channeled system acts on every piece of input it receives in order to improve future forecasts and outcomes. Furthermore, since the entire process happens in real time, it can immediately adjust to new actions. High efficiency and precise accuracy are two of AML's main advantages. The system does not become outdated or redundant because it is constantly running in real time. AML is therefore best explained by three core concepts: agility, strength, and efficiency.
Agility helps systems respond rapidly and without hesitation. Their strength lets them achieve new levels of proficiency and accuracy, and their efficiency lets them find new ways to operate flawlessly at lower cost. This chapter covers the preparation, regularisation, and structure of deep neural networks such as convolutional and generative adversarial networks. New material in the reinforcement learning chapter includes a description of t-SNE, a standard dimensionality reduction approach, as well as multilayer perceptrons, autoencoders, and the word2vec network. These suggestions will help readers apply what they have learned.
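As a hedged illustration of the adaptive idea described above, the sketch below updates a model incrementally as new batches of data arrive, using scikit-learn's partial_fit interface; the synthetic stream and drifting labels are assumptions for demonstration, not the chapter's own data or system.

```python
# Illustrative sketch of adaptive (incremental) learning on a data stream.
# The stream and the drifting label rule are synthetic placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                       # online linear classifier
classes = np.array([0, 1])

for step in range(100):                       # each batch arrives "in real time"
    X_batch = rng.normal(size=(32, 10))
    # concept drifts slowly as the stream progresses
    y_batch = (X_batch[:, 0] + 0.01 * step > 0.5).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)   # adapt immediately

print("final coefficients:", model.coef_.round(2))
```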
Algorithm For Intelligent Systems
Page: 19-30 (12)
Author: Pratik Dhoke*, Pranay Saraf, Pawan Bhalandhare, Yogadhar Pandey, H.R. Deshmukh and Rahul Agrawal
DOI: 10.2174/9879815079180122010005
Abstract
In the 21st century, machines are becoming more and more intelligent. Terms like artificial intelligence, automation, robotics, etc., are becoming the new normal in today's tech-savvy world. All of this is made possible by complex programs (algorithms) that are able to perform such difficult tasks.
Intelligent systems are self-taught machines intended for a particular task. Intelligence is the ability to learn and use new information and skills. Just as a person learns from past data, a system can be programmed with various algorithms to make it intelligent.
In this chapter, we briefly discuss some of these algorithms, such as reinforcement learning, game theory, machine learning, decision trees, artificial neural networks, swarm intelligence, and natural language processing, along with their applications.
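For readers who want a concrete starting point, the sketch below trains one of the algorithms listed above, a decision tree, on scikit-learn's Iris dataset; it is a minimal illustration and not taken from the chapter itself.

```python
# Minimal decision tree example on a standard toy dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
```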
Clinical Decision Support System for Early Prediction of Congenital Heart Disease using Machine Learning Techniques
Page: 31-41 (11)
Author: Ritu Aggarwal* and Suneet Kumar
DOI: 10.2174/9879815079180122010006
Abstract
Congenital heart disease (CHD), which can be detected by a clinical decision support system (CDSS), is one of the main causes of death in young children. If it is diagnosed at an early stage, life-saving results can be obtained. Practitioners are not equally qualified and skilled, so detection of the disease and proper diagnosis are often delayed. The best prevention is early detection of the symptoms of this disease, and an automated medical diagnosis system is therefore built to improve accuracy and diagnose the disease. CHD involves heart malformations in newborn babies, and early detection is necessary before the life of a newborn child is endangered. CHD detection can be accomplished from clinical information using a CDSS and can also be performed from non-clinical data. In pregnant women, CHD in the unborn baby can be assessed from the mother's non-clinical data. To this end, different machine learning algorithms, including K-NN and MLP, are explored. For CHD detection, dataset selection is a major issue, and the dataset is used with Support Vector Machine, random forest, K-NN, and MLP algorithms. This proposed work develops a decision support system to detect congenital heart disease, using data mining techniques and machine learning algorithms to gain insight into the system and its accuracy rate. The system is designed and developed in a Python Jupyter notebook to implement the MLP. This paper presents an analysis using machine learning algorithms to develop an accurate and efficient model for heart disease prediction. The MLP model achieves a high accuracy of 97%.
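A minimal sketch of the kind of MLP pipeline the abstract describes is shown below, assuming a synthetic stand-in for the congenital heart disease data; it illustrates the workflow only and does not reproduce the reported 97% accuracy.

```python
# Hedged MLP classification sketch (Python / scikit-learn).
# The synthetic dataset stands in for the CHD data used in the chapter.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)
print("MLP test accuracy:", clf.score(X_test, y_test))
```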
A Review on Covid-19 Pandemic and Role of Multilingual Information Retrieval and Machine Translation for Managing its Effect
Page: 42-58 (17)
Author: Mangala Madankar* and Manoj Chandak
DOI: 10.2174/9879815079180122010007
Abstract
Novel Coronavirus disease 2019 (COVID-19) originated in the city of Wuhan, Hubei Province, Central China, and has spread rapidly to 215 nations to date. Around 178,837,204 confirmed cases and 3,880,450 deaths had been reported across the globe as of 23 June 2021. The exceptional outburst of the 2019 novel coronavirus, COVID-19, around the world has placed numerous governments in a precarious position. Most governments found no solution except imposing partial or full lockdown. Laboratories grew rapidly across the globe to test and confirm the rate of disease spread, and the disease had adverse effects on the global economy. This chapter focuses on using language technologies such as NLP, Multilingual Information Retrieval systems, and Machine Translation to evaluate the impact of COVID-19 outbreaks and to manage it.
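As a small illustration of the retrieval side of such a system, the sketch below indexes a few invented English documents (assumed to have already passed through machine translation) with TF-IDF and ranks them against a query; the documents, query, and pipeline are assumptions, not the chapter's corpus or system.

```python
# Toy document retrieval by TF-IDF cosine similarity.
# In a multilingual setting, documents and query would first be
# machine-translated into a common language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "COVID-19 lockdown guidelines issued by the government",
    "Vaccination drive expands to rural districts",
    "Economic impact of the pandemic on small businesses",
]
query = "pandemic effect on economy"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for doc, score in sorted(zip(docs, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {doc}")
```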
An Empirical View of Genetic Machine Learning based on Evolutionary Learning Computations
Page: 59-75 (17)
Author: M. Chandraprabha and Rajesh Kumar Dhanaraj*
DOI: 10.2174/9879815079180122010008
Abstract
In the past era, the only prerequisite was human intelligence, but today's world is full of artificial intelligence and the obstacles that must still be overcome. It could be said that everything from cars to household items must now be artificially intelligent. Everyone needs smartphones, vehicles, and machines, and some kind of intelligence is required by all of them at all times. Since computers have become such an integral part of our lives, it has become essential to develop new methods of human-computer interaction. Finding an intelligent way for machine and user to interact is one of the most crucial steps in meeting this requirement. The motivations for developing artificial intelligence and artificial life can be traced back to the dawn of the computer era. As always, evolution is a case of shifting phenomena. Adaptive computer systems are explicitly designed to search for problem-specific solutions in the face of changing circumstances. It has been said before that evolution is a massively parallel search method that never works on a single species or a single solution at any given time; many organisms are subjected to experiments and modifications. This chapter therefore aims to create artificial intelligence, superior to plain machine learning, that can master these problems, ranging from traditional methods of automatic reasoning to interaction strategies based on evolutionary algorithms. The result is evaluated with a piece of code that predicts the optimal test value after learning.
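A minimal sketch of the evolutionary search the chapter describes is given below: a population of candidate values is selected, crossed over, and mutated toward the optimum of a toy fitness function. The objective and parameters are illustrative assumptions, not the chapter's own code.

```python
# Minimal genetic algorithm over real-valued candidates.
import random

def fitness(x):                      # toy objective with its peak at x = 3
    return -(x - 3) ** 2

def evolve(pop_size=30, generations=50, mutation=0.3):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                           # crossover
            child += random.gauss(0, mutation)            # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print("best value found:", round(evolve(), 3))
```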
High-Performance Computing for Satellite Image Processing Using Apache Spark
Page: 76-91 (16)
Author: Pallavi Hiwarkar* and Mangala S. Madankar
DOI: 10.2174/9879815079180122010009
Abstract
High-Performance Computing is the aggregate computing capability applied to computational problems that are either too large or too time-consuming for traditional computers. This technology is used for processing satellite images and analysing massive data sets quickly and efficiently. Parallel processing and distributed computing methods are very important for processing satellite images quickly and efficiently. Parallel computing is a type of computation in which multiple processors execute multiple tasks simultaneously to rapidly process data using shared memory; here, satellite images are processed in parallel on a single computer. In distributed computing, multiple systems are used to process satellite images quickly. With the help of VMware, worker nodes running different operating systems (such as Linux and Windows) are created. In this project, a cluster is formed to connect master and slave nodes, and Apache Spark is one of the central concepts. Apache Spark is the framework used, and Resilient Distributed Datasets (RDDs) are one of its core concepts; RDDs are used to divide the dataset across the different nodes of the cluster.
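A hedged PySpark sketch of the RDD idea is shown below: a list of hypothetical satellite image files is partitioned across the cluster and processed in parallel; the file names and the processing stub are placeholders, not the project's actual pipeline.

```python
# Partition a list of (hypothetical) satellite image files across a
# Spark cluster and process them in parallel with an RDD.
from pyspark import SparkConf, SparkContext

def process_image(path):
    # Placeholder for real image processing (e.g., reading bands, filtering).
    return path, len(path)

conf = SparkConf().setAppName("satellite-image-processing")
sc = SparkContext(conf=conf)

image_paths = ["scene_%03d.tif" % i for i in range(100)]     # hypothetical files
rdd = sc.parallelize(image_paths, numSlices=8)               # split across worker nodes
results = rdd.map(process_image).collect()                   # process in parallel

print(results[:3])
sc.stop()
```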
Artificial Intelligence and Covid-19: A Practical Approach
Page: 92-109 (18)
Author: Md. Alimul Haque*, Shameemul Haque, Samah Alhazmi and D.N. Pandit
DOI: 10.2174/9879815079180122010010
Abstract
An unprecedented outbreak of pneumonia of unknown aetiology occurred in Wuhan, Hubei, China, in December 2019. The WHO reported a novel coronavirus causative agent outbreak with limited evidence of COVID-19. SARS-CoV-2 carries an ssRNA genome containing 29,891 nucleotides encoding 9,860 amino acids and shows different types of mutations, such as D614G. The epidemic of this virus was officially declared an emergency of International Concern by the WHO in January 2020. In the first week of April 2021, a new strain of coronavirus named N-440 was reported in Chandigarh, India. The number of laboratory-confirmed coronavirus cases has risen at an unprecedented pace worldwide, with more than 132,573,231 cases confirmed, including 2,876,411 deaths, as of 6 April 2021. The lack of funding to survive the epidemic, coupled with the concern of overloaded healthcare systems, has driven many countries into partial or total lockdown. This epidemic has caused chaos, and a rapid therapy for the disease would be a therapeutic medication with prior experience of use in patients to overcome the current pandemic. In the recent global emergency, researchers, clinicians and public health experts around the world continue to search for emerging technologies to help tackle the pandemic. In this chapter, we rely on numerous reputable sources to provide a detailed analysis of all the main pandemic-relevant aspects. This research illustrates not only the immediate safety effects connected with the COVID-19 epidemic but also its impact on the global socio-economy, education, social life and employment. Artificial Intelligence (AI) plays a significant supporting role in countering COVID-19 and may prompt solutions faster than we can otherwise achieve in different zones and applications. With technological developments in AI combined with improved computing capacity, the repurposing of AI-enhanced medications may be useful in cases of this virus. Artificial intelligence has become one of those technologies that can readily distinguish the transmission of this virus; extremely high-risk patients are recognized, which is significant for continuous control of the contamination. Artificial intelligence could genuinely assist us in battling this infection through network testing, clinical administration and advice on controlling the disease. This chapter addresses recent applications of AI in fighting this pandemic, e.g., monitoring of the epidemic, forecasting of hazards, screening and diagnosis, improvement of medical treatment, detection of fake news, strengthening lockdowns, preventing cyber-attacks and, finally, effective online education. This chapter will provide readers and researchers with a clear definition and general understanding of the field of this virus pandemic and the role of AI.
Intelligent Personalized E-Learning Platform using Machine Learning Algorithms
Page: 110-126 (17)
Author: Makram Soui*, Karthik Srinivasan* and Abdulaziz Albesher*
DOI: 10.2174/9879815079180122010011
Abstract
Personalized learning is a teaching method that allows the content and course of online training to be adapted to the individual profile of learners. The main task of adaptivity is the selection of the most appropriate content for the student in accordance with their digital footprint. In this work, we build a machine learning model to recommend the appropriate learning resources according to the student profile. To this end, we use Sequential Forward Selection (SFS) as a feature selection technique with AdaBoost as a classifier. The obtained results prove the efficiency of the proposed model, with an accuracy of 91.33% and a precision of 91.43%.
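A minimal sketch of the SFS-plus-AdaBoost combination, using scikit-learn's SequentialFeatureSelector on a synthetic stand-in for the learner-profile data, is shown below; it does not reproduce the reported 91.33%/91.43% figures.

```python
# Sequential forward feature selection wrapped around AdaBoost.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=6, random_state=1)

base = AdaBoostClassifier(n_estimators=50, random_state=1)
sfs = SequentialFeatureSelector(base, n_features_to_select=6, direction="forward", cv=3)
X_selected = sfs.fit_transform(X, y)

scores = cross_val_score(base, X_selected, y, cv=5)
print("selected feature mask:", sfs.get_support())
print("cross-validated accuracy: %.3f" % scores.mean())
```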
Automated Systems using AI in the Internet of Robotic Things: A New Paradigm for Robotics
Page: 127-144 (18)
Author: T. Saravanan* and P. Sasikumar
DOI: 10.2174/9879815079180122010012
Abstract
The Internet of Things (IoT) allows a huge number of “things” with unique addresses to connect and exchange data through the current internet or suitable network protocols. This chapter proposes a new framework for controlling and monitoring activities at deployment sites and in industrial automation systems, in which intelligent objects may monitor peripheral occurrences, fuse sensor data from a variety of sources, apply ad hoc, local, and distributed “machine intelligence” to choose the optimal course of action, and then act seamlessly to monitor static or dynamic location-aware robotic things in the real world, providing the means to employ them as the Internet of Robotic Things (IoRT). While multi-robot systems have progressed, and robots are continuously being enriched by vertical robotic services and simpler development functionalities, such robot-centric silos are insufficient for the constant and seamless support for which they were created. The important aspects of IoRT are highlighted in this chapter, including efficient coordination algorithms for multi-robot systems, optimization of multi-robot task allocation, and modelling and simulation of robot manipulators. The purpose of this chapter is to obtain a better understanding of IoRT architectural assimilation and to identify key research goals in this field.
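As one concrete illustration of a sub-problem named above, multi-robot task allocation, the sketch below solves a small assignment problem with the Hungarian algorithm; the cost matrix is an invented example, not the chapter's formulation.

```python
# Multi-robot task allocation as an assignment problem (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = cost (e.g., travel time) for robot i to perform task j
cost = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
])

robots, tasks = linear_sum_assignment(cost)     # minimise total cost
for r, t in zip(robots, tasks):
    print(f"robot {r} -> task {t} (cost {cost[r, t]})")
print("total cost:", cost[robots, tasks].sum())
```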
Missing Value Imputation and Estimation Methods for Arrhythmia Feature Selection Classification Using Machine Learning Algorithms
Page: 145-163 (19)
Author: Ritu Aggarwal* and Suneet Kumar
DOI: 10.2174/9879815079180122010013
Abstract
Electrocardiogram signal analysis is very difficult to classify cardiac
arrhythmia using machine learning methods. The ECG datasets normally come with
multiple missing values. The reason for the missing values is the faults or distortion.
When performing data mining, missing value imputation is the biggest task for data
preprocessing. This problem could arise due to incomplete medical datasets if the
incomplete missing values and cases were removed from the original database. To
produce a good quality dataset for better analyzing the clinical trials, the suitable
missing value imputation method is used. In this paper, we explore the different
machine-learning techniques for the computed missing value in the electrocardiogram
dataset. To estimate the missing imputation values, the collected data contains feature
dimensions with their attributes. The experiments to compute the missing values in the
dataset are carried out by using the four feature selection methods and imputation
methods. The implemented results are shown by combined features using IG
(information gain), GA (genetic algorithm) and the different machine learning
classifiers such as NB (naïve bayes), KNN (K-nearest neighbor), MLP (Multilayer
perception), and RF (Random forest). The GA (genetic algorithm) and IG (information
gain) are the best suitable methods for obtaining the results on lower dimensional
datasets with RMSE (Root mean square error. It efficiently calculates the best results
for missing values. These four classifiers are used to analyze the impact of imputation
methods. The best results for missing rate 10% to 40% are obtained by NB that is
0.657, 0.6541, 0.66, 0.657, and 0.657, as computed by RMSE (Root mean Square
error). It means that error will efficiently reduced by naïve bayes classifier.
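A hedged sketch of the imputation-plus-classification workflow is shown below: values are removed at an assumed 20% missing rate, imputed with a KNN imputer, the imputation RMSE is computed, and a naïve Bayes classifier is evaluated on the imputed data; the data and rate are synthetic placeholders, not the arrhythmia dataset.

```python
# Missing value imputation, RMSE of the imputation, and downstream NB accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=12, random_state=0)

X_missing = X.copy()
mask = rng.random(X.shape) < 0.20                # assumed 20% missing rate
X_missing[mask] = np.nan

imputer = KNNImputer(n_neighbors=5)
X_imputed = imputer.fit_transform(X_missing)

rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
acc = cross_val_score(GaussianNB(), X_imputed, y, cv=5).mean()
print(f"imputation RMSE: {rmse:.3f}, NB accuracy after imputation: {acc:.3f}")
```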
Analysis of Abstractive Text Summarization with Deep Learning Technique
Page: 164-196 (33)
Author: Shruti J. Sapra Thakur and Avinash S. Kapse*
DOI: 10.2174/9879815079180122010014
Abstract
In today's era, data in textual format has great importance, and useful information is extracted from it to design various kinds of information systems such as document generation, prediction systems, report generation, recommendation systems, language modelling, and many more. That is why techniques that reduce the amount of data while preserving the information and the various parameters concerning it are very important. One such technique is text summarization, which retains essential and useful information and is very simple and convenient compared to other summarization techniques. For processing data, Apache Kafka is used. This platform is useful for real-time streaming data pipelines and many applications related to them. With it, one can use native Apache Kafka APIs to populate data lakes, stream events to and from databases, and power machine learning and analytics workloads. The input side in this setup is a Spark-based platform for analytics. For the fast development of workflows for complex machine learning systems, TensorFlow has evolved as a significant machine learning library.
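As a generic illustration of abstractive summarization (not the specific Kafka/Spark/TensorFlow pipeline evaluated in the chapter), the sketch below runs a pretrained sequence-to-sequence model through the Hugging Face transformers pipeline.

```python
# Abstractive summarization with a pretrained encoder-decoder model.
from transformers import pipeline

summarizer = pipeline("summarization")          # downloads a default seq2seq model

article = (
    "Text summarization reduces a long document to a short version that "
    "retains its essential information. Abstractive methods generate new "
    "sentences with a sequence-to-sequence neural network rather than "
    "copying sentences from the source text."
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```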
Advanced Topics in Machine Learning
Page: 197-212 (16)
Author: Sana Zeba*, Md. Alimul Haque, Samah Alhazmi and Shameemul Haque
DOI: 10.2174/9879815079180122010015
Abstract
This chapter begins with the still-young but striking developments around the “Internet of Things (IoT)”. Machine learning is a part of Artificial Intelligence that grew out of the study of computational learning approaches and pattern recognition. Over the last few years, machine learning approaches have advanced rapidly across various sectors such as smart cities, finance, banking, and education. Today's machine learning is not the same as earlier machine learning because of various new advanced computing techniques. Machine learning is defined as data analysis that automates the building of analytical models. The iterative aspect of learning algorithms is significant: as models are exposed to new datasets, they are able to adjust autonomously. Learning from earlier computations generates reliable, efficient, repeatable decisions and experimental results. Machine learning measures have therefore been used to protect various smart applications from illegal activities, threats, and attacks. Furthermore, machine learning has provided suitable solutions for preserving the security of various advanced applications. According to the patent service IFI Claims, the patent growth rate in the machine learning field was 34% from 2013 to 2017, and 60% of companies worldwide use various learning algorithms for numerous purposes. In this chapter, we discuss efficient, advanced, and revolutionary machine learning algorithms in detail.
Introduction
This book is a quick review of machine learning methods for engineering applications. It provides an introduction to the principles of machine learning and common algorithms in the first section. The following chapters summarize and analyze the existing scholarly work and discuss some general issues in this field. Next, it offers some guidelines on applying machine learning methods to software engineering tasks. Finally, it gives an outlook on some of the future developments and possibly new research areas of machine learning and artificial intelligence in general. Techniques highlighted in the book include Bayesian models, support vector machines, decision tree induction, regression analysis, and recurrent and convolutional neural networks. It also intends to be a reference book.
Key Features:
- Describes real-world problems that can be solved using machine learning
- Explains methods for directly applying machine learning techniques to concrete real-world problems
- Explains concepts used in Industry 4.0 platforms, including the use and integration of AI, ML, Big Data, NLP, and the Internet of Things (IoT)
- Does not require prior knowledge of machine learning
This book is meant to be an introduction to artificial intelligence (AI), machine learning, and its applications in Industry 4.0. It explains the basic mathematical principles but is intended to be understandable for readers who do not have a background in advanced mathematics.