Preface
Page: i-ii (2)
Author: Abhijit Banubakode, Sunita Dhotre, Chhaya S. Gosavi, G. S. Mate, Nuzhat Faiz Shaikh and Sandhya Arora
DOI: 10.2174/9789815179606124010001
Artificial Taste Perception of Tea Beverage Using Machine Learning
Page: 1-26 (26)
Author: Amruta Bajirao Patil and Mrinal Rahul Bachute*
DOI: 10.2174/9789815179606124010003
Abstract
Nowadays, an artificial perception of beverages is in high demand as
working hours increase, and people depend on readymade food and beverages. An
assurance of quality, safety, and edibility of food and drink products is essential both
for food producers and consumers. Assurance of unique beverage taste and consistent
taste uniformity creates a distinct identity in the market. India is the second-largest tea-producing country in the world. Depending on geographic location, tea has a specific
flavor and aroma. Artificial Intelligence (AI) can contribute to the feature identification
and grading of tea species. The taste, aroma, and color are the three main attributes that
can be sensed with the help of E-tongue, E-nose and E-vision, and can be processed
further for automatic tea grading. Various potentiometric, voltammetric, Metal Oxide Semiconductor (MOS), and acoustic sensors are available, together with Principal Component Analysis (PCA). For tea analysis, various reviews are presented, such as a User Experience (UX) evaluation, a literature review, a bibliometric review, and a patent review. An in-depth analysis of artificial taste perception using machine learning is described in this chapter, which covers almost all possible approaches to the artificial perception of tea, along with various interesting facts.
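As a rough, hypothetical illustration of the kind of pipeline this abstract describes (not the chapter's actual code), the sketch below reduces simulated e-tongue/e-nose sensor readings with PCA and grades tea samples with a simple classifier; the number of sensor channels, the three grade labels, and the classifier choice are all assumptions.

```python
# Hypothetical sketch: PCA over e-tongue/e-nose sensor readings feeding a classifier.
# Sensor channels, class labels, and array shapes are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))        # 120 tea samples x 16 sensor channels (assumed)
y = rng.integers(0, 3, size=120)      # 3 hypothetical tea grades

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:5]))           # predicted grades for the first five samples
```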
Significance of Evolutionary Artificial Intelligence: A Detailed Overview of the Concepts, Techniques, and Applications
Page: 27-53 (27)
Author: Ashish Tripathi*, Rajnesh Singh, Arun Kumar Singh, Pragati Gupta, Siddharth Vats and Manoj Singhal
DOI: 10.2174/9789815179606124010004
Abstract
An evolutionary algorithm (EA) is known as a subset of evolutionary
computation. It is inspired by natural evolution and applies natural phenomena to
search for the optimal solution. Its parallel search capability and randomized nature
enable it to be effective and unique in solving different real-world problems in
comparison to existing classical optimization algorithms. The evolutionary algorithm
applies biological techniques such as selection, reproduction, and mutation to solve
complex problems. It starts with a random population of candidate solutions and
applies biological techniques to every generation until feasible solutions are obtained.
Only the fit solutions are allowed to survive and continue into further generations to determine the optimal solution. Artificial intelligence (AI) is the simulation of human intelligence in machines: machines are programmed to think like humans and imitate their actions. AI-based models are developed to provide new solutions to real-world problems. As real-world problems are very complex, the desired solutions for such problems need to be explored in complex, high-dimensional, and very large search spaces. In this context, nature-inspired and population-based evolutionary techniques are the most suitable approach for finding the optimal solution. These nature-inspired evolutionary techniques follow natural phenomena, which help to search for the desired optimal solution when the direction of the search is not known at the beginning. So, “Evolutionary Artificial Intelligence (EAI)” is the term that presents the combination of human intelligence and natural phenomena-based
solutions to real-world complex problems. This chapter covers an overview of
optimization techniques, artificial intelligence, and evolutionary computation in detail. A detailed discussion on evolutionary artificial intelligence, followed by applications of
evolutionary machine learning, is also presented. After that, the significance of evolutionary artificial intelligence in decision-making is discussed. Finally, a conclusion summarizing the chapter is given.
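To make the selection-reproduction-mutation loop described above concrete, here is a minimal, hedged sketch of a generic evolutionary algorithm on a toy one-dimensional objective; the fitness function, population size, and operators are illustrative choices, not the chapter's.

```python
# Minimal evolutionary loop: random population -> selection -> reproduction -> mutation.
# The toy objective and all parameters are illustrative assumptions.
import random

def fitness(x):                        # maximize -(x - 3)^2, optimum at x = 3
    return -(x - 3.0) ** 2

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection: fitter half survives
        children = [(random.choice(parents) + random.choice(parents)) / 2
                    for _ in range(pop_size - len(parents))]   # reproduction (crossover)
        children = [c + random.gauss(0, 1) if random.random() < mutation_rate else c
                    for c in children]                  # mutation: small perturbation
        population = parents + children
    return max(population, key=fitness)

print(evolve())                        # converges towards the optimum near 3.0
```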
Impact of Deep Learning on Natural Language Processing
Page: 54-75 (22)
Author: Arun Kumar Singh*, Ashish Tripathi, Sandeep Saxena, Pushpa Choudhary, Mahesh Kumar Singh and Arjun Singh
DOI: 10.2174/9789815179606124010005
Abstract
In the era of digitalization, tools such as Google Translate, Siri, and Alexa have at least one characteristic in common: they are all products of natural language
processing (NLP). “Natural Language” refers to a human language used for daily
communication, such as English, Hindi, Bengali, etc. Natural languages, as opposed to
artificial languages such as computer languages and mathematical nomenclature, have
evolved as they have been transmitted from generation to generation and are
challenging to explain with clear limits in the first instance. In natural language
processing, artificial intelligence (Singh et al., 2021), linguistics, information processing, and cognitive science are all related fields. NLP aims to use intelligent computer techniques to process human language, and NLP technologies such as voice recognition, language comprehension, and machine translation already exist. With few obvious exceptions, however, machine learning algorithms in NLP have often lacked sufficient capacity to consume massive amounts of training data. In addition, the algorithms, techniques, and infrastructural facilities have lacked sufficient strength.
Humans design features in traditional machine learning, and feature engineering is a
limitation that requires significant human expertise. Simultaneously, the accompanying
shallow algorithms lack representational capability and, as a result, the ability to build layers of reusable abstractions that would naturally disentangle the intricate factors underlying observable linguistic data. Deep learning overcomes the challenges mentioned
earlier by using deep, layered modelling architectures, often using neural networks and
the corresponding full-stack learning methods.
Deep learning has recently enhanced natural language processing by using artificial
neural networks based on biological brain systems and Backpropagation. Deep learning
approaches that use several processing layers to develop hierarchical data representations have produced cutting-edge results in various areas. This chapter introduces natural language processing (NLP) as a component of AI, followed by the history of NLP.
Distributed language representations are the core of NLP's deep learning revolution. After the survey, the limits of deep learning for NLP are examined, and the chapter proposes five scientific fields of NLP.
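As a hedged illustration of the "distributed language representations" mentioned above, the sketch below trains tiny word embeddings with gensim's Word2Vec on an invented toy corpus; the corpus and hyperparameters are assumptions for demonstration only.

```python
# Toy word-embedding example: words become dense vectors whose geometry encodes similarity.
from gensim.models import Word2Vec

corpus = [["natural", "language", "processing", "with", "deep", "learning"],
          ["deep", "learning", "improves", "machine", "translation"],
          ["language", "models", "learn", "word", "representations"]]

model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, epochs=50)
print(model.wv["language"][:5])                    # first 5 dimensions of a 50-d vector
print(model.wv.most_similar("learning", topn=2))   # nearest neighbours in embedding space
```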
A Review on Categorization of the Waste Using Transfer Learning
Page: 76-91 (16)
Author: Krantee M. Jamdaade, Mrutunjay Biswal* and Yash Niranjan Pitre
DOI: 10.2174/9789815179606124010006
Abstract
In this paper, we have aimed to develop a system that will help waste
collectors segregate different types of waste without needing much human intervention.
We have experimented with various deep learning and transfer learning techniques to
determine which model is more suited for this purpose. The dataset we used contained
8369 images that are classified into 9 classes: batteries, clothes, e-waste, glass, light
bulbs, metal, organic, paper, and plastic. We used models like VGG16, Inceptionv3,
ResNet50, MobileNet, NASNetMobile, and Xception. We have also conducted a
survey to know about the waste management habits of the respondents. Our
experiments showed that MobileNet gave the best accuracy of 93.17% and identified all the waste categories correctly, while the Xception model predicted images correctly with both the Adam and Adadelta optimizers.
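A plausible shape for the transfer-learning setup this abstract summarizes is sketched below: a pre-trained MobileNet backbone frozen as a feature extractor with a new nine-class head; the input size, head layers, and optimizer settings are assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: transfer learning with a frozen MobileNet backbone for 9 waste classes.
import tensorflow as tf

base = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                        input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                              # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(9, activation="softmax"), # batteries, clothes, e-waste, ...
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # image datasets assumed
```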
Automated Bird Species Identification using Audio Signal Processing and Neural Network
Page: 92-107 (16)
Author: Samruddhi Bhor*, Rutuja Ganage, Hrushikesh Pathade, Omkar Domb and Shilpa Khedkar
DOI: 10.2174/9789815179606124010007
Abstract
Many bird species are rare nowadays, and when they are found, they are
difficult to classify. For example, birds appear in different sizes, shapes, and colors, and may be viewed by a person from different angles. Although domain
specialists can classify birds manually, with increasing volumes of data, this becomes a
tiresome and time-consuming procedure. Using our approach, we can reliably and
quickly identify bird species. It is now feasible to track the number of birds as well as
their activity using automated bird species recognition and machine learning
algorithms. Convolutional neural networks (CNN) were chosen above standard
classifiers such as SVM, Random Forest, and SMACPY. For this system, we used the
“BirdCLEF 2021” dataset from Kaggle. The input dataset is preprocessed (framing, silence removal, and reconstruction) and then supplied as input to a convolutional neural network, followed by CNN modification, testing, and classification. To avoid overfitting, we add a dropout layer. Preprocessing uses the Librosa library, and Mel-Frequency Cepstral Coefficients (MFCCs) are extracted as distinct characteristics of the audio files. The MFCC summarizes the frequency distribution over the window size, allowing for both sound-frequency and temporal analysis. The result is then compared against the pre-trained data, the output is shown, and the birds are classified into their classes.
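The MFCC step described above might look like the following hedged sketch using Librosa; the file path, sampling rate, and number of coefficients are assumptions, and averaging over time is just one simple way to obtain a fixed-length vector for the CNN.

```python
# Illustrative MFCC extraction with Librosa; settings are assumed, not the chapter's.
import librosa
import numpy as np

def extract_mfcc(path, n_mfcc=40):
    signal, sr = librosa.load(path, sr=22050)       # load and resample the recording
    signal, _ = librosa.effects.trim(signal)        # simple silence removal
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc, axis=1)                    # summarize frames into one vector

# features = extract_mfcc("example_bird_call.ogg")  # hypothetical audio file
# The resulting fixed-length vector is what the CNN classifier would consume.
```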
Powering User Interface Design of Tourism Recommendation System with AI and ML
Page: 108-135 (28)
Author: P. M. Shelke, Suruchi Dedgaonkar* and R. N. Bhimanpallewar
DOI: 10.2174/9789815179606124010008
Abstract
The term “User Experience” (UX) refers to all elements of a customer's
relationship with a company, including its services, products, and overall customer
experience. Meeting the specific consumer demands and knowing their behavioral
patterns are the most important criteria for an efficient UX.
The backend that selects what to recommend and the frontend that gives the
recommendation are the two essential components of recommendation systems (RS).
An RS's user interface must deliver recommendations in a way that enables users to act on them. A user interface is required to present the
recommendations. When creating a recommender's user interface, the designers must
make several decisions. Understandability, transparency, assessability, trust, and
timeliness are five elements that the designer must address.
When it comes to organizing a trip, people are becoming increasingly accustomed to
using modern technology. Users are provided with a large quantity of data, which they
must evaluate in order to choose the offerings that are interesting or appropriate for
them. A customized tourist attractions recommender system is thought to be the most
efficient way for visitors to find tourist attractions. The recommender system compares
the acquired data to comparable and dissimilar data from other sources to provide a list
of recommended tourist sites.
These systems, which assist people in finding what they need on the internet, have been
a huge success, and they wouldn't be conceivable without an excellent user interface.
Data can now be easily segmented based on demographics, habits, trends, and a variety
of other factors, thanks to the application of machine learning and AI. The main
concept is to provide each user with better strategic decisions suited to their preferences, based
on their prior travel data and behavior. In this way, every facet of human behavior that
these systems supply and explore is then fed into algorithms, which develop
meaningful patterns. These patterns are then expressed through an interface and then
transformed into useful products and services that help businesses improve their user
experience.
Both AI and machine learning are extremely compatible and friendly with UX; they all
follow the same concepts and aims. However, there are many challenges to their
implementation. AI/ML engineers and UX designers should collaborate on a shared
platform to create a blueprint for a fantastic UX. The mix of qualitative and quantitative data is crucial when AI and machine learning are connected with UX. There is no
other technology that can improve UX as much as AI.
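As a rough, invented illustration of the matching idea behind such recommenders (comparing a user's profile with attraction data), the sketch below ranks attractions by cosine similarity to a preference vector; the attractions, feature axes, and scores are all hypothetical.

```python
# Toy content-based matching: rank attractions by similarity to a user preference vector.
import numpy as np

attractions = {"museum":   [0.9, 0.1, 0.2],   # [culture, nature, nightlife] (assumed axes)
               "beach":    [0.1, 0.8, 0.4],
               "old_town": [0.7, 0.2, 0.5]}
user_profile = np.array([0.8, 0.3, 0.3])      # inferred from prior travel data (assumed)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(attractions,
                key=lambda k: cosine(user_profile, np.array(attractions[k])),
                reverse=True)
print(ranked)                                  # attractions ordered by match to the user
```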
Exploring the Applications of Complex Adaptive Systems in the Real World: A Review
Page: 136-160 (25)
Author: Ajinkya Kunjir*
DOI: 10.2174/9789815179606124010009
Abstract
Complex Adaptive Systems (CAS) are gradually becoming the primary
modelling framework in the industry where autonomously evolving and self-adaptive
systems exist. There has been a quick escalation in the research results in the past few
years as the concept of CAS is emerging in the working sectors due to its capabilities
and vital properties to shape an organizational workflow. CAS exhibit self-organization, adaptability, modularity, and other properties beyond those of ordinary complex systems. Designing CAS models is a tedious task, as the intra-system components are composed of sub-components that interact with operations running across the system. Researchers in
engineering, healthcare, defense and military automation are extensively progressing in
adapting the CAS framework and conceptualizing the systems for increasing
performance efficiency. This paper primarily argues for the relevance and value of the
CAS approach and then presents a detailed discussion on the core concepts of CAS and
Agent-based modelling, highlighting the difference between them. Furthermore, the
paper provides a detailed review of the applications of CAS, such as Manufacturing
(Assembly systems), Defense and Analysis, Internet of Things (IoT), Distributed
Networks, Healthcare organizations and a few social-ecological systems (SES). Many
pieces of software agent-based modelling, tools for CAS development and data
visualizations are surveyed and discussed in the second half of the paper.
Insights into Deep Learning and Non-Deep Learning Techniques for Code Clone Detection
Page: 161-173 (13)
Author: Ajinkya Kunjir*
DOI: 10.2174/9789815179606124010010
Abstract
A source code clone is a type of bad smell caused by pieces of code that
have the same functional semantics, but the syntactical representation varies. In the
past few years, there have been several studies about code clone detection, steered by
numerous machine learning models, software techniques and other mathematical
measures. This paper aims to conduct an impartial comparative study of the existing
literature on Deep Learning and Non-Deep Learning techniques. Due to the lack of
work in studying the previous and the current state-of-the-art tools in code clone
detection, there is no concrete evidence found to underpin the use of Deep Learning
approaches in clone detection, except for a preference from the evolutionary point of
view. We will address and investigate a few research questions related to the intentions
of using DL techniques for code clone detection compared to those of non-DL
approaches (based on token, text, AST, metrics, and others). Furthermore, we will
discuss the challenges faced in the Deep Learning implementation for clone detection
and their potential resolutions if feasible. This review would help the audience
understand how different approaches aid the clone detection process along with their
performance measures, limitations, issues, and challenges.
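For readers unfamiliar with the non-DL, token-based approaches mentioned above, the following hedged sketch compares two code fragments by Jaccard similarity over their tokens; the tokenizer and the 0.7 threshold are simplifying assumptions, not any surveyed tool's settings.

```python
# Toy token-based clone check: Jaccard similarity between token sets of two fragments.
import re

def tokens(code):
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code))

def is_clone(a, b, threshold=0.7):
    ta, tb = tokens(a), tokens(b)
    jaccard = len(ta & tb) / len(ta | tb)
    return jaccard >= threshold, jaccard

print(is_clone("total = price * qty + tax",
               "sum_value = price * qty + tax"))   # same logic, one renamed variable
```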
Application Using Machine Learning to Predict Child’s Health
Page: 174-190 (17)
Author: Saurabh Kolapate*, Tejal Jadhav and Nikhita Mangaonkar
DOI: 10.2174/9789815179606124010011
Abstract
Nowadays, modern technologies are applied in many different areas of
medical science. One such piece of technology that helps in the diagnosis of numerous
illnesses and infections is the expert system. The Medical Expert System was created to
assist doctors in making diagnoses and to make it easier for the public to recognize
disorders. To diagnose the user, it treats facts and symptoms as queries or inputs. This
suggests that a medical expert system makes a diagnosis based on information about
the patient and knowledge of the diseases. Designing an Expert System for disease
diagnosis in youngsters up to the age of 16 is the main goal of this project. Python and Java are used as programming languages, along with the Flask web framework. The selected
symptoms supplied as the question will enable the expert system to correctly diagnose
these diseases. We believe such an expert system will be advantageous for disease diagnosis in pediatric cases and will also make diagnosis more affordable. Multiple tests are necessary for the diagnosis of disorders in children
because the symptoms might occasionally be deceiving. In these situations, an expert
system can aid in identifying and treating the true issue. In addition to a prescription, it
can diagnose the illness and offer information.
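A minimal, purely illustrative sketch of the symptom-to-diagnosis matching such an expert system performs is shown below; the diseases, symptom sets, and scoring rule are invented for demonstration and are not the project's actual knowledge base.

```python
# Toy rule base: score each disease by the fraction of its symptoms that were reported.
RULES = {
    "common cold": {"runny nose", "sneezing", "mild fever"},
    "chickenpox":  {"itchy rash", "fever", "fatigue"},
    "measles":     {"rash", "high fever", "cough"},
}

def diagnose(symptoms):
    """Return diseases ranked by how well the reported symptoms match their rules."""
    scores = {d: len(s & symptoms) / len(s) for d, s in RULES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(diagnose({"fever", "itchy rash"}))       # chickenpox ranks highest in this toy case
```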
Shifting from Red AI To Green AI
Page: 191-209 (19)
Author: Samruddhi Shetty, Nirmala Joshi* and Abhijit Banubakode
DOI: 10.2174/9789815179606124010012
Abstract
The 2020s may see amazing advances in AI; however, as far as the foundations and proficient utilization of energy are concerned, we have not reached an optimized level. As AI research advances, we should demand the best platforms,
methodologies, and tools for building AI models. Organizations heavily rely on AI for
various activities today, yet only 7% of businesses try to discover the facts behind the problem that AI leaves a large carbon footprint: the training process for several large AI models emits as much as 626,000 pounds of carbon dioxide, equivalent to the lifetime carbon footprint of nearly 3,640 iPhones. As we know,
algorithmic training is an endless process for AI-powered tools; as a result, growing reliance on AI only accelerates the degradation of the immediate environment. The awareness
among the people about how AI can impact the sustainability of the environment in the
near future is not considered while developing the solution. Apart from big players in
the market, cognizance of sustainable and responsible AI is still a big question mark.
The end user or the consumers should be aware of the services they use, whether they
are only accurate or efficient as well. The balance between both factors should be
maintained based on the context and requirement. The same is studied in the given
paper concerning the concept of Red AI and Green AI and how they should be
balanced considering the environmental sustainability factor.
Knowledge Representation in Artificial Intelligence - A Practical Approach
Page: 210-222 (13)
Author: Vandana C. Bagal*, Archana L. Rane, Debam Bhattacharya, Abhijeet Banubakode and Vishwanath S. Mahalle
DOI: 10.2174/9789815179606124010013
Abstract
In the realm of artificial intelligence, knowledge representation is a vital
aspect that enables effective information sharing and processing. Humans excel at
sharing trusted information, which is acquired through rigorous testing and validation,
resulting in what we commonly refer to as knowledge. The representation of
knowledge can take various forms, such as graphs, maps, or textual formats. With the
continuous evolution of the IT sector, the introduction of AI has simplified many tasks,
often surpassing human capabilities and effortlessly handling even the most basic
activities. However, understanding the concept of knowledge representation remains a
fundamental question. In this research paper, we delve into the basics of knowledge
representation to directly address this question. The understanding of knowledge
representation is best achieved by examining the role knowledge plays in specific case
studies or systems, which includes scientific reasoning and comprehension of the
world. By exploring the intricacies of knowledge representation, we aim to provide a
practical approach to its implementation in the field of artificial intelligence.
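One of the representation forms mentioned above (a graph) can be made concrete with a tiny, hypothetical sketch that stores knowledge as subject-predicate-object triples and retrieves everything asserted about a subject; the facts are invented for illustration.

```python
# Toy knowledge base of triples with a simple lookup.
from collections import defaultdict

facts = [("water", "boils_at", "100C"),
         ("penguin", "is_a", "bird"),
         ("bird", "can", "fly"),
         ("penguin", "cannot", "fly")]        # exception alongside the general rule

index = defaultdict(list)
for subj, pred, obj in facts:
    index[subj].append((pred, obj))

def about(subject):
    """Return every (predicate, object) pair asserted about a subject."""
    return index[subject]

print(about("penguin"))                        # [('is_a', 'bird'), ('cannot', 'fly')]
```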
File Content-based Malware Classification
Page: 223-240 (18)
Author: Mahendra Deore* and Chhaya S. Gosavi
DOI: 10.2174/9789815179606124010014
Abstract
Malicious Software (MALWARE) is a serious threat to system security the
moment any electronic gadget or ‘Thing’ is connected to the World Wide Web
(WWW). Malware, stealthy software used to collect sensitive information, gains access to private systems and can disrupt device operation. Thus, malware acts against the user's requirements and is a threat to all operating systems (OS), but more so to
Windows and Android systems, as those are the most widely used OS. Malware
developers try to invade the system by means of viruses, adware, spyware,
ransomware, botware, Trojans, etc. Developers try different anti-forensic techniques so
that malware cannot be detected or investigated. Malware developers typically play
‘peekaboo’ with the malware investigators. The result is that investigating such attacks
becomes more complex, and many times it fails because of immature forensics
methodology or a lack of appropriate tools. This chapter is the first step towards
analysing malware. The process started with malware dataset collection and
understanding the same. ML has two basic blocks, i.e., feature extraction and
classification. In the case of supervised learning, this feature plays a significant role.
This asks for understanding features and their effect on classification, which was a
major task. Two separate experimental processes were explored. The first one involved
extracting n-grams from the binary files using the kfNgram tool, and the second one
used a shell script to parse the assembly files for method calls to external API libraries.
Several supervised machine learning classifiers like Decision Trees, SVM, and Naive
Bayes were used to classify the malware family based on extracted features. We
proposed a method to classify malware into nine families as per the Kaggle dataset. It
analyses the n-gram of the malware file to generate the feature vector. Here, the value
of ’n’ in n-gram is selectable; presently, it is four. The objective was to extract highly
probable n-grams from the binary files after pre-processing, i.e., calculating the IG
parameter. The present threshold for selecting n-grams from the top-most list is five hundred. It has been observed that SVM and Decision Trees provide accuracy on the
scale of 98%. Nevertheless, there are chances of improvement as there is a probability
of selecting irrelevant n-grams due to the sequential selection of n-grams. This method
is considered a starting point for malware classification.
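The byte n-gram idea described above can be sketched as follows; the fake samples, family labels, and the use of raw frequency (rather than the information-gain ranking the chapter uses) are simplifying assumptions.

```python
# Hedged sketch: byte 4-gram counts as features for a decision-tree malware classifier.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def byte_ngrams(raw, n=4, top_k=500):
    """Count byte n-grams and keep the top_k most frequent as a sparse feature dict."""
    grams = Counter(raw[i:i + n] for i in range(len(raw) - n + 1))
    return {g.hex(): c for g, c in grams.most_common(top_k)}

# Toy stand-ins for binaries; real inputs would be the Kaggle malware files.
samples = [b"\x90\x90\xcc\xcc" * 50, b"\x01\x02\x03\x04" * 50]
labels = ["family_a", "family_b"]                       # hypothetical family names

X = DictVectorizer().fit_transform([byte_ngrams(s) for s in samples])
clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict(X))                                   # predicts the two toy families
```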
Enhancing Efficiency in Content-based Image Retrieval System Using Pre-trained Convolutional Neural Network Models
Page: 241-262 (22)
Author: Vishwanath S. Mahalle, Narendra M. Kandoi, Santosh B. Patil, Abhijit Banubakode and Vandana C. Bagal
DOI: 10.2174/9789815179606124010015
Abstract
Traditionally, image retrieval is done using a text-based approach. In the
text-based approach, the user must query metadata or textual information, such as
keywords, tags, or descriptions. The effectiveness and utility of this approach in the
digital realm for solving image retrieval problems are limited. We introduce an
innovative method that relies on visual content for image retrieval. Various visual
aspects of the image, including color, texture, shape, and more, are employed to
identify relevant images. The choice of the most suitable feature significantly
influences the system's performance. Convolutional Neural Network (CNN) is an
important machine learning model. Creating an efficient new CNN model requires
considerable time and computational resources. There are many pre-trained CNN
models that are already trained on large image datasets, such as ImageNet containing
millions of images. We can use these pre-trained CNN models by transferring the learned knowledge to solve our specific content-based image retrieval task.
In this chapter, we propose an efficient pre-trained CNN model for content-based
image retrieval (CBIR), namely the ResNet model. The experiment was conducted by
applying a pre-trained ResNet model on the Paris 6K and Oxford 5K datasets. The
performance of similar-image retrieval has been measured and compared with the state-of-the-art AlexNet model. It is found that the AlexNet architecture takes a longer time
to get more accurate results. The ResNet architecture does not need to fire all neurons
at every epoch. This significantly reduces training time and improves accuracy. In the
ResNet architecture, once the feature is extracted, it will not extract the feature again. It
will try to learn a new feature. To measure its performance, we used the mean average precision, obtaining 92.12% on Paris6K and 84.81% on Oxford5K. We also measured the mean precision at different ranks; for example, at the first rank we get a 100% result on Paris6K and 97.06% on Oxford5K.
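A hedged sketch of using a pre-trained ResNet as a retrieval feature extractor is given below: images are embedded and the database is ranked by cosine similarity to the query. The torchvision model choice, preprocessing, and the commented file paths are assumptions, not the chapter's exact setup.

```python
# Illustrative CBIR skeleton: pre-trained ResNet-50 embeddings + cosine similarity ranking.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()          # drop the classifier, keep 2048-d features
resnet.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def embed(path):
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(resnet(x), dim=1)

# query = embed("query.jpg")                             # hypothetical paths
# db = torch.cat([embed(p) for p in database_paths])
# scores = db @ query.T                                   # higher score = more similar
```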
Role of Artificial Intelligence (AI) in Solid Waste Management: A Synopsis
Page: 263-279 (17)
Author: Pankaj Bhattacharjee* and Ashok B. More
DOI: 10.2174/9789815179606124010016
Abstract
Rapid urbanisation and subsequent growth in population have brought about
a worldwide spike in the production of municipal solid waste. Poor management of waste collection, the improper allocation of transport vehicles due to a lack of technology usage, insufficient funds, and inappropriate management of human resources have made Municipal Solid Waste Management (MSWM) a big challenge nowadays, especially in developing countries. Scientists and researchers have been working towards developing cutting-edge technology in order to address this issue. Modern Artificial Intelligence (AI)
technologies are being studied for their potential usefulness in the Solid Waste
Management (SWM) industry. Waste management as a whole, including collection,
transportation, and sorting, may benefit substantially from the intelligent use of AI
algorithms. This article provides a brief overview of the way in which Machine
Learning (ML) algorithms are used in MSWM across the whole process, from the
initial creation of waste through its collection, transportation, and ultimate disposal.
Subject Index
Page: 280-285 (6)
Author: Abhijit Banubakode, Sunita Dhotre, Chhaya S. Gosavi, G. S. Mate, Nuzhat Faiz Shaikh and Sandhya Arora
DOI: 10.2174/9789815179606124010017
Introduction
Artificial Intelligence, Machine Learning and User Interface Design is a forward-thinking compilation of reviews that explores the intersection of Artificial Intelligence (AI), Machine Learning (ML) and User Interface (UI) design. The book showcases recent advancements, emerging trends and the transformative impact of these technologies on digital experiences and technologies. The editors have compiled 14 multidisciplinary topics contributed by over 40 experts, covering foundational concepts of AI and ML, and progressing through intricate discussions on recent algorithms and models. Case studies and practical applications illuminate theoretical concepts, providing readers with actionable insights. From neural network architectures to intuitive interface prototypes, the book covers the entire spectrum, ensuring a holistic understanding of the interplay between these domains. Use cases of AI and ML highlighted in the book include categorization and management of waste, taste perception of tea, bird species identification, content-based image retrieval, natural language processing, code clone detection, knowledge representation, tourism recommendation systems and solid waste management. Advances in Artificial Intelligence, Machine Learning and User Interface Design aims to inform a diverse readership, including computer science students, AI and ML software engineers, UI/UX designers, researchers, and tech enthusiasts.