Machine Intelligence, Big Data Analytics, and IoT in Image Processing (eBook)
Discusses both theoretical and practical aspects of harnessing advanced technologies to develop applications such as drone-based surveillance, smart transportation, healthcare, farming solutions, and robotics for automation.
The concepts of machine intelligence, big data analytics, and the Internet of Things (IoT) continue to improve our lives through cutting-edge applications such as real-time disease detection, crop yield prediction, and smart parking. These technologies are transformative because they play an important role in smart healthcare, plant pathology, and the planning, design, and development of smart cities and villages. This book presents a cross-disciplinary perspective on the practical applications of machine intelligence, big data analytics, and IoT by compiling cutting-edge research and insights from researchers, academicians, and practitioners worldwide. It identifies and discusses advanced technologies, such as artificial intelligence, machine learning, IoT, image processing, network security, cloud computing, and sensors, that provide effective solutions to the lifestyle challenges faced by humankind.
Machine Intelligence, Big Data Analytics, and IoT in Image Processing is a significant addition to the body of knowledge on practical applications emerging from machine intelligence, big data analytics, and IoT. Each chapter deals with a specific area of application of these technologies. This deliberate coverage of a diversity of fields emphasizes that these technologies touch almost every contemporary aspect of real life, and it helps professionals in different sectors understand and exploit the strategic opportunities these technologies offer.
Audience
The book will be of interest to researchers and scientists in artificial intelligence who work on practical applications using machine learning, big data analytics, natural language processing, pattern recognition, and IoT to analyze images. Software developers, industry specialists, and policymakers in medicine, agriculture, smart city development, transportation, and related fields will also find this book exceedingly useful.
Ashok Kumar, PhD, is an assistant professor at Lovely Professional University, Phagwara, Punjab, India. He has 15+ years of teaching and research experience, filed 3 patents, and published many articles in international journals and conferences. His current areas of research interest include cloud computing, the Internet of Things, and mist computing.
Megha Bhushan, PhD, is an assistant professor at the School of Computing, DIT University, Dehradun, Uttarakhand, India. She has filed 4 patents and published many research articles in international journals and conferences. Her research interests include software quality, software reuse, ontologies, artificial intelligence, and expert systems.
Jose Galindo, PhD, is currently with the Department of Computer Languages and Systems, University of Seville, Spain. He has developed many tools, such as FaMa, FaMaDEB, FaMaOVM, TESALIA, and VIVID, and his research interests include recommender systems, software visualization, variability-intensive systems, and software product lines.
Lalit Garg, PhD, is a Senior Lecturer in the Department of Computer Information Systems, University of Malta, and an honorary lecturer at the University of Liverpool, UK. He has edited four books and published over 110 papers in refereed journals, conferences, and books. He has 12 patents, has delivered more than twenty keynote speeches in different countries, and has organized, chaired, or co-chaired many international conferences.
Yu-Chen Hu, PhD, is a distinguished professor in the Department of Computer Science and Information Management, Providence University, Taichung City, Taiwan. His research interests include image and signal processing, data compression, information hiding, information security, computer networks, and artificial neural networks.
1. Deep Learning Techniques Using Transfer Learning for Classification of Alzheimer’s Disease
Monika Sethi1, Sachin Ahuja2* and Puneet Bawa1
1 Chitkara University Institute of Engineering & Technology, Chitkara University, Punjab, India
2 ED-Engineering at Chandigarh University, Punjab, India
Abstract
Alzheimer’s disease (AD) is a severe disorder in which brain cells degenerate, causing progressive memory loss. Treatment choices for AD symptoms vary with the disease’s stage, and as the disease progresses, individuals require stage-specific healthcare. The majority of existing studies make predictions from a single data modality, utilizing magnetic resonance imaging (MRI), positron emission tomography (PET), or diffusion tensor imaging (DTI) alone. However, a thorough assessment of AD staging can be achieved by integrating these data modalities, and performance could be further enhanced by combining two or more of them. Moreover, deep learning networks trained from scratch have the following drawbacks: (a) they demand an enormous quantity of labeled training data, which is a problem in the medical field, where annotation must be done by physicians and can be very expensive; (b) they require a huge amount of computational resources; (c) they require tedious and careful tuning of numerous hyperparameters, and poor choices result in under- or overfitting and, in turn, degraded performance; and (d) with a limited medical training dataset, optimization of the cost function may get stuck in a local minimum. This chapter surveys the models used for AD diagnosis. Many researchers have fine-tuned pretrained networks instead of training from scratch, utilizing CaffeNet, GoogleNet, VGGNet-16, VGGNet-19, DenseNet with varying depths, Inception-V4, AlexNet, ResNet-18, ResNet-152, or even ensembles of transfer-learning models pretrained on general-purpose images, and these achieved better AD classification performance.
Keywords: Alzheimer’s disease, transfer learning, deep learning, parameter optimization
1.1. Introduction
In the United States, AD is the most widespread neurodegenerative condition and the sixth leading cause of death. The global disease burden of AD is expected to exceed $2 trillion by 2030, making preventative care essential [1]. Despite tremendous study and advances in clinical practice, only about half of AD patients are correctly diagnosed on the basis of medical indicators of the disease’s anatomy and progression. The existence of neurofibrillary tangles and amyloid plaques in histology is the most definitive evidence for AD. However, the presence of plaques is associated not with the onset of AD itself but rather with the accompanying sensory and neuronal damage. The disease is named after Dr. Alois Alzheimer, a psychiatrist and neuropathologist, who in 1906 studied the brain of a 51-year-old woman who had died of severe cognitive impairment [2]. Dr. Alzheimer investigated her brain and discovered clumps, which were in fact accumulations of proteins in and around the neurons that caused their loss. The key characteristics for identifying or confirming the presence of the illness are shrinkage of the hippocampus and cerebral cortex, as well as enlargement of the ventricles. The hippocampus is essential to learning and memory and also acts as a relay between the central nervous system and the body’s organs. AD eventually destroys the portion of the brain that controls heart and respiratory activity, resulting in death [3].
Unfortunately, AD does not yet have a definitive cure [4]. Instead, the objective is to slow the illness’s development, alleviate suffering, manage learning disabilities, and enhance quality of life. Clinical interventions, however, can significantly slow the progression of the associated psychiatric disorders if the diagnosis is made early. While more effective psychological therapies and, eventually, prevention or even a cure remain essential long-term goals, early diagnosis may result in better treatment outcomes for patients. Except in a few cases where genetic abnormalities can be identified, the precise cause of AD remains unknown.
The assessment of empirical biomarkers is necessary for the early treatment of the disease [5]. A number of noninvasive neuroimaging approaches, including computed tomography (CT), structural and functional MRI, and PET, have been explored for the prediction of AD. In CT, computer processing integrates a succession of X-ray images recorded from different angles around the body to produce cross-sectional pictures of bones, blood vessels, and soft tissues; plain X-rays do not give as much detail as CT imaging. An MRI scan employs a powerful magnet and radio waves to visualize structures within the brain, according to the National Institutes of Health. Healthcare physicians use MRI scans to examine a variety of conditions, from damaged ligaments to cancer. PET is a functional imaging method that employs radioactive tracers, termed radiotracers, to visualize and evaluate changes in cellular metabolism.
Medical imaging data are analyzed by medical experts, namely radiologists and clinicians [6]. Because human specialists can tire while evaluating images manually, computer-assisted approaches have proven beneficial for researchers as well as physicians, and machine learning (ML) approaches are helping to address this issue. Medical image analysis tasks use ML to discover or learn useful features that characterize the correlations or patterns present in data. Since relevant, task-specific features are traditionally crafted by human specialists on the basis of their domain expertise, it can be difficult for nonexperts to apply ML techniques to their own research in this traditional manner. A number of efforts have therefore focused on learning sparse representations from training samples or from predefined dictionaries, which may themselves be learned from the training dataset. Rooted in the principle of parsimony, sparse representation is used in many scientific fields; sparsity-inducing penalization and feature learning techniques have been shown to be effective for feature representation and selection in medical image analysis [7]. However, techniques such as sparse representation and dictionary learning have shallow architectures and are therefore limited in their ability to capture the meaningful patterns or regularities present in the data. Deep learning (DL) overcomes this issue by incorporating feature engineering into the learning phase itself [8]. Instead of relying on manually extracted features, DL takes simply a collection of data, with little preparation if required, and learns valuable representations automatically. Because responsibility for feature extraction and selection shifts to the model, even nonexperts in ML can now use DL effectively in their own research, especially for imaging analysis in the medical field [9].
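To make the sparse-representation idea above concrete, the following is a minimal sketch of dictionary learning on image patches using scikit-learn. The patch size, dictionary size, sparsity penalty, and the use of a bundled sample image in place of a medical scan are illustrative assumptions, not choices taken from the studies cited in this chapter.

```python
# Minimal sketch: learn a sparse dictionary from image patches
# (illustrative parameters; not from the cited studies).
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

# A bundled sample image stands in for a medical scan slice.
image = load_sample_image("china.jpg").mean(axis=2)  # convert to grayscale

# Extract small patches and flatten them into feature vectors.
patches = extract_patches_2d(image, patch_size=(8, 8),
                             max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)

# Learn a dictionary of 64 atoms; each patch is then approximated by a
# sparse combination of a few atoms (the parsimony principle).
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   batch_size=256, random_state=0)
codes = dico.fit_transform(X)

# Most coefficients are (near-)zero: the representation is sparse.
print("mean nonzero coefficients per patch:",
      (np.abs(codes) > 1e-10).sum(axis=1).mean())
```

The learned atoms play the role of the handcrafted features discussed above: each patch is summarized by which few atoms it activates, which is exactly the kind of shallow representation that DL later subsumes.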
However, DL is afflicted by data dependency, one of its most significant problems. Compared with standard ML approaches, DL relies on a substantially larger quantity of training data in order to discover the hidden patterns in the data: there is a close relationship between the size of a model, in terms of its number of layers, and the volume of data it requires.
Transfer learning (TL) reduces this dependence on huge amounts of training data, which inspires us to use it to combat the problem of inadequate training data. The concept is driven by the idea that people strategically reuse past knowledge to solve new problems or accomplish desired results. The fundamental reasoning underlying this idea in ML was presented during a Neural Information Processing Systems (NIPS-95) workshop on “Learning to Learn,” which emphasized the need for lifelong ML approaches that store and apply previously acquired knowledge [10]. TL approaches have since shown promising results in a variety of practical applications. In Verma et al. [11], researchers utilized TL methods to transfer text data across domains. Structural correspondence learning for natural language processing problems was presented in Nalavade et al. [12]. Researchers have also employed several convolutional neural network (CNN)-based TL models to detect AD [13].
This chapter presents the results of several TL techniques employed by previous researchers to identify AD.
1.2. Transfer Learning Techniques
TL is an ML research subject that focuses on retaining knowledge gained while solving one problem and adapting it to a different but related problem. For instance, knowledge acquired when learning to identify trucks may be used when aiming to classify other four-wheeled vehicles. In a CNN, this may be implemented in one of two ways: either the pretrained weights of the CNN layers are fine-tuned together with a new classification output layer, or “off-the-shelf CNN features” are used, whereby the CNN serves as a generic feature extractor whose outputs are classified separately, as the sketch below illustrates.
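As an illustration of these two options, the following sketch uses PyTorch and torchvision to adapt an ImageNet-pretrained ResNet-18. The framework choice and the hypothetical 3-class AD staging task are assumptions for illustration; the chapter does not prescribe a particular framework or label set.

```python
# Minimal sketch of the two transfer-learning options, using PyTorch/
# torchvision (framework and 3-class setup are assumptions).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # hypothetical staging labels, e.g., normal / MCI / AD

# Option 1: fine-tuning. Load ImageNet weights, freeze the pretrained
# backbone, and replace the classification layer with a trainable head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# (Unfreezing later layers as well would fine-tune more of the network.)

# Option 2: off-the-shelf features. Drop the classification layer and
# use the CNN as a fixed feature extractor for a separate classifier.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                        # emit 512-dim features
backbone.eval()
with torch.no_grad():
    dummy_batch = torch.randn(4, 3, 224, 224)      # stand-in for MRI slices
    features = backbone(dummy_batch)               # shape: (4, 512)
print(features.shape)  # these features can feed an SVM, k-NN, etc.
```

In practice, single-channel MRI slices would need to be replicated to three channels, or the first convolution replaced, to match the ImageNet input format the pretrained weights expect.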
Several domains of knowledge engineering, such as classification, prediction, and segmentation, have already seen significant results from ML and data mining techniques [14]. Many ML...
| Published | February 14, 2023 |
|---|---|
| Language | English |
| Format | EPUB (Adobe DRM) |
| Subject area | Mathematics / Computer Science ► Computer Science ► Theory / Studies |
| Keywords | Artificial Intelligence • Big Data Analytics • Computer Science • Disease diagnosis • Electrical & Electronics Engineering • Healthcare • Image Processing • intelligent transportation • internet of things • machine intelligence • machine learning • Medical Imaging • Plant disease • robot • Smart City • Smart Village • telemedicine |
| ISBN-13 | 9781119865490 |