
Artificial Intelligence Applications and Reconfigurable Architectures (eBook)

eBook Download: EPUB
2023
John Wiley & Sons (publisher)
9781119857877 (ISBN)

Reading and media samples

€173.99 incl. VAT
(CHF 169.95)
eBooks are sold by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
ARTIFICIAL INTELLIGENCE APPLICATIONS and RECONFIGURABLE ARCHITECTURES

The primary goal of this book is to present the design, implementation, and performance issues of AI applications and the suitability of the FPGA platform.

This book covers the features of modern Field Programmable Gate Arrays (FPGA) devices, design techniques, and successful implementations pertaining to AI applications. It describes various hardware options available for AI applications, key advantages of FPGAs, and contemporary FPGA ICs with software support. The focus is on exploiting parallelism offered by FPGA to meet heavy computation requirements of AI as complete hardware implementation or customized hardware accelerators. This is a comprehensive textbook on the subject covering a broad array of topics like technological platforms for the implementation of AI, capabilities of FPGA, suppliers' software tools and hardware boards, and discussion of implementations done by researchers to encourage the AI community to use and experiment with FPGA.

Readers will benefit from reading this book because

  • It serves all levels of students and researchers, as it deals with the basics and minute details of ecosystem development requirements for intelligent applications with reconfigurable architectures, whereas current competitors' books are more suitable for understanding only reconfigurable architectures.
  • It focuses on all aspects of machine learning accelerators for the design and development of intelligent applications and not on a single perspective such as only on reconfigurable architectures for IoT applications.
  • It is the best solution for researchers to understand how to design and develop various AI, deep learning, and machine learning applications on the FPGA platform.
  • It is the best solution for all types of learners to get complete knowledge of why reconfigurable architectures are important for implementing AI-ML applications with heavy computations.

Audience

Researchers, industrial experts, scientists, and postgraduate students who are working in the fields of computer engineering, electronics, and electrical engineering, especially those specializing in VLSI and embedded systems, FPGA, artificial intelligence, Internet of Things, and related multidisciplinary projects.

Anuradha Thakare, PhD, is Dean of International Relations and Professor in the Department of Computer Engineering at Pimpri Chinchwad College of Engineering, Pune, India. She has more than 22 years of experience in academics and research and has published more than 80 research articles in SCI journals as well as several books.

Sheetal Bhandari, PhD, received her degree in the area of reconfigurable computing. She is a postgraduate in electronics engineering from the University of Pune with a specialization in digital systems. She is working as a professor in the Department of Electronics and Telecommunication Engineering and Dean of Academics at Pimpri Chinchwad College of Engineering. Her research area concerns reconfigurable computing and embedded system design around FPGA HW-SW Co-Design.



1
Strategic Infrastructural Developments to Reinforce Reconfigurable Computing for Indigenous AI Applications


Deepti Khurge

Pimpri Chinchwad College of Engineering, Pune, India

Abstract


Artificial intelligence (AI) methodologies have the potential to reform many aspects of human life. The capabilities of AI are continuously evolving, and so is its enterprise adoption. Globally, governments and industries are actively considering where and how to leverage AI. Machine learning (ML) and AI are evolving at a faster rate than silicon can be developed. To exploit AI to its full potential, the appropriate AI infrastructure must be strategically planned. AI solutions will require appropriate hardware, software, and scalable processing models. The ecosystem of AI business applications can hence be seen as a whole.

The need for enterprises to comprehend the correct technology and infrastructure required to implement AI-powered solutions is growing by the day. Significant AI infrastructure elements include AI networking infrastructure, workloads, data preparation, data management and governance, training, and the Internet of Things (IoT). If the potential in the labor force, academic institutions, and governance standing is identified and leveraged effectively, commercial strategies can lead to an AI breakthrough.

Keywords: Artificial intelligence, reconfigurable computing, GPU, FPGA, ASIC, hardware accelerator

1.1 Introduction


Recently, reconfigurable computing has made significant advancements in the acceleration of AI applications. Reconfigurable computing is a computing architecture that combines the high performance of hardware with the flexibility of software. After production, devices are reprogrammed for specific applications based on their functionality requirements. It is a significant research field in computer architectures and software systems. Many algorithms may be considerably accelerated by putting their computationally intensive parts onto reconfigurable hardware. Artificial intelligence algorithms and applications have traditionally suffered from the lack of a clear implementation methodology. Researchers have used reconfigurable computing as one means of accelerating computationally intense and parallel algorithms. There is a need to explore the recent improvements in the tools and methodologies used in reconfigurable computing that strengthen its applicability to accelerating AI methodologies [1].
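The partitioning idea described above can be sketched as follows. This is a conceptual caricature with hypothetical names: control flow stays in software while the compute-intensive kernel is swapped out; in a real design the "accelerated" kernel would be an FPGA bitstream invoked through a vendor API, not a Python function.

```python
def dot_software(xs, ws):
    """The compute-intensive kernel, run as plain software."""
    return sum(x * w for x, w in zip(xs, ws))

def dot_accelerated(xs, ws):
    """Stand-in for the same kernel mapped to reconfigurable logic,
    where all multiplies would run in parallel, feeding an adder tree."""
    products = [x * w for x, w in zip(xs, ws)]  # parallel multipliers
    return sum(products)                        # adder tree

def classify(sample, weights, kernel):
    """Control flow remains in software; only the hot kernel is swapped."""
    return 1 if kernel(sample, weights) > 0 else 0

sample, weights = [1.0, -2.0, 0.5], [0.3, 0.1, 0.8]
# Same result regardless of which implementation of the kernel is plugged in.
assert classify(sample, weights, dot_software) == classify(sample, weights, dot_accelerated)
```

The point of the sketch is the interface: because the kernel is isolated behind a function boundary, the software/hardware split can be changed without touching the surrounding control flow.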

Contemporary AI applications in domains such as finance, healthcare, and the military are designed on the grounds of complex artificial neural networks (ANNs), involving heavy computation over huge data volumes, tight constraints, and recurring layer-to-layer communication [12]. As AI technology keeps advancing, AI algorithms are still developing, and one ANN algorithm can typically be adapted to only one application. Hence, ideal AI hardware must be able to adapt to changing and evolving algorithms, support diverse ANNs based on necessity, and switch between ANNs flexibly. Microchips built on reconfigurable computing can resourcefully support user-specific computational patterns, computing architectures, and memory hierarchies by allowing runtime configuration in these areas, thereby efficiently supporting diverse NNs with high-throughput computation and communication [9, 12].

1.2 Infrastructural Requirements for AI


As AI progresses from experimentation to adoption, it will necessitate a huge investment in computing resources and infrastructure. As systems become more complex and resource-intensive, system costs will rise. As AI's need for large volumes of data increases, data has to move to the cloud, so predominantly hybrid cloud solutions will be required to create a concrete infrastructural foundation. These solutions must ensure that the needs of businesses and workloads are met, support the increasing demands required to sustain AI, and do so at an appropriate cost. Organizations require adequate computing resources, including CPUs and GPUs, to effectively exploit the opportunities posed by AI. Basic AI operations can be handled in a CPU-based environment, but deep learning involves many big data sets and scalable machine learning algorithms, for which CPU-based processing may not be adequate. Compared to regular CPUs, GPUs can greatly expedite AI and ML operations. As computing capacity and density grow, demand for high-performance networks and storage will also expand. The following criteria deserve special attention when setting up an ecosystem for AI-based infrastructural development [4, 16].

a. Storage capacity or volume

As the volume of data grows, it is important for any infrastructure to scale storage. Many parameters influence how much storage an application uses, including how much AI it will use and whether it needs to make real-time predictions. For example, a healthcare application that employs AI algorithms to make real-time decisions on disease prediction may require all-flash storage, while for VLSI applications slower but much larger storage may suffice. System design must account for the volume of data generated by AI applications. When AI applications are exposed to more data, they make better predictions [4, 6, 7].
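The sizing exercise above can be made concrete with a back-of-the-envelope estimate. The workload numbers below are hypothetical, chosen only to illustrate how ingest rate, retention, and replication multiply:

```python
def required_storage_tb(samples_per_day: int,
                        bytes_per_sample: int,
                        retention_days: int,
                        replication: int = 3) -> float:
    """Estimate raw storage capacity needed, including replication overhead."""
    raw_bytes = samples_per_day * bytes_per_sample * retention_days * replication
    return raw_bytes / 1e12  # decimal terabytes

# Hypothetical imaging workload: 50,000 scans/day, 20 MB each, kept one year,
# stored with 3x replication.
print(required_storage_tb(50_000, 20_000_000, 365))  # -> 1095.0 TB
```

Even this crude model shows why storage planning dominates AI infrastructure budgets: each factor is multiplicative, so doubling retention or sample size doubles the footprint.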

b. Networking infrastructure

AI-based systems and algorithms, whether implemented on devices or in the cloud, are required to deal with huge amounts of data. Much of this infrastructure, with its large computer networks, is responsible for real-time data transmission. As AI strives to satisfy these demands, the load on networking infrastructure will keep rising. Such systems need high bandwidth and very low latency.

c. Security

Applications such as military and healthcare need AI to manage sensitive data, including patient records, financial information, personal data, and defence-related data. Compromise of such data is dangerous for any organization; a data attack or breach can have pronounced consequences. A comprehensive security strategy should therefore be adopted for such AI infrastructure.

d. Cost-effective solutions

As AI systems become more complicated, they become more expensive to run, making it important to maximize the performance of the infrastructure. Under such conditions, it is critical to keep the costs of these systems under control. With continued growth expected in the number of firms employing AI in the coming years, putting more strain on the network, server, and storage infrastructures that support this technology, cost-effective solutions are desired.

e. High computing capacity

Organizations require sufficient computing resources, such as CPUs and GPUs, to properly utilize the opportunities given by AI. Basic AI workloads can be handled in a CPU-based environment, but deep learning involves many big data sets and scalable neural network techniques, for which CPU-based computation may not be sufficient. Demand for high-performance networks and storage will increase, as will computing capacity and density [6, 7].

Hence, while delivering a high-performance ecosystem for AI-based systems, organizations should adopt strategic development methods to foster the needs of the infrastructure [3]. Robust security, large storage backups, high-performing computational models, and cost-effective solutions must go hand in hand to develop state-of-the-art technological solutions.

1.3 Categories in AI Hardware


The next important developmental phase in adopting AI solutions is strong hardware support. The hardware should be technologically accommodative to existing infrastructure as well as capable of establishing heuristic methodologies in terms of adaptation [5, 6].

The hardware used for AI today mainly consists of one or more of the following:

  • CPU — Central Processing Units
  • GPU — Graphics Processing Units
  • FPGA — Field Programmable Gate Arrays
  • ASIC — Application Specific Integrated Circuits

a. CPU

The CPU is the standard processor used in many devices. Compared to FPGAs and GPUs, the architecture of CPUs has a limited number of cores optimized for sequential serial processing. Arm® processors can be an exception to this because of their robust implementation of Single Instruction Multiple Data (SIMD) architecture, which allows for simultaneous operation on multiple data points, but their performance is still not comparable to GPUs or FPGAs.
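The SIMD idea mentioned above can be caricatured in plain Python. This is a conceptual model only, not real vector instructions: a scalar core consumes one element per step, while a 4-lane SIMD unit consumes a whole lane-group per step, cutting the step count by the lane width.

```python
def scalar_add(a, b):
    """One element per 'instruction': len(a) steps."""
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b, lanes=4):
    """'lanes' elements per 'instruction': roughly len(a)/lanes steps."""
    out = []
    for i in range(0, len(a), lanes):
        # One vector instruction operates on a whole lane-group at once.
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
    return out

a, b = list(range(8)), [10] * 8
assert scalar_add(a, b) == simd_add(a, b) == [10, 11, 12, 13, 14, 15, 16, 17]
```

The results are identical; the gain is in steps taken, which is why SIMD helps only when the workload actually contains data-level parallelism.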

The limited number of cores diminishes the effectiveness of a CPU processor to process the large amounts of data in parallel needed to properly run an AI algorithm. The architecture of FPGAs and GPUs is designed with the intensive parallel processing capabilities required for handling multiple tasks quickly and simultaneously. FPGA and GPU processors can execute an AI algorithm much more quickly than a CPU. This means that an AI application or neural network will learn and react several times faster on an FPGA or GPU compared to a CPU.

CPUs do offer some initial pricing advantages. When training small neural networks with a limited dataset, a CPU can be used, but the trade-off is time: the CPU-based system will run much more slowly than an FPGA- or GPU-based system. Another benefit of a CPU-based application is power consumption: compared to a GPU configuration, the CPU delivers better energy efficiency.

b. GPUs

Graphic...

Published (per publisher) 14.2.2023
Language English
Subject area Mathematics / Computer Science → Computer Science → Theory / Study
Keywords AI • AI/DL/ML applications using FPGA • AI on FPGA • Artificial Intelligence • Computer Science • DL on FPGA • Electrical & Electronics Engineering • FPGA based Hardware Accelerator for AI/ML/DL • HW-SW Co-Design for AI/ML/DL • Intelligent Systems & Agents • internet of things • IOT • ML on FPGA • Reconfigurable Architectures for AI • Reconfigurable Architectures for DL • Reconfigurable Architectures for ML • Reconfigurable Computing for AI/ML/DL
ISBN-13 9781119857877
