GenAI on AWS (eBook)
588 pages
Wiley (publisher)
978-1-394-28129-9 (ISBN)
The definitive guide to leveraging AWS for generative AI
GenAI on AWS: A Practical Approach to Building Generative AI Applications on AWS is an essential guide for anyone looking to dive into the world of generative AI with the power of Amazon Web Services (AWS). Crafted by a team of experienced cloud and software engineers, this book offers a direct path to developing innovative AI applications. It lays down a hands-on roadmap filled with actionable strategies, enabling you to write secure, efficient, and reliable generative AI applications utilizing the latest AI capabilities on AWS.
This comprehensive guide starts with the basics, making it accessible to both novices and seasoned professionals. You'll explore the history of artificial intelligence, understand the fundamentals of machine learning, and get acquainted with deep learning concepts. It also demonstrates how to harness AWS's extensive suite of generative AI tools effectively. Through practical examples and detailed explanations, the book empowers you to bring your generative AI projects to life on the AWS platform.
In the book, you'll:
- Gain invaluable insights from practicing cloud and software engineers on developing cutting-edge generative AI applications using AWS
- Discover beginner-friendly introductions to AI and machine learning, coupled with advanced techniques for leveraging AWS's AI tools
- Learn from a resource that's ideal for a broad audience, from technical professionals like cloud engineers and software developers to non-technical business leaders looking to innovate with AI
Whether you're a cloud engineer, software developer, business leader, or simply an AI enthusiast, GenAI on AWS is your gateway to mastering generative AI development on AWS. Seize this opportunity for an enduring competitive advantage in the rapidly evolving field of AI. Embark on your journey to building practical, impactful AI applications by grabbing a copy today.
OLIVIER BERGERET is a technical leader at Amazon Web Services (AWS), working on database and analytics services. He has over 25 years of experience in data engineering and analytics. Since joining AWS in 2015, he has supported the launch of most of AWS's AI services, including Amazon SageMaker and AWS DeepRacer. He is a regular speaker and presenter at data, AI, and cloud events such as AWS re:Invent, AWS Summits, and third-party conferences.
ASIF ABBASI is a Principal Solutions Architect at AWS who has spent the last 20 years in various roles focused on data analytics, AI/ML, data warehouse strategy and technical implementations, J2EE enterprise application design and development, and project management. Asif is an AWS Certified Solutions Architect, a Hortonworks Certified Hadoop Professional and Administrator, a Certified Spark Developer, and a SAS Certified Predictive Modeler, as well as a Sun Certified Enterprise Architect and a Teradata Certified Master.
JOEL FARVAULT is a Principal Solutions Architect for Analytics at Amazon Web Services. He has 25 years of experience in enterprise architecture, data strategy, and analytics, mainly in the financial services industry. Joel has led data transformation projects in fraud analytics, business intelligence, and data governance. He also lectures on data analytics at IA School, Neoma Business School, and the École Supérieure de Génie Informatique (ESGI). Joel holds several AWS associate and specialty certifications.
Chapter 1
A Brief History of AI
Artificial Intelligence: “The conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
– The first definition of AI, from the Dartmouth Summer Research Project proposal, 1956
Defining artificial intelligence (AI) is not easy, as AI is a young discipline. AI is undoubtedly a bold, exciting new world, one where the lines between humans and machines blur, leading us to question the very nature of intelligence itself. More formally, AI is recognized as a collection of scientific disciplines, including mathematical logic, statistics, probability, computational neurobiology, and computer science, that aims to perform tasks commonly associated with human cognitive abilities, such as the ability to reason, discover meaning, generalize, or learn from past experience.
Interestingly, AI founders weren’t just computer scientists. They were philosophers, mathematicians, neuroscientists, logicians, and economists. Shaping the course of AI required them to integrate a wide range of problem-solving techniques. These tools spanned from formal logic and statistical models to artificial neural networks and even operations research. This multidisciplinary approach became the key to solving intricate problems that AI posed.
The Precursors of Mechanical, or “Formal,” Reasoning
One of these precursors was the French philosopher and theologian René Descartes (1596–1650), who in 1637 wrote his Discourse on the Method,1 one of the most influential works in the history of modern philosophy and an important contribution to the development of natural science. In Part V, he discussed the conditions required for an animal or a machine to count as an intelligent being, one of the earliest philosophical discussions of artificial intelligence. Later, in his Meditations on First Philosophy2 (1639), he envisioned the possibility of machines built like humans but possessing no mind.
In 1666, the German polymath Gottfried Wilhelm Leibniz (1646–1716) published a work entitled On the Combinatorial Art,3 in which he expressed his strong belief that all human reasoning can be represented mathematically and reduced to calculation. To support this vision, he conceptualized and described in his writings two instruments: a calculus ratiocinator, a theoretical universal method of logical calculation to make such calculations feasible, and a characteristica universalis, a universal formal language for expressing mathematical, scientific, and metaphysical concepts.
In the same era, Blaise Pascal (1623–1662) had already built, in 1641, one of the first working calculating machines, the “Pascaline,” shown in Figure 1.1, which could perform additions and subtractions. Inspired by this work, Leibniz built his “Stepped Reckoner” (1694), shown in Figure 1.2, a more sophisticated mechanical calculator that could not only add and subtract but also multiply and divide.
Figure 1.1: Drawing of the top view of the Pascaline and overview of its mechanism.
Oeuvres de Blaise Pascal, Chez Detune, La Haye, 1779. Public domain.
Figure 1.2: Drawing of the Stepped Reckoner.
Hermann Julius Meyer / Wikimedia Commons / Public domain.
After the initial developments in mechanical calculation, further advancements were made in the early nineteenth century. In 1822, English mathematician Charles Babbage (1791–1871) designed the Difference Engine, an automatic mechanical calculator intended to tabulate polynomial functions. The Difference Engine was conceived as a room-sized machine, but it was never constructed in Babbage’s lifetime.
Building on the foundations laid by Leibniz, George Boole published The Laws of Thought4 in 1854, presenting the concept that logical reasoning could be expressed mathematically through a system of equations. Now known as Boolean algebra, Boole’s breakthrough established the basis for computer programming languages. In 1879, the German mathematician Gottlob Frege (1848–1925) put forth his Begriffsschrift,5 which established a formal system for logic and mathematics. The pioneering work of Boole and Frege on formal logic laid essential groundwork for subsequent developments in computation and computer science.
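To see Boole’s idea in action, here is a minimal sketch (our illustration, not the book’s) that treats two laws of logic as equations over the values true and false and verifies them by checking every truth assignment:

```python
from itertools import product

# Boole's insight: logical reasoning as equations over two values.
# We verify De Morgan's laws by enumerating every truth assignment.
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))  # NOT(a AND b) = (NOT a) OR (NOT b)
    assert (not (a or b)) == ((not a) and (not b))  # NOT(a OR b) = (NOT a) AND (NOT b)

print("De Morgan's laws hold for all truth assignments.")
```

The same two-valued algebra underlies the circuit-level logic discussed in the next section.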
The Digital Computer Era
In 1936, mathematician Alan Turing (Figure 1.3) published his landmark paper “On Computable Numbers,”6 conceptually outlining a hypothetical universal machine capable of computing any solvable mathematical problem encoded symbolically. This theoretical Turing machine established a framework for designing computers using mathematical logic and introduced the foundational notion of an algorithm for programming sequences.
Figure 1.3: Alan Turing.
GL Archive / Alamy Stock Photo
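To make the notion of a universal machine concrete, the following minimal simulator is our own sketch (the state names and the increment program are invented for illustration): a finite rule table of (state, symbol) → (write, move, next state) entries drives a head over an unbounded tape.

```python
# Minimal Turing machine sketch; rule table and program are illustrative only.
def run_turing_machine(rules, tape, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as "_"
    head = 0
    while state != halt:
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example program: increment a unary number by appending one more "1".
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right across the 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 in the first blank cell
}
print(run_turing_machine(rules, "111"))  # -> 1111
```

Any program expressible as such a rule table can run on the same simulator, which is precisely the universality Turing described.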
Around the same time, Claude Shannon’s 1937 master’s thesis, A Symbolic Analysis of Relay and Switching Circuits,7 demonstrated Boolean algebra’s applicability to optimizing arrangements of electric relays, the core components of electromechanical telephone routing systems. Shannon thus paved the way for applying logical algebra to circuit design.
Concurrently in 1941, German engineer Konrad Zuse developed the world’s first programmable, fully functional computer, the Z3. Built from 2,400 electromechanical relays, the Z3 embodied the theoretical computer models proposed by Turing and leveraged Boolean logic per Shannon’s insights. Zuse’s pioneering creation was destroyed during World War II.
In the aftermath of World War II, mathematician John von Neumann made vital contributions to the emerging field of computer science. He consulted on the ENIAC (Figure 1.4), the pioneering programmable electronic digital computer built for the US Army during the war. In 1945, von Neumann authored a hugely influential report on the proposed EDVAC computer, outlining the “stored-program concept”: separating a computer’s task-specific programming from the general-purpose hardware that sequentially executes instructions. This conceptual distinction made it possible to load new software programs without reconfiguring physical components, allowing a single machine to readily perform different sequences of operations. Von Neumann’s architectural vision profoundly shaped modern computing as the standard von Neumann architecture.
Figure 1.4: ENIAC in Building 328 at the Ballistic Research Laboratory (BRL).
Ballistic Research Laboratory, 1947–1955, US Army.
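The stored-program concept itself can be sketched in a few lines (a toy machine of our own devising; the opcodes and memory layout are invented, not EDVAC’s): instructions and data sit in one memory, and a general-purpose fetch-decode-execute loop runs whatever program that memory happens to contain.

```python
# Toy stored-program machine: instructions and data share one memory.
# Opcodes and layout are invented for illustration; this is not EDVAC's design.
def run(memory):
    acc, pc = 0, 0                  # accumulator and program counter
    while True:
        op, arg = memory[pc]        # fetch and decode the next instruction
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 4-6 hold the data.
memory = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
          4: 2, 5: 3, 6: 0}
print(run(memory)[6])  # -> 5
```

Rewriting cells 0 through 3 reprograms the machine without touching any hardware, which is exactly the separation von Neumann’s report described.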
Following the construction of the first functioning computers, Alan Turing reflected on the capabilities afforded by these theoretically “universal machines.” In a 1948 report, he argued that a single such general-purpose device should suffice to carry out any computable task, rather than requiring infinitely many specialized machines. Developing this thread further in his landmark 1950 paper “Computing Machinery and Intelligence,”8 Turing considered whether machines might mimic human capacities. To examine this, he proposed what later became known as the Turing test, an “imitation game” evaluating whether people could distinguish a concealed computer from a human respondent based solely on typed conversation.
In the conventional form of the Turing test, there are three participants: a human interrogator who interacts, through written questions and answers, with two players, a computer and a person. The interrogator must determine which of the two is the human based solely on the text of the responses to his or her inquiries. Removing all other perceptual cues forces the interrogator to rely entirely on the linguistic content and reasoning within the typed replies when attempting to distinguish human from machine. This restriction highlights Turing’s interest in assessing intelligence as manifested in communication: if the two candidates’ responses are sufficiently comparable, the computer can be said to display capacities approaching human-level understanding and dialogue, at least conversationally. Passing this verbal imitation game was thus Turing’s proposed measure of demonstrated machine intelligence.
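The setup can be expressed as a short harness (a hypothetical sketch of ours, not Turing’s formulation; the respondent functions are stand-ins): the interrogator receives replies under anonymous labels and must guess which label hides the machine.

```python
import random

# Hypothetical imitation-game harness; the respondents below are stand-ins.
def imitation_game(questions, human_reply, machine_reply):
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)                 # hide who is behind each label
    labels = dict(zip("XY", players))
    for q in questions:                     # text-only exchange, no other cues
        for label, (_, reply) in labels.items():
            print(f"{label}: {reply(q)}")
    guess = input("Which player is the machine, X or Y? ").strip().upper()
    return labels[guess][0] == "machine"    # True if the interrogator was right

# Example with trivial stand-in respondents:
# imitation_game(["What is 2 + 2?"], lambda q: "Four, I believe.", lambda q: "4")
```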
Cybernetics and the Beginning of the Robotic Era
The term robot entered the lexicon in 1920 with Czech writer Karel Čapek’s play R.U.R. (Rossum’s Universal Robots), which featured artificial workers created to serve humans. The concept of robotics as a field of study then took shape in science fiction over the following decades. Notably, Isaac Asimov’s 1942 short story “Runaround” introduced his influential Three Laws of Robotics, ethical constraints programmed into his fictional robots to govern their behavior:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These simple yet profound guidelines shaped many subsequent...
| Publication date (per publisher) | March 19, 2025 |
|---|---|
| Series | Tech Today |
| Language | English |
| Subject area | Mathematics / Computer Science ► Computer Science ► Theory / Study |
| Keywords | ai on amazon web services • Ai on aws • amazon cloud ai • Anthropic • anthropic ai • artificial intelligence on amazon web services • artificial intelligence on aws • aws ai development • aws ai programming • aws cloud ai • cloud ai • developing ai apps on aws |
| ISBN-10 | 1-394-28129-3 / 1394281293 |
| ISBN-13 | 978-1-394-28129-9 / 9781394281299 |
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. At download time, the eBook is authorized to your personal Adobe ID; you can then read it only on devices that are also registered to that Adobe ID.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and general non-fiction. The text reflows dynamically to fit the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac.
eReader: This eBook can be read on (almost) all eBook readers; it is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: You can read this eBook on both Apple and Android devices.
Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.