Artificial Intelligence (eBook)
194 pages
Azhar Sario Hungary (Publisher)
978-3-384-75559-9 (ISBN)
Discover the Future of AI with This Comprehensive Guide!
Hey there, if you're curious about artificial intelligence and want a book that breaks it down without overwhelming you, this is it. 'Artificial Intelligence: Principles, Techniques, and Frontiers (2025 Edition)' covers everything from the basics to cutting-edge stuff. It starts with symbolic AI foundations. You'll learn about intelligent agents. History and philosophy of AI are explained. Ethics and safety come right up front. The job market for AI skills is analyzed. Problem-solving uses search algorithms. Uninformed searches like BFS and DFS are detailed. Informed searches include A* and heuristics. Local search covers hill-climbing and genetic algorithms. Constraint satisfaction problems are defined. Backtracking and heuristics solve CSPs. Adversarial search uses minimax. Alpha-beta pruning optimizes it. Logic starts with propositional. First-order logic handles complex knowledge. Ontologies and knowledge graphs represent data. Neuro-symbolic AI bridges old and new. Uncertainty reasoning uses probability. Bayesian networks model beliefs. Exact and approximate inference compute answers. Time-based reasoning includes Markov chains. Hidden Markov models handle sequences. MDPs formalize decisions. Value and policy iteration solve them. Machine learning pipelines preprocess data. Paradigms like supervised and unsupervised are categorized. Evaluation metrics avoid pitfalls. Bias-variance tradeoff is balanced with regularization. Regression uses linear and logistic. k-NN classifies lazily. Naive Bayes handles text. Decision trees split data. Random forests ensemble them. Boosting with XGBoost improves accuracy. SVMs find margins. Unsupervised learning clusters with k-means. Hierarchical and DBSCAN group data. PCA reduces dimensions. Deep learning builds neural nets. Multilayer perceptrons learn patterns. Backpropagation trains them. Optimizers like Adam speed it up. Dropout prevents overfitting. CNNs process images. Architectures like AlexNet classify visuals. 
Object detection uses YOLO. RNNs and LSTMs sequence data. Transformers revolutionize NLP. Attention mechanisms focus key parts. BERT and GPT handle language. Reinforcement learning explores Q-learning. Deep Q-networks play games. Policy gradients optimize actions. Generative models include VAEs. GANs create fake data. Diffusion models generate images. Advanced topics cover RAG and MoE. Embodied AI agents interact physically. Alignment ensures safe AI.
What sets this book apart is its 2025 focus: other texts feel outdated, skipping real-world updates like the latest AI winters, governance laws, or breakthroughs in neuro-symbolic systems. It weaves ethics into every chapter rather than tacking it on as an add-on, and ties concepts to job skills like prompt engineering and MLOps that employers crave. Unlike rigid academic texts, it uses conversational case studies, from Klarna's agents to Netflix modeling, making complex ideas stick. Competitors miss this blend of theory, practice, and forward-thinking frontiers like embodied AI and pluralistic alignment, giving you a competitive edge in a fast-evolving field.
Dive deeper into chapters on deep learning frontiers, where transformers power chatbots and generative AI creates art. Real examples from 2025 reports ground the theory. Whether you're a student, pro, or hobbyist, this guide equips you to build, critique, and innovate in AI. It's packed with visuals, code tips, and exercises for hands-on learning.
This book is independently produced and has no affiliation with any board or organization. It is created under nominative fair use for educational purposes.
PART 1: Foundations of Symbolic AI
Foundations of Artificial Intelligence
Subtopic 1.1: History, Philosophy, and Evolution of AI
The entire discipline of Artificial Intelligence is built on one deceptively simple question: "Can machines think?" This question, posed by the field's godfather, Alan Turing, in his 1950 paper, "Computing Machinery and Intelligence," is the philosophical seed from which everything else has grown. Turing wasn't just an engineer; he was a philosopher. He sidestepped the sticky, unanswerable debate over what "thinking" or "intelligence" truly is by proposing a practical experiment: the "Turing Test," or Imitation Game. If a machine could converse with a human so convincingly that the human couldn't tell it was a machine, he argued, then for all practical purposes, it was intelligent.
This bold idea lit a fire. In 1956, a small group of researchers gathered for the "Dartmouth Summer Research Project on Artificial Intelligence." This workshop was a moment of incredible, almost naive, optimism. Researchers like Herbert Simon and Allen Newell, who already had a working program called the "Logic Theorist," believed that a machine capable of human-level intelligence was only a few years away.
Their approach, which dominated AI for decades, is known as symbolic reasoning, or the "neats." They believed intelligence was like logic or algebra—a set of formal rules that could be programmed into a machine. If you could just write enough rules and facts (e.g., "All men are mortal," "Socrates is a man"), the machine could deduce new truths ("Socrates is mortal"). This led to impressive early demos but soon hit a massive wall. The real world, it turns out, isn't a clean set of rules. It's messy, ambiguous, and requires common sense—something impossible to program with simple "if-then" statements.
This failure to meet sky-high expectations led to the first "AI Winter" in the 1970s. Funding evaporated. The field became a punchline. The promises of the 1950s looked like hubris.
The field saw a brief, commercial revival in the 1980s with the rise of expert systems. These were essentially massive decision-tree programs, championed again by the "neats." A company might build an expert system for medical diagnosis, feeding it thousands of rules from top doctors. These systems were commercially successful for a time but were incredibly "brittle." They were expensive to build, impossible to update, and would fail spectacularly if a problem fell even slightly outside their programmed rules. This led to the second, less severe "AI Winter."
All the while, a rival approach was bubbling in the background: the connectionist or "scruffy" approach. These researchers, inspired by the structure of the human brain, believed intelligence wasn't about rules but about connections. They built "neural networks"—mathematical models that could learn from data. This approach was computationally slow, and for decades, it was a niche academic pursuit.
Then, two things changed: the internet created a limitless ocean of data (text, images, video), and the rise of video games gave us a new kind of hardware: the Graphics Processing Unit (GPU). GPUs, designed for rendering 3D graphics, were perfect for the specific kind of math neural networks needed. This combination of "big data" and "big compute" ignited the deep learning revolution. This "scruffy," statistical approach won. It's the engine behind modern AI, from your phone's photo tagging to the massive generative AI models of today.
This brings us back to Turing. As the Stanford AI Index Report 2025 notes, modern Large Language Models (LLMs) can now pass the Turing Test with ease. But a new, deeper philosophical question has emerged: Does passing the test mean the AI "understands" what it's saying, or is it just a "stochastic parrot," an incredibly sophisticated predictive text engine? We have, in a sense, solved Turing's engineering problem, only to be confronted by the very philosophical one he tried to avoid. Understanding this history — this cycle of hype, winter, and breakthrough — is the most critical tool for preventing the next AI winter.
Subtopic 1.2: The Intelligent Agent Framework
To build an AI, you first need a shared language to describe what it is you're even trying to build. In modern AI, the single most important abstraction—the "mental model" for the entire field—is the concept of the rational agent.
An "agent" is simply anything that can be viewed as perceiving its environment and then acting upon that environment. It's a continuous, three-step cycle: Perceive -> Think -> Act.
A thermostat perceives the temperature (e.g., 67°F). It thinks (its "goal state" is 70°F). It acts (turn on the furnace).
A self-driving car perceives (cameras, LiDAR see a red light). It thinks (red light means stop; check for crossing pedestrians). It acts (apply the brakes).
A stock-trading bot perceives (market data, news feeds). It thinks (run predictive model). It acts (execute a "buy" order).
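The three examples above all share the same skeleton, which can be sketched as a minimal Perceive -> Think -> Act loop. The thermostat version below is illustrative only; all class and key names are assumptions, not from the book.

```python
# A toy sketch of the Perceive -> Think -> Act cycle, using the
# thermostat example from the text. Names are illustrative.

class ThermostatAgent:
    def __init__(self, goal_temp=70.0):
        self.goal_temp = goal_temp  # the agent's "goal state"

    def perceive(self, environment):
        # Sensor: read the current temperature from the environment.
        return environment["temperature"]

    def think(self, percept):
        # Decision rule: heat whenever we are below the goal state.
        return "turn_on_furnace" if percept < self.goal_temp else "idle"

    def act(self, action, environment):
        # Actuator: one furnace tick nudges the room temperature upward.
        if action == "turn_on_furnace":
            environment["temperature"] += 1.0

agent = ThermostatAgent(goal_temp=70.0)
room = {"temperature": 67.0}
while agent.think(agent.perceive(room)) != "idle":
    agent.act(agent.think(agent.perceive(room)), room)
print(room["temperature"])  # 70.0
```

The loop terminates when "thinking" about the current percept no longer produces an action — exactly the continuous cycle described above, just made concrete.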
This framework is powerful because it takes the "magic" out of AI and turns it into a concrete engineering problem. The primary tool used to define this problem is the PEAS framework (Performance Measure, Environment, Actuators, Sensors). Before writing a line of code, an engineer uses PEAS to create a formal "job description" for the agent.
Let's design an AI spam filter using PEAS:
Performance Measure: What defines success? Minimizing false positives (not blocking legitimate email) is more important than minimizing false negatives (letting some spam through). The agent must also be fast.
Environment: The user's email inbox, the email servers, the internet.
Actuators: What "limbs" does the agent have to act with? It can move an email to the "Spam" folder, or flag an email.
Sensors: How does the agent "see" the world? It can read the email's sender, header, body text, and embedded links.
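Because PEAS is a "job description," it can be written down before any model exists. A minimal sketch, assuming a hypothetical `PEAS` dataclass (not from the book), captures the spam-filter specification above as plain data:

```python
# Capture the PEAS "job description" as a data structure before any
# code is written. The PEAS class and field names are illustrative.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

spam_filter = PEAS(
    performance_measure=[
        "minimize false positives (never block legitimate email)",
        "minimize false negatives (let little spam through)",
        "classify quickly",
    ],
    environment=["user's email inbox", "email servers", "the internet"],
    actuators=["move email to the Spam folder", "flag an email"],
    sensors=["sender", "headers", "body text", "embedded links"],
)
```

Writing the specification this explicitly makes design trade-offs visible early — here, the ordering of the performance measures encodes that false positives cost more than false negatives.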
Once the problem is defined, we can choose the right kind of agent "brain" for the job. There is a clear taxonomy of agents, from simplest to most complex:
Simple Reflex Agents: These are pure "if-then" machines. They act only on the current percept. The thermostat is a classic example. If it's cold, turn on heat. It has no memory.
Model-Based Reflex Agents: These agents maintain an "internal state," or a "model" of the world. A self-driving car's agent needs this. It perceives a car in front, but its model tells it that even if the car is in a blind spot for a second, it's still there.
Goal-Based Agents: These agents have an explicit goal and plan to achieve it. Your GPS is a goal-based agent. Its goal is "Arrive at 123 Main Street." It uses a search algorithm to think ahead and find the best path.
Utility-Based Agents: This is the most advanced type. It's not just about achieving a goal; it's about achieving it in the best possible way. It acts to maximize an expected "utility." Your GPS is actually a utility-based agent: its goal isn't just "get there," but to maximize the utility of "fastest time" or "least fuel." It balances trade-offs.
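The difference between the last two agent types can be shown with a toy GPS example. The routes, weights, and numbers below are invented for illustration and are not from the book:

```python
# A toy contrast between goal-based and utility-based route choice.
# All route data and utility weights are made up for illustration.

routes = [
    {"name": "highway", "reaches_goal": True,  "minutes": 25, "fuel": 2.0},
    {"name": "surface", "reaches_goal": True,  "minutes": 40, "fuel": 1.2},
    {"name": "detour",  "reaches_goal": False, "minutes": 10, "fuel": 0.5},
]

def goal_based_choice(routes):
    # Goal-based: any route that achieves the goal is acceptable.
    return next(r for r in routes if r["reaches_goal"])

def utility(route, w_time=1.0, w_fuel=20.0):
    # Utility-based: trade off time against fuel; higher is better.
    return -(w_time * route["minutes"] + w_fuel * route["fuel"])

def utility_based_choice(routes):
    feasible = [r for r in routes if r["reaches_goal"]]
    return max(feasible, key=utility)

print(goal_based_choice(routes)["name"])     # highway (first feasible route)
print(utility_based_choice(routes)["name"])  # surface (better time/fuel trade-off)
```

Both agents refuse the infeasible detour, but only the utility-based agent weighs the trade-off between the two feasible routes — which is exactly why the GPS is "actually a utility-based agent."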
By 2025, this "agent" concept is no longer a textbook abstraction. It is a primary product category. The Klarna customer service assistant is a perfect real-world, utility-based agent. As of early 2025, it handles two-thirds of all support chats, has resolved 2.3 million conversations, and does the work of 700 full-time human agents. Its "utility" is to resolve customer issues accurately, politely, and efficiently. Similarly, DoorDash's voice agent for its "Dashers" (delivery drivers) is a model-based agent whose utility is speed, achieving a latency under 2.5 seconds to reduce driver frustration. The skill this subtopic teaches is "agent-oriented design"—the ability to look at a complex business problem and frame it as a formal PEAS problem.
Subtopic 1.3: Foundations of AI Ethics and Safety
For decades, ethics in engineering was a single lecture, an afterthought, or an appendix. The prevailing attitude was "move fast and break things." In the 2025 AI curriculum, that model is dead. Ethics and safety are now Pillar 1, Chapter 1. This shift isn't driven by philosophy but by harsh, commercial reality: an AI that is not ethical or safe is a broken product. An AI that is biased, privacy-violating, or dangerous is a failure of engineering.
The foundation of this topic rests on the five pillars of trustworthy AI:
Fairness (Bias): AI models are trained on data. If that data reflects historical, human biases, the AI will not only learn that bias but amplify it. The most cited case study is predictive policing. In cities like Chicago, software was trained on historical arrest data. But this data was already biased—certain racial and socioeconomic groups were over-policed. The AI learned this bias and then recommended sending more officers to those same neighborhoods, creating a toxic, automated feedback loop. The AI was 100% accurate at learning its biased training data. This real-world harm led to cities like Santa Cruz and Oakland banning the technology.
Robustness (Safety): This is the "security" pillar. It's...
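The predictive-policing feedback loop described under the Fairness pillar can be illustrated with a toy simulation. Everything here — district names, numbers, and the allocation rule — is invented to demonstrate the mechanism, not taken from any real deployment:

```python
# Toy simulation of the automated feedback loop: two districts with
# IDENTICAL true crime rates, but a biased arrest history. The model
# sends most patrols wherever past arrests were highest, and patrols
# generate the arrests that train next year's model.

true_crime_rate = {"district_a": 0.10, "district_b": 0.10}  # identical
arrests = {"district_a": 120.0, "district_b": 80.0}         # biased history

for year in range(5):
    hotspot = max(arrests, key=arrests.get)  # model's "hotspot" prediction
    patrols = {d: (80 if d == hotspot else 20) for d in arrests}
    # More patrols -> more recorded arrests, even with equal crime.
    arrests = {d: arrests[d] + patrols[d] * true_crime_rate[d]
               for d in arrests}

share_a = arrests["district_a"] / sum(arrests.values())
print(round(share_a, 3))  # 0.64 — up from the initial 0.6 share
```

District A started with 60% of recorded arrests purely because of biased history; after five years of "accurate" predictions its share has grown, and the hotspot label never flips. The model is faithfully learning its own output — the toxic loop the text describes.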
| Publication date (per publisher) | 15.11.2025 |
|---|---|
| Language | English |
| Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| Keywords | AI ethics • Artificial Intelligence • Deep learning • machine learning • Neural networks • Probabilistic Reasoning • symbolic AI |
| ISBN-10 | 3-384-75559-6 / 3384755596 |
| ISBN-13 | 978-3-384-75559-9 / 9783384755599 |
| Information per the Product Safety Regulation (GPSR) | |
Digital Rights Management: no DRM
This eBook contains no DRM or copy protection. Passing it on to third parties is nevertheless not legally permitted, because the purchase grants you the rights to personal use only.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The reflowable text adapts dynamically to the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need the free Adobe Digital Editions software.
eReader: This eBook can be read on (almost) all eBook readers. It is not compatible with the Amazon Kindle, however.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You will need a free app to do so.
Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.