A Comprehensive Framework for Artificial Intelligence (eBook)
200 pages
Azhar Sario, Hungary (publisher)
978-3-384-72816-6 (ISBN)
Finally, a single book that connects every piece of the AI puzzle, from core principles to the state of the art.
This book offers a complete journey through the world of Artificial Intelligence. It starts with the absolute foundations, asking 'What is AI?' and exploring its philosophical roots. You will learn to think about intelligence through the core concept of agents acting in environments. From there, we build the first problem-solving tools using state-space search. The book covers both uninformed and informed search strategies, like A*. It then moves into multi-agent environments with adversarial search for games, using algorithms like Minimax. You'll learn how to represent problems declaratively with Constraint Satisfaction Problems.

The curriculum then makes a major shift into knowledge representation, introducing formal logic as a tool for reasoning. After mastering propositional and first-order logic, the book confronts a crucial real-world challenge: uncertainty. It introduces probability theory, Bayesian networks, and models for reasoning over time, like Hidden Markov Models.

The final parts dive into the machine learning revolution. You will get a thorough grounding in supervised learning, including regression, decision trees, and SVMs. You'll discover patterns in data with unsupervised learning techniques like k-Means and PCA. The journey culminates at the modern frontier, exploring the deep learning architectures that power today's AI. You'll understand Convolutional Neural Networks for vision, RNNs, and the powerful Transformer models for language.
What makes this book different is its synthesized and logical pathway. Instead of presenting a disconnected list of algorithms, it builds your understanding layer by layer, explaining why the field evolved as it did. It starts with a solid intellectual and computational bedrock, ensuring you grasp the core concepts of agency and systematic exploration before moving to more complex topics. It clearly explains the pivotal shifts in thinking: from the certainty of logic to the degrees of belief in probability, and from hand-crafted rules to learning directly from data. By tracing the progression from simple search to complex statistical methods, it reveals how the need to overcome computational complexity drove innovation. This narrative approach provides a deeper, more intuitive understanding of how these powerful ideas connect. The book doesn't just teach you how to build AI; it equips you to understand its impact and build it responsibly.
Disclaimer: The author has no affiliation with the board; this book is independently produced under nominative fair use.
Part II: Knowledge, Reasoning, and Planning
Adversarial Search and Games
4.1 Game Theory and Optimal Decisions in Games
Imagine you're playing a simple game of checkers. Every move you make isn't just about advancing your own pieces; it’s a calculated response to your opponent's last move and a strategic setup for your next one. You're constantly thinking, "If I move here, what will they do? And what will that mean for me?" This intricate dance of action, reaction, and prediction is the very heart of game theory in the world of artificial intelligence. At its core, a game, from an AI's perspective, is an "adversarial search problem." This sounds complex, but it simply means we're dealing with a puzzle where you have an opponent who is actively trying to work against you.
To formally understand any game, we need to break it down into its fundamental building blocks. First, we have a set of states. A state is simply a snapshot of the game at any given moment. In Tic-Tac-Toe, a state is the specific arrangement of X's and O's on the 3x3 grid. The initial state is the empty board. Next, we have one or more players, or agents, who participate in the game. For now, let’s focus on two-player games, like chess or checkers, where the competition is direct.
Each player has a set of available actions. An action is a legal move a player can make from a given state. If it's your turn in Tic-Tac-Toe and there are five empty squares, you have five possible actions. The game's rules are captured in what we call a transition model. This model is straightforward: it tells us what new state the game will be in after a player takes a specific action. If you place an X in the top-left corner, the transition model moves the game to a new state reflecting that change.
But how does a game end? That’s determined by the terminal test. This test checks if a state is a final one—for instance, if one player has three in a row in Tic-Tac-Toe, or if the board is completely full, resulting in a draw. When the game reaches one of these terminal states, we need a way to score the outcome. This is where the utility function, also known as a payoff function, comes in. This function assigns a numerical value to the end of the game from the perspective of a player. For an AI playing Tic-Tac-Toe, a win might be worth +10 points, a loss -10, and a draw 0. This score is the ultimate goal; the AI wants to make moves that lead to the highest possible score.
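To make these building blocks concrete, here is a minimal Python sketch of a formal game description for Tic-Tac-Toe. The function names (`actions`, `result`, `is_terminal`, `utility`) and the board encoding are illustrative choices matching this chapter's vocabulary, not code from the book:

```python
# Tic-Tac-Toe as an adversarial search problem.
# A state is a tuple of 9 cells, each 'X', 'O', or None; X moves first.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def initial_state():
    return (None,) * 9                       # the empty board

def player(state):
    """Whose turn it is: X moves when an odd number of cells is empty."""
    return 'X' if state.count(None) % 2 == 1 else 'O'

def actions(state):
    """Available actions: the indices of all empty cells."""
    return [i for i, cell in enumerate(state) if cell is None]

def result(state, move):
    """Transition model: the new state after the current player takes `move`."""
    board = list(state)
    board[move] = player(state)
    return tuple(board)

def winner(state):
    for a, b, c in LINES:
        if state[a] is not None and state[a] == state[b] == state[c]:
            return state[a]
    return None

def is_terminal(state):
    """Terminal test: three in a row, or a completely full board."""
    return winner(state) is not None or None not in state

def utility(state):
    """Payoff function, scored from X's perspective."""
    return {'X': +10, 'O': -10, None: 0}[winner(state)]
```

The ±10 scores and the tuple-of-cells encoding are arbitrary; any consistent scheme plays the same role.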
This framework is particularly well-suited for a special category called zero-sum games. The name says it all: the total payoff for all players in the game sums to zero. In a two-player zero-sum game, one player's gain is exactly the other player's loss. If I win and get +10 points, you must lose and get -10 points. Chess, Go, and checkers are classic examples. The players have completely opposite goals. There's no room for cooperation; your victory is built on your opponent's defeat.
So, in this competitive world, how does an AI make the "optimal" decision? It follows a powerful and deeply pessimistic philosophy known as the Minimax principle. This principle is built on a crucial assumption: your opponent is just as smart as you are and will play perfectly to maximize their own utility. Since it's a zero-sum game, maximizing their utility is the same as minimizing yours. Therefore, the Minimax principle advises an agent to choose the move that leads to the best possible outcome in the worst-case scenario. You assume that after you make your brilliant move, your opponent will make the absolute best counter-move for them, which is the absolute worst one for you. By preparing for that worst-case counter, you make a decision that is robust, safe, and ultimately, optimal against a perfect adversary. You are maximizing your own minimum possible score.
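One standard way to write this principle down is as a recursive value over states, built from the same pieces introduced above (Utility, Result, and Actions); this is the quantity the next section's algorithm computes:

$$
\text{Minimax}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } s \text{ is terminal,} \\
\max_{a \in \text{Actions}(s)} \text{Minimax}(\text{Result}(s, a)) & \text{if MAX moves in } s, \\
\min_{a \in \text{Actions}(s)} \text{Minimax}(\text{Result}(s, a)) & \text{if MIN moves in } s.
\end{cases}
$$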
4.2 The Minimax Algorithm
The Minimax principle gives us a powerful philosophy for playing games, but how does a computer actually put it into practice? The answer is the Minimax algorithm, a methodical procedure that turns this strategic idea into code. It's a way for an AI to explore the future of a game and work backward to find the very best move to make right now. The algorithm performs a complete, depth-first exploration of the game's possibilities, charting a course through what is known as the game tree.
Imagine the game tree as a massive map of every possible future. The root of the tree is the current state of the game board. From this root, branches sprout out, with each branch representing one possible legal move the current player can make. Each of these branches leads to a new node, which is the game state that results from that move. From each of those nodes, more branches sprout out representing the opponent's possible moves, and so on. This continues until we reach the "leaves" of the tree—the terminal states where the game is over.
The Minimax algorithm works in four logical steps (a code sketch follows the list):
Generate the Game Tree: The first step, conceptually, is to generate the entire game tree from the current position all the way down to every single possible terminal state. For a simple game like Tic-Tac-Toe, this is achievable. For a game like chess, this is a theoretical step, as the full tree is astronomically large.
Score the Terminal States: Once the tree is fully mapped out to its leaves, the algorithm applies the utility function to each terminal state. Every leaf node gets a score. For example, in a game, all the leaves that represent a win for our AI (the "MAX" player) get a positive score (like +10), all losses get a negative score (-10), and all draws get a 0.
Propagate Values Up the Tree: This is where the core logic of Minimax shines. The algorithm works its way back up from the leaves to the root, calculating a value for every single node in the tree. To do this, it treats player turns differently.
Nodes where it's the opponent's turn are called MIN nodes. The algorithm assumes the opponent will play perfectly to minimize our AI's score. Therefore, the value of a MIN node is the minimum of the values of its child nodes. If the opponent has three possible moves that lead to outcomes of +10, 0, and -10, the algorithm assumes they will choose the -10 outcome. So, the value of that MIN node becomes -10.
Nodes where it's our AI's turn are called MAX nodes. Our AI, of course, wants to maximize its score. So, the value of a MAX node is the maximum of the values of its child nodes. If our AI has three moves that lead to states valued at -10, 0, and +10, it will choose the +10 move, and the value of that MAX node becomes +10.
Choose the Best Move: This backward propagation continues all the way up the tree until we reach the top. At this point, the children of the root node (representing the immediate moves our AI can make) will each have a calculated minimax value. The AI simply looks at these values and chooses the move that leads to the child node with the highest score. That move is the certified optimal choice according to the Minimax principle.
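A compact Python sketch of these four steps, reusing the illustrative Tic-Tac-Toe helpers from Section 4.1 (`actions`, `result`, `is_terminal`, `utility`, `player`) with X as the MAX player; this is a minimal teaching version, not an optimized engine:

```python
def minimax_value(state):
    """Depth-first Minimax: score the leaves, then propagate values upward."""
    if is_terminal(state):
        return utility(state)            # step 2: apply the utility function
    values = [minimax_value(result(state, a)) for a in actions(state)]
    if player(state) == 'X':             # MAX node: take the maximum child value
        return max(values)
    return min(values)                   # MIN node: take the minimum child value

def best_move(state):
    """Step 4: choose the move leading to the child with the best value."""
    score = lambda a: minimax_value(result(state, a))
    moves = actions(state)
    return max(moves, key=score) if player(state) == 'X' else min(moves, key=score)
```

Called on the empty board, this explores the full Tic-Tac-Toe tree (a few hundred thousand nodes) and plays a perfect, never-losing game.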
This process introduces a powerful, if pessimistic, way of reasoning. The AI assumes the absolute worst at every step of its opponent's turn, ensuring that the final strategy is fortified against a flawless adversary. For a game like Tic-Tac-Toe, an AI using Minimax can explore the entire, relatively small game tree and play a perfect game, never losing.
However, the algorithm’s greatest strength is also its greatest weakness. The need to explore the entire game tree is computationally brutal. The number of states to explore grows exponentially with the number of moves. For chess, with an average of 35 possible moves from any position, the game tree becomes unimaginably vast after just a few turns. Exploring it completely would take longer than the age of the universe. This computational barrier is why the basic Minimax algorithm is infeasible for complex games, and it's the reason why more advanced techniques like alpha-beta pruning and heuristic evaluation functions were developed to make AI game-playing a practical reality.
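A quick back-of-the-envelope count makes the explosion tangible: with branching factor b and search depth d, the tree holds on the order of b^d nodes.

```python
# Approximate game-tree size b**d; b = 35 is the rough average
# branching factor the text quotes for chess.
b = 35
for d in (2, 4, 6, 10):
    print(f"depth {d:2d}: about {b**d:.1e} nodes")
# Roughly 1.2e3 nodes at depth 2, 1.5e6 at depth 4,
# 1.8e9 at depth 6, and 2.8e15 at depth 10.
```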
4.3 Alpha-Beta Pruning: Giving Minimax a Brain
Imagine the standard Minimax algorithm as a diligent but slightly naive librarian. Tasked with finding the best possible move in a game, this librarian meticulously explores every single book on every single shelf in a vast library, even the sections it already knows are irrelevant. It works, but it's incredibly slow. It gets the job done by brute force, checking every possibility, no matter how nonsensical it might seem midway through the search. This is where the sheer computational explosion of games like chess becomes a paralyzing problem. The number of possible game states is astronomical, and our librarian would be lost for centuries.
Enter Alpha-Beta pruning, the clever, street-smart assistant who revolutionizes the process. This isn't just a minor optimization; it's a fundamental shift from blind searching to intelligent reasoning. It allows the algorithm to develop a sense of foresight and to talk to itself during the search, asking a critical question: "Is it even worth my time to keep looking down this path?" More often than not, the answer is a resounding "no."
To understand this beautiful piece of logic, we need to meet its two key players: Alpha (α) and Beta (β). Think of them as two dynamic...
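Although the excerpt breaks off here, the mechanism it begins to describe is short enough to sketch: α tracks the best score the MAX player can already guarantee on the current path, β the best (lowest) score the MIN player can guarantee, and the moment α ≥ β the remaining moves at a node are skipped. A minimal version, again assuming the illustrative Tic-Tac-Toe helpers from earlier:

```python
import math

def alphabeta(state, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning.

    alpha: best value MAX is already guaranteed on this path.
    beta:  best (lowest) value MIN is already guaranteed.
    Once alpha >= beta, no remaining sibling can change the decision
    at the root, so the rest of the branch is pruned unexplored.
    """
    if is_terminal(state):
        return utility(state)
    if player(state) == 'X':                        # MAX node
        value = -math.inf
        for a in actions(state):
            value = max(value, alphabeta(result(state, a), alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                       # MIN above will avoid this line
                break
        return value
    else:                                           # MIN node
        value = math.inf
        for a in actions(state):
            value = min(value, alphabeta(result(state, a), alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                       # MAX above already has better
                break
        return value
```

Pruning never changes the value computed at the root; with good move ordering it can, in the best case, cut the effective branching factor to roughly its square root, about doubling the depth searchable in the same time.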
| | |
|---|---|
| Publication date (per publisher) | 11.10.2025 |
| Language | English |
| Subject area | Computer Science ► Theory / Studies ► Artificial Intelligence / Robotics |
| Keywords | Artificial Intelligence • Deep learning • Knowledge Representation • machine learning • Neural networks • Probabilistic Reasoning • Search algorithms |
| ISBN-10 | 3-384-72816-5 / 3384728165 |
| ISBN-13 | 978-3-384-72816-6 / 9783384728166 |
Digital Rights Management: no DRM
This eBook contains no DRM or copy protection. Passing it on to third parties is nevertheless not legally permitted, because purchasing it grants you only the right to personal use.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and general non-fiction. The reflowable text adapts dynamically to the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need the free Adobe Digital Editions software.
eReader: This eBook can be read on (almost) all eBook readers. It is not compatible with the Amazon Kindle, however.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a free app.