Thinking With Machines (eBook)
321 pages
Wiley (publisher)
978-1-394-35906-6 (ISBN)
We are entering a brave new world, thanks to AI. We must shape this future to the advantage of everyone, and not just a select few.
Thinking with Machines: The Brave New World of AI tells the story of AI from its very beginnings through the eyes of Vasant Dhar, currently Robert A. Miller Professor at the Stern School of Business and Professor of Data Science at New York University. Professor Dhar lived through the invention of AI algorithms and their various permutations up to the present day. He brought AI to Wall Street in the 1990s and was the first to teach AI at NYU Stern. Through his story and the lessons it reveals, we learn about AI's progress and reversals, its promises and dangers, and what we need to address before the machine gets away from us. Thinking with Machines is essential reading for AI enthusiasts and learners at all levels seeking knowledge on the greatest technological advancement of our time.
VASANT DHAR is Robert A. Miller Professor at the Stern School of Business and Professor of Data Science at New York University. He is also the host of the Brave New World podcast and a frequent contributor to scientific journals and mainstream media about artificial intelligence.
PREFACE: GROUND ZERO: AI AT AN INFLECTION POINT
The governance of countries and businesses and the lives of individuals are changing at a blistering pace, driven by two powerful forces centered on Artificial Intelligence. It is important that these forces be properly harnessed; otherwise, things could go irretrievably wrong for most of us. There's a lot at stake for everyone.
Originally, I had intended to write a book about “AI for the Masses” following my 2018 TEDx talk, “When Should We Trust Machines?” I signed on with a top literary agent and wrote a 50‐page proposal. But with the advent of COVID, I delayed writing the book and instead started the Brave New World podcast, asking the question I felt was of pressing importance: “What is the world our future selves would like to inhabit?” To me, a lot of what had previously been viewed as science fiction was about to become science.
Over the next few years, I had incredibly enlightening conversations with some of the world's leading thinkers and leaders on Artificial Intelligence, social media, health, the environment, philosophy, law, economics, finance, arts, culture, education, and driverless cars. I came to recognize that the velocity and recklessness of the change driving how we live are orders of magnitude greater than they have ever been in human history. I believe that every citizen today must understand AI: how it works, its magic, its risks, and how to thrive in the age of AI. It is a critical time for us to be clear about the pressing questions about AI and its governance, and how we should be thinking about the answers. You see, I fear things could go irretrievably wrong if we don't ask the right questions or if we make the wrong choices.
First, AI has gone beyond a tipping point. Although the field is over six decades old, the emergence of conversational AI changed everything. For the first time, anyone can talk to the AI machine about anything in their native language. Everyone can relate to AI as part of their everyday lives. After being a niche application area for 60 years, AI has become a general‐purpose technology.
Like previous general‐purpose technologies such as electricity and the Internet, AI is similarly poised to transform our lives. But unlike these previous technologies, whose workings and limitations we understand fully and can thus control, we don't understand many things about the inner workings of current‐day AI machines. We can't be sure they are telling us the truth or how risky it is to follow their recommendations. This presents us with a major dilemma: When should we trust the AI and when shouldn't we trust it? Equally importantly, how can we control such machines that we don't fully understand? Can the controls be built into the technology or specified in terms of new laws for governing AI?
The second seismic force at play as I write this chapter is a dramatic change in the global political landscape, driven in large part by the US. The new world order will be shaped by technological innovation. Those who control AI will shape how America and the rest of the world will govern AI – or be governed by it. Future economies will be transformed by AI, and future wars will be waged increasingly by autonomous AI machines.
How big a deal is AI in geopolitics? As an indication of the importance of AI in the new Trump administration, the CEOs of America's largest AI companies were seated next to the president-elect's family members at his inauguration, closer than any of his cabinet nominees. The symbolism is a clear signal of how directly the economic power of such individuals will translate into political power and influence over public policy on AI.
To be clear, the wealthy have always exerted enormous power on political leaders. But the stakes this time around are orders of magnitude higher. Previous oligarchs typically controlled and lobbied for a single industry, like automobiles, oil, steel, or pharma. In contrast, because AI has become a general‐purpose technology that will pervade every industry, the people who control AI are likely to exert control across the entire industrial landscape, which will likely lead to even greater concentrations of wealth.
To put the difference in the scale of influence in perspective, the oil tycoon John D. Rockefeller was worth $1.4 billion when he died, and Andrew Carnegie was worth roughly $400 million at his peak. Adjusting for the average rate of inflation over the last 100 years, these would be in the low tens of billions today, which is dwarfed by the current fortunes of Elon Musk, Jeff Bezos, and dozens of emerging tycoons who are likely to be worth trillions by the time they die. This is more than the total national wealth of many countries on this planet.
What will these individuals do with that economic power? Perhaps more importantly, how will this wealth be used after their death?
WHO IS THE BOOK FOR?
I've written this book at several levels. First, for the general reader, to give them a strong understanding of AI: its history, how it works, and the issues that arise in its use. I endeavor to make everyone a savvy consumer of AI.
For those in the field who are already familiar with the pressing issues, I share a condensed version of what I have learned from my podcast conversations with leading thinkers about the critical questions surrounding AI and how to think about the answers. If you are already familiar with the questions, I provide a new perspective on how to frame and answer them.
For the AI researchers and practitioners who have entered the field in the last 10 to 15 years, I provide an annotated road map of how and why we got here, what we learned along the way, where AI is going, and its broader implications. To quote the reggae legend Bob Marley, “If you know your history, then you will know where I'm coming from.”
But there is a final group in particular for whom the implications of the shift to the current paradigm will remain especially profound: policy makers, both in government and industry. They will need to account for fundamental changes in how we work and think. For the first time, we are cohabiting the planet with a highly intelligent alien species, albeit of our creation, which is becoming smarter than us in many ways. It is the first machine ever designed without a purpose other than to be intelligent. Will we be able to craft policies and regulations that are guaranteed to be good for society, if we could even agree on what “good” means, or to control it in the way that we desire?
In fact, we are already increasingly hearing about constraints on our real-world activities that are attributed to non-human gatekeepers: “the computer won't allow it.” Such gatekeeping by machines, which removes humans from the loop completely, poses considerable risks to a society that is becoming increasingly dependent on them.
The cybernetician Norbert Wiener warned us about this “control problem” shortly after the term “AI” was invented. Wiener said that if we design machines that we are not able to control, then we had better be sure that such machines are what we truly desire and not a “colorful imitation of it.” What is new now is that machines can do the things that were only science fiction at the time when Wiener issued his warning.
I start by describing the scientific history of AI as I have experienced it over the last 45 years as a “pracademic,” whose research in AI has been driven by practical problems. We are still in the early “wild west” days of AI as a general‐purpose technology; at this stage, there are few restrictions on its use. For example, there are no laws governing AI's use of data, and little in the way of policy or laws around data protection or human safety. Indeed, even though there are tight restrictions on how researchers are permitted to experiment on human subjects, there are no restrictions on how AI can experiment on humans. And it isn't yet a crime for someone to use AI to splice your face onto a pornographic video, nor is there any legal way to curb an algorithm that causes widespread depression among teenagers on social media, or one that impersonates friends, doctors, or financial advisors with malintent. We will need laws for protection from AI so that it doesn't run amok in a way that causes major damage to individuals or society. I provide a foundation to think about such laws and the broader policy questions that we face in the era of thinking machines.
Equally importantly, we need to think seriously about the rights of AI entities, as strange as that may seem. What happens when an AI becomes sophisticated enough to govern a business or manage a foundation? What happens when intelligent AI agents that act on our behalf become independent, mobile, and potentially sentient? Should they have the same rights, say, as corporate entities that can engage in contracts and sue parties for breach of contract? Could AI agents be sued? Could AI machines be used for conflict resolution? In short, how much agency will future AI agents have?
The last point has profound implications for whether humans will have the right to create machines that continue to exist and act on behalf of their creators, even after their creators are gone. Until now, this hasn't been a possibility worth serious consideration. The closest thing to exercising posthumous influence by the...
| Publication date (per publisher) | 27.10.2025 |
|---|---|
| Foreword | Scott Galloway |
| Language | English |
| Subject area | Mathematics / Computer Science ► Computer Science ► Theory / Study |
| Keywords | ai audio • ai data governance • AI ethics • AI Future of Work • ai humans • ai low risk applications • AI Platforms • AI Policy • ai prediction • AI Privacy • ai reading • ai research • AI Responsibility • AI Technology • ai trust • ai vision |
| ISBN-10 | 1-394-35906-3 / 1394359063 |
| ISBN-13 | 978-1-394-35906-6 / 9781394359066 |
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook from misuse. The eBook is authorized against your personal Adobe ID at download, and you can then read it only on devices that are also registered to your Adobe ID.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to match the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook.