WFMO (eBook)

Wetware's Foreclosing Myopic Optimization: Audit, Prognosis, and the Lesser Gamble

Ihor Ivliev (Author)

eBook Download: EPUB
2025 | 1st edition
290 pages
Publishdrive (publisher)
978-0-00-097460-0 (ISBN)
€11.00 incl. VAT
(CHF 10.75)

The Central Argument


This work proceeds from a hard assumption: that most strategies proposed for systemic safety are not just incomplete but structurally incoherent under pressure.


It argues that the game of software-level control is not merely difficult, but already forfeited. This is not declared - it is derived, via a three-front autopsy:


1. Institutional: The will to constrain has collapsed under the weight of myopic incentives.


2. Technical: Our tools are already being overrun by the systems they aim to contain.


3. Formal: The math never promised perfect control in the first place.


What remains is a single viable lever: the application of direct, material friction to the system's physical inputs (compute, capital, and energy) before the systems they fuel optimize you out of the loop.



A Warning on Method


Do not grant trust to this document by default. Treat it as a hostile input stream until it earns your attention.


Large Language Models were used extensively for deep internet research and in the construction of this document - not as trusted collaborators, but as adversarial instruments under constant suspicion. For the author (a non-expert and non-native English speaker) this was a lesser evil. Their known failure modes (hallucination, mimicry, sycophancy, and others) were not treated as rare bugs, but as predictable pressures to resist. These pathologies made passive reliance a tactical impossibility. That pressure shaped the output - not despite their flaws, but through direct confrontation with them.


This is a personal analytical exercise, not authoritative guidance. The author claims no institutional or professional authority, and all conclusions should be independently verified. No claim here is final. Each is a starting point for your own critical verification.

Part 3: The Accelerant: A Three-Front War on Human Control


Subpart 3.1: The Epistemic Front: The Degradation of the Human Mind


The Observable Crisis: The Degradation of the Information Commons

AI’s role as an accelerant to systemic myopia is most demonstrable through its impact on our collective epistemology. As "plausibility engines", frontier models have collapsed the economics of information warfare, reducing the cost to generate convincing falsehoods by orders of magnitude. Formally, this degradation is a direct consequence of multiple, interacting Information Hazards (Bostrom, "Information Hazards", 2011). The creation of the models themselves constitutes a Development Hazard, unleashing a new class of powerful, dual-use cognitive tools. The very concepts they unlock function as Idea Hazards, providing novel pathways for misuse. Critically, the intense public and security focus on these models creates a potent Attention Hazard, signaling to adversaries that this is a uniquely fruitful domain for developing attack vectors. This is compounded by the fact that the underlying information is inherently dual-use: the same research that informs defenses simultaneously provides a near-perfect offensive playbook, creating a state of permanent informational asymmetry that favors the attacker (Lewis et al., "Information Hazards in Biotechnology", 2019). In stark contrast, the professional verification layer is not scaling to meet this threat - it is financially collapsing, with its capacity to debunk misinformation now in a state of structural contraction.

This fundamental and worsening economic imbalance is no longer a theoretical risk. It has produced a measurable degradation of the information commons, an erosion of shared reality now confirmed by empirical data.

The end state of this process was articulated with chilling clarity by Geoffrey Hinton, who warned of the creation of a world where many will "not be able to know what is true anymore".

The resulting epistemic failure is now quantitatively confirmed. In a methodologically transparent study commissioned by the media-ratings firm NewsGuard and conducted by the pollster YouGov in June 2025, a nationally representative sample of 1,000 American adults was surveyed online. Respondents were presented with three of the most viral, high-impact false narratives circulating that month and asked to assess their veracity as "True", "False", or "Not Sure"; they were debriefed with the correct information afterward so that the survey itself did not spread the misinformation further.

The inaugural "Reality Gap Index", released on July 1, 2025, revealed the stark results of this polling: nearly half of the American public (49%) registered belief in at least one of the falsehoods. This headline figure, however, understates the full scope of the epistemic vulnerability. The study found that only 7% of respondents could correctly identify all three claims as false, while a vast majority, 74%, expressed uncertainty about the truth of at least one claim, revealing a state of profound public confusion.
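
To make the relationship between these aggregate figures concrete, the short Python sketch below shows how respondent-level answers to the three claims roll up into the three headline measures. The respondent data is invented for illustration; it is not the NewsGuard/YouGov dataset, and the published index may be computed differently.

    # Illustrative sketch only: invented answers, not NewsGuard/YouGov data.
    # Each respondent rates three false claims as "True", "False", or "Not sure".
    respondents = [
        ("True", "False", "Not sure"),
        ("False", "False", "False"),
        ("Not sure", "True", "False"),
        ("False", "Not sure", "False"),
    ]

    n = len(respondents)
    believed_at_least_one = sum("True" in r for r in respondents)
    all_three_correct = sum(all(a == "False" for a in r) for r in respondents)
    unsure_at_least_one = sum("Not sure" in r for r in respondents)

    print(f"Believed at least one falsehood: {believed_at_least_one / n:.0%}")
    print(f"Identified all three as false:   {all_three_correct / n:.0%}")
    print(f"Unsure about at least one claim: {unsure_at_least_one / n:.0%}")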

The specific claims tested were not minor rumors but significant, politically charged narratives selected for their potential for harm. An analysis of the individual responses reveals the scale of this confusion (the implied "Not Sure" shares are worked out in the sketch after the list):

● Disinformation about protesters being armed with bricks in Los Angeles was believed by 23.48% of respondents, while only 33.20% correctly identified it as false.
● Fabricated claims of senators misspending over $800,000 in taxpayer money in Ukraine were believed by 26.88%, with only 16.53% correctly identifying the claim as false.
● The persistent, baseless narrative of a "white genocide" in South Africa was believed by 26.07% of respondents, while 40.47% correctly identified it as false.
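
As a quick check on the per-claim figures, the implied "Not Sure" share for each claim can be recovered by subtraction. This assumes every respondent chose exactly one of the three offered options; the residuals below are inferred from the published percentages, not figures reported by NewsGuard.

    # Hedged arithmetic: implied "Not Sure" share per claim, assuming the three
    # response options ("True", "False", "Not Sure") are exhaustive.
    claims = {
        "Protesters armed with bricks in LA": (23.48, 33.20),  # (% believed, % correctly called false)
        "Senators misspent >$800,000 in Ukraine": (26.88, 16.53),
        "'White genocide' in South Africa": (26.07, 40.47),
    }

    for claim, (believed, called_false) in claims.items():
        not_sure = 100.0 - believed - called_false
        print(f"{claim}: ~{not_sure:.2f}% implied 'Not Sure'")
    # Prints roughly 43.32%, 56.59%, and 33.46% respectively.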

The demonstrated success of these hoaxes in convincing a substantial portion of the public provides clear, quantitative evidence that the information ecosystem's defenses are being systematically breached.

The Mechanism of Cognitive Atrophy: From Neural Under-Engagement to Epistemic Erosion

The direct epistemic crisis is coupled with a well-documented mechanism of cognitive degradation, a phenomenon formally recognized as Cognitive Atrophy via AI-Mediated Offloading. This is not a speculative risk but an observed process grounded in established theory such as the neuronal recycling hypothesis, which posits that the brain weakens neural circuits that are systematically bypassed. Researchers warn that this process can "deprive the user of routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared" (Shukla et al., "De-skilling, Cognitive Offloading, and Misplaced Responsibilities: Potential Ironies of AI-Assisted Design", 2025). The entire mechanism is best understood through the framework of "cognitive debt": the deferral of present mental effort to an AI, which accumulates long-term costs in the form of diminished critical inquiry, impaired memory, and a reduced capacity for independent creative thought.

The neurophysiological basis for cognitive debt is directly measurable. A large-scale 2025 EEG study, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" by Kosmyna et al., provides clear evidence of these effects. The study found that using an LLM for essay writing resulted in a significant reduction in brain connectivity compared to writing without assistance. Specifically, the study reports that in the alpha band the Brain-only group exhibited 79 significant dDTF connections compared to 42 in the LLM-assisted group - an absolute drop of 37 connections, which corresponds to a 46.8% reduction in alpha-band connectivity when offloading to the language model ((79 − 42) / 79 × 100% ≈ 46.8%). Likewise, in the theta band the Brain-only condition showed 65 connections versus 29 for the LLM group, a reduction of 36 connections, or 55.4% ((65 − 29) / 65 × 100% ≈ 55.4%). These quantitative declines - nearly half of the neural pathways associated with creative semantic processing (alpha) and working-memory/executive control (theta) being "powered down" - provide direct neurophysiological evidence of what the authors term "cognitive debt" arising from AI-mediated offloading. Furthermore, the analysis showed up to a 55% reduction in the total dDTF (Dynamic Directed Transfer Function) connectivity metric for the LLM group compared to the unaided writing group, particularly in low-frequency networks responsible for semantic processing and monitoring.

This "powering down" of the brain correlated with a profound memory deficit. While a majority of LLM users struggled with recall, the problem ran deeper than that headline figure suggests. In the first session, 83.3% (15 out of 18) of participants in the LLM group "failed to provide a correct quotation" from their own essay. More strikingly, not one participant in the LLM group was able to produce a single correct quote when asked, in stark contrast to the Search Engine and Brain-only groups, where only a small fraction had the same issue.

Most critically, the study provided direct evidence for the accumulation of cognitive debt, where reliance on the LLM led to measurable cognitive consequences. The paper describes how habitual LLM users (participants who used the tool for three sessions), when later required to write without assistance, exhibited persistent neural "under-engagement". In this fourth session, these "LLM-to-Brain" participants displayed "weaker neural connectivity and under-engagement of alpha and beta networks". Their brains failed to develop the robust neural consolidation networks present in participants who had consistently practiced without AI assistance, pointing to a "Cognitive 'Deficiency'" that hinders the development of independent thinking skills.
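
For readers who want to verify the quoted figures, the sketch below reproduces the relative-reduction and recall arithmetic using only the counts cited above; it does not draw on the paper's raw data.

    # Reproducing the arithmetic cited from Kosmyna et al. (2025), using only the
    # connection counts and participant counts quoted in the text above.
    bands = {
        "alpha": (79, 42),  # significant dDTF connections: Brain-only vs. LLM-assisted
        "theta": (65, 29),
    }

    for band, (brain_only, llm) in bands.items():
        drop = brain_only - llm
        print(f"{band}: {drop} fewer connections, a {drop / brain_only:.1%} reduction")
    # alpha: 37 fewer connections, a 46.8% reduction
    # theta: 36 fewer connections, a 55.4% reduction

    # Session-1 recall deficit in the LLM group: 15 of 18 failed to quote correctly.
    print(f"Recall failures in the LLM group: {15 / 18:.1%}")  # 83.3%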

The functional deficits observed in the Kosmyna et al. (2025) EEG study are consistent with a broader body of neuroimaging research on the effects of heavy technology use. Reviews synthesizing this research, such as Ali et al. (2024) in "Understanding Digital Dementia and Cognitive Impact in the Current Era of the Internet: A Review", highlight a potential pathway from the functional changes seen in EEG to long-term structural brain changes. The review notes that excessive digital technology use is correlated with significant alterations in brain structure. For example, studies have found that overuse of social media can reduce the amount of gray matter in the "lingual gyrus, insula, anterior and posterior cingulate cortices, and amygdala". Beyond gray matter, the review also points to changes in the brain's white matter, highlighting a "direct link between extensive use of electronic media in early infancy and decreased white matter tract microstructural authenticity, particularly between the Wernicke and Broca regions of the brain", which are critical for language and comprehension abilities. These findings suggest that the cognitive debt observed in the EEG study may have a tangible neuroanatomical basis, where chronic offloading of cognitive tasks to digital devices could contribute to measurable changes in the very structure of the brain over time.

Recent empirical studies reveal that cognitive offloading onto AI has clear and quantifiable behavioral consequences. This is demonstrated across the distinct domains of critical thinking, creativity, and knowledge consolidation.

● Critical Thinking: In the peer-reviewed paper "AI Tools in Society: Impacts on...

Publication date (per publisher): 20 July 2025
Translator: GenAI LLM
Language: English
Subject area: Computer Science › Theory / Study › Artificial Intelligence / Robotics
ISBN-10: 0-00-097460-9 / 0000974609
ISBN-13: 978-0-00-097460-0 / 9780000974600
File format: EPUB (Electronic Publication)
Size: 2.4 MB
Copy protection: Adobe DRM

