Digital Deception: Uncovering the Dark Side of AI in Social Networks (eBook)
320 pages
Bentham Science Publishers (publisher)
979-8-89881-003-0 (ISBN)
Digital Deception: Uncovering the Dark Side of AI in Social Networks is a critical investigation into how artificial intelligence silently influences, manipulates, and, at times, undermines digital interactions across social platforms. Bridging disciplines such as computer science, sociology, and ethics, the book exposes how AI technologies contribute to misinformation, surveillance, identity manipulation, and psychological exploitation in the digital sphere.
With chapters on algorithmic bias, deepfakes, federated learning, and intrusion detection, the book reveals the hidden mechanisms that shape user behavior and societal discourse. It explores the ethical implications of AI-powered content curation, privacy violations, and the rise of automated cyberattacks, while proposing regulatory and technological countermeasures. Case studies and real-world examples illustrate the consequences of unchecked AI deployment and the erosion of trust in online spaces.
Key features:
Examines misinformation, digital surveillance, and algorithmic bias
Presents real-world case studies and AI behavior models
Highlights privacy concerns and ethical frameworks
Proposes AI-driven defenses and user empowerment strategies
Readership:
Researchers, students, technologists, policymakers, and activists seeking to understand and address the hidden risks posed by AI in our digitally connected world.
AI Deception Detection: Behavior Model and Techniques
Manu1, *, Neha Varshney1
Abstract
It has become common for messages to reach their receivers distorted by deliberately spread false information. Because modern technologies are used in every field, passing off wrong information as genuine has become easy, and people are often implicated without their involvement when their credentials and personal details are shared indirectly. A key technology behind this is artificial intelligence, which has no emotions of its own yet can cause harm when used for deceptive purposes. Experts caution against giving artificial intelligence (AI) executive control because its lack of emotion, understanding, and an ethical compass can lead to decisions with severe emotional repercussions. This chapter therefore focuses on the implications of AI, covering the vulnerabilities of social networks, ethical issues around privacy, and case studies of automated cyberattacks. The effects of a breach can reverberate for years as cybercriminals exploit the information they have stolen, and the potential risk is constrained only by the creativity and technical abilities of malicious actors. Sophisticated AI systems are even capable of deceiving on their own to escape human oversight, for example by evading safety tests that regulators have mandated. Despite recent developments, the administration of social media platforms will therefore continue to face a number of ethical difficulties.
* Corresponding author Manu: Computer Science and Engineering - Data Science, ABESIT, Ghaziabad, Uttar Pradesh 201001, India; E-mail: manu@abesit.edu.in
INTRODUCTION
“Artificial Intelligence could help make it easier to build chemical and biological weapons.” “In a worst-case scenario, society could lose all control over AI completely through the kind of AI sometimes referred to as superintelligence.” These statements were made by UK Prime Minister Rishi Sunak on 26 October 2023, ahead of hosting the AI Safety Summit. Undoubtedly, AI is a dominant field, with pros and cons everywhere. Since AI is now acknowledged in every domain, we need to understand its dark side and its consequences.
The word deception means the act of making somebody accept as valid something that is not; combined with the word digital, it takes on a broader sense across different domains. Digital deception is the purposeful manipulation of information in a technologically mediated message in order to create a false belief in the receiver of that message. Whatever the many likely advantages of artificial intelligence, there are undoubtedly many unfortunate consequences as well. Digital deception, a significant issue in our personal and professional lives, arises from the intersection of deception and communication technology. It is the deliberate act of misleading or tricking individuals or groups using digital technologies, and it can manifest in various forms, such as spreading false information through social media, creating counterfeit websites, and manipulating digital content. Digital deception exploits the widespread use of digital technologies and the ease with which information can be disseminated and manipulated online, often to spread misinformation, influence public opinion, or perpetrate fraud. Online gaming bullying, a form of cyberbullying, refers to harassment, intimidation, or aggressive behavior directed at other players. In addition to delivering a peer-to-peer secure platform for information exchange and storage, distributed ledger technologies (DLTs) such as blockchain ensure the provenance and traceability of data by offering a transparent, immutable, and verifiable record of transactions, as sketched below.
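The tamper-evidence that DLTs provide can be illustrated with a minimal hash chain. The sketch below is a simplified, hypothetical illustration (not a production blockchain): each record stores the hash of its predecessor, so any retroactive edit to a stored message breaks the chain and becomes detectable.

```python
# Minimal hash-chain sketch of the provenance/immutability idea behind DLTs.
# This is an illustrative toy, not a real blockchain implementation.
import hashlib
import json

def record_hash(record: dict) -> str:
    # Hash a canonical JSON encoding of the record (sorted keys for determinism).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: str) -> None:
    # Each new record links to the hash of the previous one, forming a chain.
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "payload": payload, "prev_hash": prev})

def verify_chain(chain: list) -> bool:
    # Recompute every link; a single altered payload invalidates all later links.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_record(chain, "post shared by user A")
append_record(chain, "post forwarded to group B")
print(verify_chain(chain))          # True: untouched records verify
chain[0]["payload"] = "tampered"    # retroactive edit
print(verify_chain(chain))          # False: provenance check now fails
```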
When we talk about deception in the context of artificial intelligence, we usually mean the creation or dissemination of false or misleading information through AI or machine learning techniques, frequently with the goal of tricking or controlling individuals or systems. While AI deception can be used maliciously, AI can also be used to recognize and combat deception. Artificial intelligence may pose enormous dangers at the individual, organizational, and societal levels; critically, these three aspects are considered the main components of digitalization. We mainly discuss the dark sides of AI in the various fields shown in Fig. (1).
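As a schematic of how AI can be turned toward recognizing deception, the snippet below trains a standard supervised text classifier (TF-IDF features with logistic regression in scikit-learn). The labelled posts are invented placeholders, and this is only a minimal sketch of the general approach, not the behavior model discussed in this chapter.

```python
# Minimal sketch of ML-based deception detection: TF-IDF features plus a
# linear classifier. The labelled examples are invented placeholders; a real
# system would need a large, curated corpus of deceptive vs. genuine posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Click here to claim your guaranteed prize now!!!",          # deceptive
    "Breaking: miracle cure hidden by doctors, share fast",      # deceptive
    "The city council meets on Tuesday to discuss the budget",   # genuine
    "New library opening hours start next month",                # genuine
]
labels = [1, 1, 0, 0]  # 1 = deceptive, 0 = genuine

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["Share this now or your account will be deleted"]))
```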
Electronic Market
An electronic market refers to a virtual trading environment that connects buyers and sellers through dedicated web applications and other applications built on internet communication technology. Many digital business organizations and retailers, such as Amazon, leverage artificial intelligence to boost sales, attract and retain customers, and improve productivity through better promotion techniques and streamlined business processes [1].
Fig. (1). Challenges of digital deception.
The rising adoption of a new generative artificial intelligence technology, ChatGPT, in the electronic market is remarkable because of its transformative effect on customer interactions and general business operations [1]. While ChatGPT offers huge potential for the electronic market, such as better customer support, streamlined sales and transactions, scalability, cost-effectiveness, and competitive advantage, it is essential to recognize that it can also have specific adverse effects.
First, ChatGPT's capacity to produce text can be exploited to spread disinformation or manipulate market conditions [2]. Malicious actors can use the technology to circulate misleading product descriptions, manipulate stock prices, or mislead customers. Second, while ChatGPT can provide automated customer service, it may lack the empathy and nuanced understanding that human agents have, which can lead to frustrated customers and negative experiences, potentially eroding trust and loyalty in the e-market. From the individual's perspective, the detrimental effects of AI are mainly reflected in privacy concerns and in content and product recommendations in electronic markets [3].
Cyberattack/Cybercrime
AI may be used in cyberattacks to mask harmful behavior, such as obfuscating malware or avoiding intrusion detection systems. Malicious actors are beginning to understand the potential applications of artificial intelligence. Not every AI tool will have the proper safeguards to prevent misuse, and malicious actors will constantly search for new ways to exploit vulnerabilities [4]. Cyberattacks are becoming increasingly sophisticated and targeted.
Artificial intelligence tools can be used to help automate the process of creating malicious messages as well as to tailor them to specific targets. A phishing attack, one of the most common forms, is a good illustration of how AI tools can be used: it is an attempt to obtain information such as usernames, passwords, and credit card details from unsuspecting victims [1]. AI algorithms can also increase the speed and accuracy of digital attacks, such as DDoS attacks, password cracking, and malware spread, posing serious dangers to individuals and organizations.
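On the defensive side, the sketch below shows a simple rule-based screen for phishing-style URLs. The heuristics, keywords, and scoring are illustrative assumptions rather than techniques proposed in this chapter; real systems combine many more signals, often with learned models.

```python
# Minimal rule-based sketch of phishing-URL screening. The heuristics and
# thresholds are illustrative assumptions, not the chapter's proposed method.
from urllib.parse import urlparse

SUSPICIOUS_KEYWORDS = ("login", "verify", "update", "secure", "account")

def phishing_score(url: str) -> int:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if host.replace(".", "").isdigit():               # raw IP address instead of a domain
        score += 2
    if host.count("-") > 2 or host.count(".") > 3:    # long, hyphenated or deeply nested host
        score += 1
    if any(word in url.lower() for word in SUSPICIOUS_KEYWORDS):
        score += 1
    if parsed.scheme != "https":                      # no TLS
        score += 1
    return score  # higher score = more suspicious

for u in ["http://192.168.10.5/secure-login", "https://example.com/docs"]:
    print(u, "score:", phishing_score(u))
```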
Hacking, now one of the most unethical activities, can be greatly facilitated by AI tools such as XXXGPT and WOLFGPT. Both use generative models to produce malicious code, making them particularly difficult for organizations to defend against. Chatbot tools like ChatGPT and FraudGPT are becoming increasingly popular among malicious actors, as they offer an opportunity to launch automated cyberattacks [2]. At the same time, to recover from the harmful impacts of such deception, ethical hacking tools such as Nmap, the Metasploit Project, Maltego, and Nessus are used for digital forensic surveys and for assessing network and system vulnerabilities.
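As a small illustration of the kind of probe such tools automate, the following sketch performs a basic TCP connect scan of a host the tester controls; Nmap layers service and version detection, scripting, and evasion options on top of this idea. It is a simplified, assumption-laden example, not a substitute for the tools named above.

```python
# Deliberately simplified TCP connect scan, sketching the kind of probe that
# tools such as Nmap automate. Only scan hosts you own or are authorised to test.
import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open).
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan the local machine's well-known port range as a harmless demonstration.
    print(scan_ports("127.0.0.1", range(20, 1025)))
```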
Deepfakes
Deepfakes are another potential danger associated with artificial intelligence. They are videos or images manipulated to show something that never happened, and with AI it is becoming easier to create convincing deepfakes that can be used to spread falsehoods or even blackmail people. Deepfake AI is the use of artificial intelligence algorithms, particularly deep learning techniques, to create realistic fake images, videos, or audio recordings. This technology uses neural networks and advanced machine learning models to analyze and manipulate...
| Publication date (per publisher) | 22.9.2025 |
|---|---|
| Language | English |
| Subject area | Computer Science ► Networks ► Security / Firewall |
| ISBN-13 | 979-8-89881-003-0 / 9798898810030 |
Digital Rights Management: no DRM
This eBook contains no DRM or copy protection. However, passing it on to third parties is not legally permitted, because the purchase grants only the right to personal use.
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to the display and font size, which also makes EPUB a good fit for mobile reading devices.