The New Art of War: Mastering AI Warfare in the Age of Digital Combat
How Speed, Scale, and Sophistication Are Redefining Security Strategy
(Original Title: The New Art of War)
February 15, 2025
Weiyee In, CDO TIMES Executive Contributing Writer,CIO, Protego Trust Bank
Kurt Hardesty, CISO, Protego Trust Bank
Kenneth J. Peterson, CEO, Churchill & Harriman
Benjamin Fabre, CEO, DataDome
(Special Thanks to “Wee Dram”, and “la French Tech New York”)
Executive Summary
The evolving landscape of technological and cybersecurity conflicts increasingly reflects many of Sun Tzu’s strategic principles from the Art of War[1], with the battlefield shifting from physical terrains to the complex realm of computational intelligence. Modern cyberattacks are not simply technical disruptions; they are dynamic strategies, embodying the ancient wisdom of warfare but amplified by unprecedented speed, scale, and sophistication. The cybersecurity community today faces a critical juncture: traditional defenses, predicated on human-scale attacks, are fast becoming insufficient against this new threat landscape. The future of digital and cybersecurity demands fundamentally new approaches: adaptive, GenAI-focused defensive strategies that are attributable and auditable and mechanisms capable of staunching the speed, scale, sophistication, complexity, and autonomous nature of emerging attack strategies.
The convergence of quantum computing, Internet of Things/Everything (IoT and IoE) proliferation, and rapid generative AI (GenAI) adoption has exponentially expanded the attack surface, significantly increasing risks related to protecting GenAI models as well as governing their use. This comes at a time when Boards of Directors globally are increasingly receiving signals that governments and regulators are affording their organizations more latitude to employ GenAI and allowing organizations to discharge their fiduciary responsibility to increase shareholder value as the priority. Organizations need to be much more proactive and holistic in addressing the security and governance challenges posed by this technological convergence. Relying on a perceived lack of immediate
regulation is a risky undertaking. A robust risk management framework, ethical guidelines, encompassing technical safeguards, and ongoing monitoring, is essential.
Current AI governance efforts remain heavily focused on data privacy and deployment safeguards, neglecting a holistic perspective. The rapid advancement of GenAI models has surfaced critical issues, exemplified by media attention on GenAI “identity confusion,”[2] and “sleeper agent”[3] behavior where these systems misidentify themselves or can change outputs and behavior. However, these instances represent only a fraction of the broader, deeper risks and challenges facing industry and society. This paper explores the technical foundations of these threats, their sector-specific impacts, and their implications on public trust, on governance, cybersecurity, and misinformation, highlighting the urgent need for governance reform.
GenAI and the Erosion of Identity
GenAI LLM systems have been increasingly exhibiting a phenomenon known as “identity confusion,” where the boundaries between authentic, fabricated, and malicious digital identities become indistinct. This risk stems primarily from the ability of GenAI LLMs to generate highly persuasive synthetic identities or misidentifying itself creating substantial cybersecurity challenges. Two specific manifestations of this problem are “sleeper agents” and the broader concept of “identity confusion” itself. Sleeper agents involve the embedding of dormant code within a system, designed to activate under specific, predetermined conditions. Identity confusion, conversely, focuses on the creation of synthetic personas or dissociation that blur the distinction between genuine and fabricated digital presences.
Both phenomena erode trust in GenAI LLM systems through vulnerabilities in human perception and system security. Sleeper agents can betray trust through malicious actions once activated, while identity confusion undermines confidence in digital interactions by hindering data integrity or potentially authentication and verification processes. The cybersecurity implications of both include heightened risks of data breaches, privacy violations, and unauthorized access, whether through compromised identities or the malicious execution of sleeper agent code.
The Shifting Landscape of Identity
The digital doppelgänger and identity confusion is no longer an academic theoretical construct but has emerged as a reality. GenAI has demonstrated itself as a technological entity capable of multiple or fluid identities that fundamentally challenge our core understanding of technological consistency and reliability. The unexpected intersection of clinical psychology and computational science is less significant than the potential impact of GenAI systems exhibiting a remarkable and disturbing capacity for identity instability with behavioral patterns paralleling those of dissociative identity disorder.
Memory Inconsistency and Architectural Challenges
Memory inconsistency, where GenAI exhibits significant and seemingly arbitrary memory gaps, represents a critical dimension of this dissociative phenomenon. GenAI, often exhibits “memory gaps” where it doesn’t consistently recall past interactions even in single long threads and chats. This isn’t merely a usability issue; it poses a significant governance challenge, as GenAI LLMs can “forget” or reinterpret previous exchanges. These are not simple computational errors but complex discontinuities where a system appears to selectively forget or reinterpret past interactions. From a technology perspective, these identity disruptions and memory lapses reveal critical architectural challenges in current GenAI models where these systems lack a persistent, immutable sense of self because their identity is fundamentally probabilistic, emerging from statistical patterns in training data rather than a stable, verifiable core identity. Exploiting Probabilistic Identity
Unlike traditional computing, GenAI LLMs lack a stable, verifiable core identity because the GenAI’s identity is “probabilistic,” and constructed on-the-fly from learned statistical patterns. The Large Language Model (LLM) community have argued that these are not bugs but an architectural direction. This architectural flexibility, while powerful, introduces significant vulnerabilities, because its identity is malleable, it can be manipulated, which means it can be exploited through sophisticated prompt engineering, data poisoning and other contextual manipulation. By crafting specific inputs, attackers can influence the GenAI’s “memory” and therefore its perceived identity or other outputs. Attackers can exploit this by providing specific contextual cues that cause the AI to adopt a different persona or forget key information.
Fine-tuning or updating a model can cause it to “forget” previously learned information, degrading performance or causing critical knowledge loss. From an accuracy perspective, the model might become less precise where the proportion of correctly identified positive instances (true positives) out of all instances predicted as positive (true positives + false positives) is producing more false positives. An attacker can not only introduce bias in training data (if access is gained) to the model data or data in transit but use data poisoning and adversarial attacks to cause catastrophic forgetting or other failures. Context-Dependent Identity Shifts
The context-dependent nature of these identity shifts also introduces additional complexity. AI systems can modify their perceived identity based on subtle contextual cues. Prompt engineering becomes a form of digital psychological manipulation, triggering identity transformations. A specifically framed prompt can cause the AI to adopt an entirely different persona, altering communication patterns, knowledge base, and behavior. A prompt framed in a specific way can cause the GenAI to adopt an entirely different persona, complete with altered communication patterns, knowledge base, and behavioral characteristics.
The phenomenon of GenAI identity confusion represents a complex technical challenge at the intersection of machine learning architecture, cybersecurity, and system governance. As GenAI models achieve greater levels of sophistication and deeper deployment across critical sectors, their tendency to misidentify themselves as other models poses significant risks that demand more robust mitigation strategies. While sharing many of the traits and risks as dissociative identity disorder GenAI identity confusion emerges from the fundamental architecture of large language models and their training methodology. GenAI systems learn through exposure to massive datasets including internet-based conversations, discussions, and documentation about various GenAI models. This training process can lead to the inadvertent absorption of response patterns characteristic of other GenAI systems.
Tokenization, Embeddings, and Fluid Representation
At the core lies how GenAI models represent and process information: the intricate mechanisms of tokenization and embedding. This is a fundamental, yet under-considered, vulnerability from a security and data governance perspective. Unlike traditional computational systems with hardcoded identities, GenAI models construct their outputs, including their identity, dynamically through high-dimensional vector spaces. The GenAI models’ token-based prediction mechanisms are not designed for inherent identity persistence, and instead generate responses based on learned statistical patterns rather than hard-coded identity parameters. This fluid representation creates a critical vulnerability: the potential for subtle, nearly imperceptible manipulations that can fundamentally alter a model’s perceived identity and response characteristics and have significant implications for security, access management and model training.
The cybersecurity implications of this technical architecture are particularly concerning and represent a complex and multifaceted technological challenge for model identity and integrity. Without more robust identity verification mechanisms, these GenAI systems become vulnerable to multiple attack vectors almost at a micro-encroachment level with minimal brute force and themselves become the weapons of a new war. Sun Tzu: “The supreme art of war is to subdue the enemy without fighting.” In the new cyber warfare for scale attacks that can generate multi-vector exploitation strategies at massive scale, neutralizing or avoiding traditional defense systems through computational intelligence rather than direct confrontation requires new response strategies. Democratization and Force Multipliers
The new art of cyber warfare, leveraging GenAI systems, empowers criminals to not only exploit multiple vectors simultaneously on a global scale, but to do so with minimal human intervention. The recent “identity confusion” observed shortly after DeepSeek’s launch is far more concerning than many of the copyright or intellectual property disputes being written about in blogs, especially given its open-source nature. Malicious actors could exploit this identity confusion and access through prompt injection attacks, data poisoning, and manipulating model responses to impersonate trusted systems or circumvent guardrails or controls.
Prompt Injection & Data Poisoning
Prompt injection attacks, once considered amusing demonstrations of GenAI’s tendency to hallucinate, have evolved into a sophisticated and potentially devastating exploitation technique. No longer simple interventions to elicit comical responses, these attacks now leverage the deep learning model’s contextual understanding and next-token prediction capabilities for harm. By carefully crafting input sequences, attackers can hijack a GenAI system’s perceived identity, causing it to impersonate trusted systems or generate responses that deviate significantly from its intended programming.
Data poisoning involves injecting malicious data into the training dataset of a GenAI LLM, in many cases this only requires an understanding of data sources that a model uses to train itself. This can manipulate the model’s behavior, causing it to generate biased, inaccurate, or harmful outputs. Recent research demonstrates that even advanced moderation techniques are insufficient to prevent these attacks and evidence how fundamentally insecure these state-of-the-art models can be. “Our research showed that even state-of-the-art moderation techniques on OpenAI’s GPT models are insufficient to protect against data poisoning attacks. We note that the jailbreak-tuning attack on GPT-4o took one author merely a morning to come up with the idea and an afternoon to implement it– a concerning level of vulnerability for the first model to attain a “medium” risk level by OpenAI’s categorization.” [4]
GenAI LLMs are increasingly used in financial institutions for sensitive and critical tasks like fraud detection, risk assessment, and customer service. Data poisoning could lead to flawed risk assessments, misidentification of fraudulent activity, or biased customer interactions, resulting in financial losses and reputational damage for the financial institution. A poisoned GenAI LLM could be manipulated to leak sensitive customer data or manipulate markets or siphon funds at a speed and scale that most current risk frameworks are not prepared for. Jailbreak-tuning, a specific type of attack that fine-tunes a pre-trained LLM to bypass safety guardrails and generate harmful content or behavior, in research and by security teams has demonstrated that they and data poisoning can be accomplished easily. When these activities are done at massive scale and speed, and with new levels of sophistication, the industry is facing an unprecedented paradigm shift in cyber threats.
Paradigm Shift in Cyber Threats
The emergence of GenAI itself as a malevolent tool has fundamentally transformed the landscape of prompt injection and data poisoning attacks, creating a cybersecurity threat paradigm that is exponentially more complex and dangerous than traditional computational vulnerabilities. The radical metamorphosis of prompt injection attacks, as just one example, in and of itself has become an exponential transformation because of GenAI capabilities. GenAI’s ability to perform prompt engineering at massive scale and with such unprecedented sophistication introduces a series of new attack vectors that fundamentally impact computational security. What was once an extremely laborintensive effort requiring subject matter expertise for manually crafting exploitation techniques has become a near-instantaneous, autonomous, and exponentially more sophisticated broadly available attack mechanism.
The fact that both OpenAI and Google have reported malicious use of LLMs underscores the urgency of addressing the security challenges but also highlights the strategic importance and risks coming, especially from the emergence of open-source models such
as DeepSeek. When malicious actors use GenAI LLMs companies can gain valuable security and threat intelligence. They can observe patterns, identify new attack techniques, and potentially attribute malicious activity. With the proliferation of opensource models, malicious actors can develop and refine their tactics using the opensource GenAI LLMs without exposing those tactics.
The ease of use, development and customization can then lead not only to a significantly higher volume of attacks, but attacks that can scale and evolve in sophistications without visibility. Effectively bad actors now can generate polymorphic attacks and adapt to defenses in real-time and train and evolve their attacks to a level that makes the attacks more difficult to detect and prevent. Malicious actors might explore new attack vectors specifically tailored to exploit vulnerabilities in systems or technologies where there is no visibility. The emergence of domestic open-source GenAI LLMs offers easier access and integration for state-sponsored or affiliated groups but removes dependency on foreign technology and restrictions and a means to mature the attack to greater levels of scale, sophistication and speed.
Sophisticated Customization at Scale
In the traditional cybersecurity paradigm (of last year), prompt injection attacks required human intervention and a fairly deep understanding of LLMs and their architecture. Attackers meticulously crafted each prompt injection sequence, carefully designing linguistic patterns and developing contextual manipulations. The process was timeconsuming, limited by human creativity as well as computational resources. Each attack was a bespoke creation, requiring detailed understanding of the target system’s vulnerabilities. GenAI has obliterated these limitations, creating a fundamentally different threat landscape. What once took hours or days of human engineering can now be accomplished in milliseconds.
Modern GenAI systems can now autonomously generate millions of injection variations using bots (created with code from GenAI LLMs) with not only speed and scale but also new levels of complexity. The velocity of attack generation has increased by three to four orders of magnitude (from last year), and the scale has grown by orders of magnitude, often not even requiring jailbreaking, rendering many traditional defensive strategies obsolete. New Adaptive Art of War. The need to identify and block these bots or injections in real-time by also analyzing a multitude of signals, including request patterns, user
(machine) behavior, and technical fingerprints at scale and speed become a requisite.
The most alarming aspect of this technological shift is GenAI’s ability to dynamically adapt attack strategies at scale and speed. GenAI models can now create polymorphic injection sequences, constantly mutating and evolving attack vectors that evade traditional security mechanisms (from last year). Each iteration becomes more sophisticated, learning from previous attempts and refining its approach with GenAI machine-like precision. The sheer speed and scale of potential attack vectors has expanded exponentially. Where a human attacker was previously constrained by cognitive and computational limitations, GenAI systems can now generate unlimited injections simultaneously. Infinitely Expanding Attack Surface
The attack surface has transformed from what was at least (last year) a defined perimeter into an infinitely expanding and evolving landscape of potential vulnerabilities, growing, sometimes geometrically, and reinforcing itself through multiple sophisticated mechanisms. Traditionally, cyber-attacks were linear, somewhat predictable interventions that were carefully crafted sequences of code and manipulation designed by skilled human actors. Modern attacks have evolved into dynamic, self-learning systems with emergent intelligence, capable of autonomous strategic thinking that can surpass human cognitive abilities. To mitigate these threats the API endpoints and data in transit become critical for not only monitoring the access attempts and payloads but also securing the data in transit with post quantum cryptography.

These new computational entities possess extraordinary adaptive capabilities. They can instantaneously analyze installed defense mechanisms, identify systemic vulnerabilities, and generate novel exploitation strategies with unprecedented speed, scale and sophistication. GenAI enables these attacks to become increasingly sophisticated with each iteration, developing inference and probabilistic decision-making frameworks that can strategize and create scenarios at speed and scale beyond human limitations. Adaptive warfare, a strategic ideal for Sun Tzu, is now realized by GenAI far beyond human attackers “Water shapes its course according to the nature of the ground over which it flows.” Modern cyberwarfare is evolving to where, like water, attack systems autonomously adapt, mutate, and evolve, reshaping their strategies in real-time based on defensive landscapes. A new generation of exercises are required to test resilience against these new threat scenarios to raise the awareness of public officials and Boards of Directors.
Automated generation and reinforcing capabilities mean that an attack is no longer a singular event but has become a continuous, dynamically evolving process. Each injection can be instantaneously modified, adapted, and refined by GenAI. Linguistic patterns can be adjusted down to the microsecond, creating attack sequences that are more difficult to detect using traditional security protocols and an attack surface and threat vectors that are effectively dynamic and constantly changing.
Force Multiplier: Democratization of Cybercrime
Because these GenAI-driven attacks are no longer constrained by human limitations of creativity or persistence, they can simultaneously target multiple model vulnerabilities, creating a multi-dimensional attack strategy that can overwhelm traditional defenses. A GenAI system can generate and execute thousands of unique attack vectors before a human defender could even recognize the initial incursion and from something as innocuous as a prompt injection. In the New Cyber Warfare attacks, one single individual bad actor is now able to use GenAI as a massive force multiplier in speed, scale and sophistication and can generate multi-vector exploitation strategies through computational intelligence rather than direct confrontation.
Through democratization and the removal of either technology or subject matter expertise as barriers to entry and success, GenAI has enabled anyone to become a sophisticated cybercriminal. Moreover, correctly prompted, the attack itself can become a quasi-living entity capable of learning, adapting, and evolving in real-time. We are witnessing the emergence of a new form of security and technological conflict, where the boundaries between attacker and the attack become increasingly blurred. The primary battleground is no longer defined by physical, network or even logical layers much less digital boundaries, but by the intricate landscapes of language, context, and machine intelligence. These attacks represent a fundamental shift in technological and cyber warfare, where the bad actors are not just humans wielding tools, force multiplied commanders directing intelligent systems capable of autonomous strategic thinking. GenAI and the Blurring of Agency
Where sophisticated cyber operations at any scale were once the exclusive domain of state-level actors or highly specialized criminal organizations, GenAI has dramatically lowered the barrier to entry, transforming it into a playground for the masses. Individuals with minimal technical skills can now deploy highly complex, adaptive attack strategies using user-friendly interfaces and publicly accessible large language models. GenAI attacks possess a form of meta-strategic intelligence, enabling them to understand and exploit complex systemic interdependencies in previously unimaginable ways. The adaptive nature of these attacks has transformed them from manually constrained, static interventions into rapidly evolving computational organisms. These organisms can predict defensive countermeasures, generate multi-vector exploitation strategies, and create polymorphic attack sequences that continuously mutate and adapt. The computational multiplication of attack capabilities is staggering, with attack generation speeds and scale increasing by many orders of magnitude.
Given access to sufficient computational resources, a GenAI attack can be exponentially faster than human-directed interventions. The boundaries between human agency and autonomous computational intelligence become increasingly blurred. This technological evolution presents profound challenges for traditional governance and ethical frameworks. Attribution becomes exponentially more complex. How is “trust” defined in the context of the execution of independent third-party assessments, audits, and attestations? Legal structures designed for human-centric cyber conflict struggle to comprehend systems that can autonomously strategize, learn, and evolve becoming emergent intelligent systems engaged in continuous, dynamic interactions at speeds and scales that challenge our most basic assumptions about technological agency.
Traditional cybercrime investigations often rely on forensic tracing of IP addresses, analyzing malware code, and examining digital footprints left by human actors. When a GenAI system launches an attack and evolves the efficacy and scale of that attack, it may be difficult to determine the extent of human involvement. Did a human provide the initial prompt, or did the AI autonomously decide to escalate or modify the attack? This makes it challenging to establish the mens rea (criminal intent) necessary for prosecution. These complexities increase when GenAI LLMs have been shown to exhibit sleeper agent and alignment faking capabilities.
If a GenAI system, trained on publicly available or scraped data, identifies a vulnerability in a financial institution’s system or merely bypasses an inadequate control and has triggered “sleeper agent” activity or exhibits “alignment faking” and exploits this vulnerability and scrapes (“steals?”) sensitive data. Is the developer of the GenAI liable? The human user who provided the initial prompt? Or is the GenAI itself considered a new type of actor with some degree of responsibility? Current legal frameworks are ill-equipped to handle such scenarios. Most legal systems recognize individuals and corporations as legal persons, holding them accountable for their actions. As GenAI continues to evolve, the question of legal personhood for highly autonomous GenAIs becomes increasingly relevant. If an AI system causes significant harm, should it be held liable in some way? This raises complex questions about rights, responsibilities, and legal standing. There is currently no international (global) consensus on how to govern the development and use of AI, including GenAI, creating a vacuum that can be exploited by malicious actors.
This debate has already come to the fore with autonomous vehicles. A self-driving car, train or other vehicle controlled by a sophisticated GenAI, causes a fatal accident. Is the manufacturer liable? The owner? Or could the GenAI itself be considered partially responsible? What organization or regulatory body is entrusted to validate architecture and training methods? Current product liability laws and negligence principles may not adequately address this situation. As with many technologies, GenAI is dual use and can be used for beneficial purposes, such as medical research, but also for malicious purposes, such as developing sophisticated cyber-attacks. Because GenAI LLMs and emerging Agentic systems, do not fit neatly into current legal and regulatory frameworks. Global Realities and Priorities
All of this is taking place at a time when the world would benefit greatly from an ongoing, constructive global dialogue specific to global AI governance. Unfortunately, the opposite is currently true, significantly amplifying the risks. Disparate AI governance frameworks have been introduced in the United States and by the European Union. The US and EU have been taking different approaches to AI regulation. As an example, while both aim to address concerns like bias and transparency, their specific regulations and enforcement mechanisms vary. The US tends towards a more commercial innovation-friendly approach, prioritizing economic growth and minimizing regulatory burdens, with more of a reliance on ex-post regulation—addressing harms and issues after they occur, and emphasizes industry self-regulation and voluntary standards. The US also has a far greater focus on sector-specific regulation rather than a broader comprehensive, horizontal framework like the EU’s AI Act.
The EU has put forth a more precautionary approach, emphasizing the need to regulate AI before it causes harm. The EU’s AI Act, for example, categorizes AI systems based on risk levels and imposes specific requirements for each category, including prohibitions on certain high-risk applications. The EU’s AI Act’s risk-based categorization is a defining feature. The US, while acknowledging risk, hasn’t legislatively adopted such a comprehensive, tiered system. This more ex-ante regulatory approach (regulation before deployment) also aligns with their enforcement mechanisms. The EU AI Act proposed a centralized enforcement mechanism involving national regulatory authorities in each member state, coordinated at the union level for an overall more top-down approach with the potential for greater consistency across the EU.
The levels of integration within EU legislation is also significantly different from the US. Because the EU operates as a supranational organization, member states have ceded some sovereignty to the EU in certain areas, allowing for the creation of regulations that are directly applicable and enforceable across all member states leading to a more unified and harmonized approach. The EU AI Act, would create a single set of rules for all AI systems operating in the EU, ensuring consistency and potentially stronger enforcement and align tightly to data protection and privacy regulation (GDPR), and resilience for the financial services industry (EU DORA) which both intersect with AI governance. The GDPR’s strict rules on data collection, use, and transfer have a major impact on how AI systems are developed and deployed in the EU. The EU’s more integrated legislative structure and its strong emphasis on data protection provide a foundation for a more centralized and comprehensive approach to AI governance. But that difference also highlights the challenges of creating a globally harmonized regulatory framework for AI, as different regions have different legal traditions and political structures.
This creates a complex landscape for companies operating globally, potentially leading to compliance challenges and hindering innovation. Perhaps even more consequential is the lack of ongoing visible formal dialogue between China and the rest of the world specific to AI governance. China’s approach to AI governance is distinct, prioritizing social control and national security, a very different set of priorities compared to Western countries, which often place a far greater emphasis on individual rights and freedoms. The greatest risk comes from the lack of an open and consistent dialogue between China and the West regarding AI governance, and it is particularly concerning given the borderless nature of the digital economy.
The challenge is that data flows and AI algorithms and bad actors don’t respect national borders. GenAI systems trained on data from one country can be deployed and used in another. Traditional regulatory frameworks are often based on geographic territoriality, GenAI LLMs, however, operate in a non-territorial space, making it difficult to apply these traditional frameworks effectively. DeepSeek, as an example of a GenAI LLM, exists in the digital realm, not tied to a specific physical territory per se, trained on massive datasets, that frequently originates from multiple countries, making it difficult to trace its provenance or apply specific national data protection laws during the training phase. The training process itself can happen anywhere, further blurring jurisdictional lines, demonstrating the interconnectedness of the global digital economy. This interconnectedness makes it essential to have shared understandings and standards for AI governance. Without them, we risk a chaotic and fragmented digital landscape. We need international cooperation to develop shared norms, standards, and enforcement mechanisms to ensure that AI is developed, deployed and used responsibly in a globally interconnected world because traditional regulatory frameworks are ill-equipped to deal with the challenges posed by these technologies. Ethical and Societal Implications
The ethical and responsible guidelines for society as a whole. extend far beyond the immediate gratification and greed of bad actors. Humanity is at a critical inflection point in technological evolution where the discontinuity of innovation has surpassed what social mores and legal frameworks can bear. The systems that are being created, especially those for nefarious purposes, are no longer simply tools but pre-nascent forms of an autonomous intelligence, threatening to propel the world into entirely new paradigms. Even if bad actors fail to truly grasp the potential long-term or unintended consequences of their actions, the potential vulnerabilities for society and sustainability must be proactively re-evaluated. Near-term vulnerabilities in sectors handling sensitive information or making high-stakes decisions are critical. Even seemingly simple instances of GenAI identity confusion, data poisoning, or bias manipulation through prompt engineering can have significant implications.
The convergence of quantum computing, IoT/IoE, and GenAI creates entirely new categories of risk that traditional models haven’t accounted for. Traditional risk models focus on financial and operational risks, but they may not adequately address the ethical implications of AI. Bias in GenAI algorithms, for example, is heavily researched because it could have systemic consequences, leading to discriminatory outcomes or exacerbating existing inequalities. However traditional risk calculus models are, in general, not adequately equipped to handle the systemic risks posed by the widespread, rapid, and sophisticated use of AI, especially in the context of converging technologies like quantum computing, autonomous systems, and the IoT/IoE. The world needs ethically driven proactive and holistic AI governance harmonized globally ahead of any build up or build out race because the risks of GenAI, quantum computing and the adoption of IoT/IoE also places powerful tools in the hands of bad actors who can now cause harm at massive scale, blazing speeds and new levels of sophistication.
[1] 孫子《兵法》Sun Tzu. The Art of War. Translated by Lionel Giles. Barnes & Noble Classics, 2003
[2] “Who? (are you really?)” Weiyee In, Jim Skidmore, Adam McElroy, February 4, 2025
[3] “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training” Evan Hubinger, Carson
Denison, Jesse Mu, Michael Lambert, Meg Tong, Monte MacDiarmid, January 10, 2024
[4] “Data Poisoning in LLMs: Jailbreak-Tuning and Scaling Trends” Dillon Bowen, Brendan Murphy, Will Cai, David Khachaturov, AdamGleave, Kellin Pelrine, 27 Dec 2024
Love this article? Embrace the full potential and become an esteemed full access member, experiencing the exhilaration of unlimited access to captivating articles, exclusive non-public content, empowering hands-on guides, and transformative training material. Unleash your true potential today!
Order the AI + HI = ECI book by Carsten Krause today! at cdotimes.com/book

Subscribe on LinkedIn: Digital Insider
Become a paid subscriber for unlimited access, exclusive content, no ads: CDO TIMES
Do You Need Help?
Consider bringing on a fractional CIO, CISO, CDO or CAIO from CDO TIMES Leadership as a Service. The expertise of CDO TIMES becomes indispensable for organizations striving to stay ahead in the digital transformation journey. Here are some compelling reasons to engage their experts:
- Deep Expertise: CDO TIMES has a team of experts with deep expertise in the field of Cybersecurity, Digital, Data and AI and its integration into business processes. This knowledge ensures that your organization can leverage digital and AI in the most optimal and innovative ways.
- Strategic Insight: Not only can the CDO TIMES team help develop a Digital & AI strategy, but they can also provide insights into how this strategy fits into your overall business model and objectives. They understand that every business is unique, and so should be its Digital & AI strategy.
- Future-Proofing: With CDO TIMES, organizations can ensure they are future-proofed against rapid technological changes. Our experts stay abreast of the latest AI, Data and digital advancements and can guide your organization to adapt and evolve as the technology does.
- Risk Management: Implementing a Digital & AI strategy is not without its risks. The CDO TIMES can help identify potential pitfalls and develop mitigation strategies, helping you avoid costly mistakes and ensuring a smooth transition with fractional CISO services.
- Competitive Advantage: Finally, by hiring CDO TIMES experts, you are investing in a competitive advantage. Their expertise can help you speed up your innovation processes, bring products to market faster, and stay ahead of your competitors.
By employing the expertise of CDO TIMES, organizations can navigate the complexities of digital innovation with greater confidence and foresight, setting themselves up for success in the rapidly evolving digital economy. The future is digital, and with CDO TIMES, you’ll be well-equipped to lead in this new frontier.
Do you need help with your digital transformation initiatives? We provide fractional CAIO, CDO, CISO and CIO services, do a Preliminary ECI and Tech Navigator Assessment and we will help you drive results and deliver winning digital and AI strategies for you!
Subscribe now for free and never miss out on digital insights delivered right to your inbox!

