AI RegulationAI ThreatCybersecurity

Who is liable now? Transforming Traditional Risk Calculus

March 3, 2025

Weiyee In, CDO TIMES, Executive Contribution Executive, CIO – Protego Trust Bank

Ken Peterson, CEO – Churchill & Harriman

(Special thanks to Brandon Nozaki Miller, Wee Dram)

Introduction

The withdrawal of the EU’s AI Liability Directive introduces several new facets of complexities for financial institutions navigating the convergence of generative artificial intelligence (GenAI), quantum computing, and Internet of Things/Everything (IoT/IoE), especially concerning security vulnerabilities and systemic risks. While the situation remains quite fluid, the withdrawal itself is concerning. The directive originally aimed to establish clear liability rules for AI-related damages, but its absence raises questions about accountability in the rapidly evolving technological landscape.  The convergence of GenAI with quantum computing, and IoT/IoE already presents a double-edged sword for financial institutions: immense potential alongside unprecedented security challenges. The opportunities have been lauded throughout the media and by cloud service providers that it has spurred an immediate response and land grabbing response from C-suites and Boards of major enterprises who had previously been constrained by the search for ROI.  

On the downside these technologies converging not only amplify existing threats by orders of magnitude but also introduce entirely novel attack vectors, demanding a massive paradigm shift from traditional risk management models to much more proactive and holistic governance risk and compliance frameworks. Historically, global industry and governments have often been ponderous when addressing strategic risks of a systemic nature. What is so often misunderstood is that the challenges are not only for the actual algorithmic models but the entire risk framework and workflow as a process and system as well as the scope.  The problem is structural as well as philosophical to the point that traditional risk calculus models generally do not adequately account for the potential systemic damage that a single institution’s use of GenAI could unwittingly cause to the global financial system.  This paper looks at the critical security vulnerabilities, threat vectors, and their cumulative impact of their convergence on the attack surface of financial institutions operating within this evolving technological landscape.  Withdrawal of AI Liability Directive

The abrupt withdrawal of the AI Liability Directive has significantly altered the landscape of legal recourse for those harmed by artificial intelligence. Instead of the intended easing of the burden of proof for victims, establishing liability for AI-driven damages, particularly within financial institutions, now falls back upon existing legal frameworks. These include the revised Product Liability Directive and various national laws, potentially creating a patchwork of legal interpretations across the EU.  The absence of the directive, aside from exacerbating the obvious fragmentation of the legal landscape where AI liability would revert to being governed by a “fragmented patchwork of 27 different national legal systems,” which could disadvantage European AI startups and SME, the less recognized challenge is that the existing frameworks are not properly equipped to handle the onslaught of AI liability and its ramifications.  . We are perhaps at the earliest stage of what may prove to be a major legal opportunity. 

This lack of a unified, EU-wide approach fosters considerable legal uncertainty. While global tech behemoths and multinational financial institutions may be better positioned to navigate this complex environment, European AI startups and small and medium-sized enterprises may face significant challenges. This uncertainty extends to smaller financial institutions grappling with the implications of generative AI, quantum computing, and the Internet of Things/Everything (IoT/IoE), where lines of responsibility are rapidly becoming increasingly blurred.  Additionally, other global industries have often historically looked to the financial services industry for best practices and guidance on such matters, without which the vulnerabilities compound.

In the wake of the directive’s withdrawal, a renewed focus on existing regulations, such as the GDPR and the EU AI Act might occur, however financial institutions must now prioritize the development and implementation of robust AI governance and risk management frameworks in a temporary vacuum. This heightened focus is crucial not just for mitigating potential liabilities related to data protection, cybersecurity, and the ethical deployment of AI systems but needs to be weighed and balanced against interpretations that the withdrawal is a mandate or opportunity for unfettered or rampant development or malfeasance and misfeasance.  The industry needs to focus urgently on the capabilities gaps between existing frameworks and their concomitant workflows and processes and the new realities that are coming in the convergence of GenAI with quantum computing and the IoT/IoE and how the withdrawal will increase the risks of malicious use of these converged technologies in ways that the traditional risk calculus and processes cannot cope.  Perhaps it is time for institutions that possess the required span of control to do what is required to amend their charters to provide such global leadership at this critical juncture. 

Transforming the Risk Calculus

This withdrawal of the EU AI Liability Directive casts a longer and more frightening shadow over the already ongoing transformation of traditional risk calculus and frameworks, particularly within the financial sector. In past generations technical and business leaders have “passed the torch” to the next generation. Today, we believe that the torch must be shared between the best young engineering and policy minds and those with a great deal of practical experience.  While the need for modernized risk management approaches driven by technological advancements and evolving business models remains, the legal uncertainty introduced by the directive’s absence significantly complicates matters. It may not halt the transformation, but it dramatically alters its momentum, course and priorities. The initial impetus for transforming risk frameworks included leveraging AI and machine learning for more sophisticated risk assessments but that has already proven to be a far more complicated undertaking than initially believed.  The withdrawal of the directive, rather than clarifying liability, has amplified the legal risks associated with AI deployment, not only for new GenAI LLM development and deployment but also for existing traditional AI and their use cases. This doesn’t in any way negate the need for advanced risk analytics; it makes it even more crucial. Financial institutions now need to not only assess the commercial, security, technical risks of GenAI (algorithmic bias, data security) adoption, integration and deployment but also the heightened legal risks stemming from the lack of a clear, harmonized liability framework.

The uncertainty created by relying on existing, potentially disparate national laws places a

much higher premium on robust AI governance and risk management and should be perceived as a wakeup call on how complex these issues are becoming from a societal and governance perspective. This means that the type of transformation, visions, directions and executions required shifts and the burden of development, prioritization and analysis shifts to the organization rather than merely demonstrating compliance to regulations. While innovative risk modeling always remains important, the immediate priority lies less with legal compliance and demonstrating due diligence, and more with leadership and governance.  New thinking and new tests are required that include a safe, secure expedited route to the implementation of new controls.  Financial institutions must face harsh capabilities assessments beyond people and skills gaps and prioritize building frameworks that can withstand not only legal scrutiny, focusing on detailed documentation, clear lines of responsibility but review policies, procedures and physical processes as well as technical or security infrastructure from a systemic perspective. A new generation of exercises that include pertinent tvendors and partners to vet models going into production.

The withdrawal also creates a potential divide. Large tech companies, with their extensive legal resources, security teams, and data governance may be better equipped to navigate this patchwork of regulations.  However, they need to respond to more fundamental issues not currently in their traditional risk calculus as well as the suitability of the workflows and frameworks for risk and whether they are systemically capable of supporting the convergence coming.  The withdrawal of the directive hasn’t stopped the transformation of risk management; it has redirected it or should have. The focus needs to shift from security, technical and analytical advancements in relative isolation from legal risk mitigation and compliance to a more holistic and synoptic review of the attack and impact surface. This necessitates a re-prioritization of resources and expertise, placing legal counsel and compliance in the same decision-making cohort as security, data and technology within financial institutions. The transformation continues, but in the face of legal uncertainty, a significantly more cautious security posture and defensively oriented approach becomes a prerequisite. Security Postures

The withdrawal of the EU AI Liability Directive, coupled with the ongoing transformation of risk management, creates a complex and evolving threat landscape with significant implications for security vulnerabilities, geopolitical and socioeconomic factors, and attack surfaces.  The legal uncertainty surrounding AI liability coupled with the convergence of GenAI with quantum computing, and IoT/IoE not only exacerbates existing security vulnerabilities but increases them exponentially by another order of magnitude.  The withdrawal is likely to be perceived as creating an open playing field for the development of GenAI where enterprises and developers may feel less constrained by regulatory obligations, potentially accelerating innovation in GenAI. This could lead to a surge in new applications and services that leverage AI capabilities without the fear of stringent liability repercussions. but it also raises significant concerns regarding the increased attack surface and associated security risks.  

The integration of GenAI into operational workflows further introduces a complex web of security risks, expanding the attack surface and creating new threat vectors[1]. One key

concern is the proliferation of vulnerabilities and the sheer size and scope of the attack surface management. The sheer complexity of GenAI integration, coupled with a potential lack of security prioritization among developers, creates numerous entry points for malicious actors. This vulnerability is underscored by the surge in GenAI-driven phishing attacks, demonstrating how these technologies are being weaponized to enhance existing attack strategies.  Financial institutions need to reprioritize the security risks, threat vectors, and increased attack surface associated with GenAI and proactively leverage advanced algorithms and machine learning to support the identification, disambiguation and categorization of interactions and activity that hits their ecosystem based on anomalies in behavior patterns.  GenAI solutions themselves and traditional infrastructure will be at higher risk because of the withdrawal of the AI Liability Directive.  Malicious actors will not only be able to increase their activities but also can leverage GenAI for much more sophisticated cyberattacks, and perceivably now with less liability.

Reconnaissance is Research

Even at the fundamental level of reconnaissance, malicious actors, including nationstates, are capable of leveraging GenAI to streamline both research and cyber operations. GenAI automates previously labor-intensive tasks, freeing attackers to execute more complex operations with greater speed and scale. Its real-time data analysis capabilities enable the identification of vulnerabilities and the deployment of highly precise attacks. Attackers can now automate intelligence gathering on potential targets, including vulnerability identification and network mapping. This automated reconnaissance significantly reduces attack preparation time and effort while simultaneously improving the accuracy and scale of gathered information. GenAI significantly reduces the time required for reconnaissance. Instead of days or weeks of manual effort, some tasks can likely be accomplished in minutes. GenAI acts as a force multiplier, enabling attackers to broaden and deepen their reach, comprehensively covering potential targets and attack vectors. Ultimately, GenAI empowers attacks targeting a larger number of victims with increased precision, reduced manual effort, and coordinated speed and scale.

Human versus  GenAI Reconnaissance

FeatureHuman (Manual) ReconnaissanceAI-Powered Reconnaissance
ScaleLimited by human capacity, time, and resources.Can process massive datasets quickly, covering more targets and attack vectors.
SpeedTime-consuming; requires manual effort for each step.Automates tasks, significantly reducing the time needed for intelligence gathering.
SophisticationRelies on human expertise; may miss subtle patterns.Employs machine learning algorithms to identify complex patterns and anomalies, adapting to defenses in real-time.
CostHigher due to human resources and time investment.Lower cost per unit of intelligence gathered due to automation.
AdaptabilityHuman analysts can adapt but require time to adjust strategies based on new information.Adapts rapidly using reinforcement learning to evade detection and optimize attack strategies.
EvasivenessLimited stealth; easily detectable due to predictable patterns.AI-driven reconnaissance can mimic normal network behavior, reducing the number of interactions with the target system and improving stealth.

The increased speed and scale of GenAI-powered attacks are compounded by an unprecedented level of sophistication. GenAI enhances attack sophistication through contextualization, adaptability, and evasiveness, all at a mass-customizable level down to an attack strategy that can be tailored to the time, location, platform, network and cultural, [psychological and emotional nuances of the target. GenAI-enabled attacks can be automatically tailored to individual targets—whether hundreds or thousands—adapting in real-time and exhibiting greater stealth than traditional attacks. Unlike human attackers who require time, effort, and ingenuity to adapt and evade, GenAI-enabled attacks minimize or even remove the need for communication with a command-and-control server, thereby enhancing stealth. Furthermore, GenAI mechanisms, going beyond simple bots, can learn and mimic the behavior and responses of compromised systems and networks. This allows GenAI algorithms to learn and adapt in real-time, evolving attack techniques, avoiding detection, synchronize to defender responses, and autonomously responding to observed changes in the system, target, or victim.

The integration of GenAI into the cybercriminal’s arsenal is already drastically changing the nature and impact of cyber threats, now further exacerbated by a perceived withdrawal of liability for AI-driven malfeasance and misfeasance. The resulting multifaceted impact(s) includes not only enhanced efficiency, heightened effectiveness, and automation of key stages in the attack lifecycle, but also the capacity for dynamic, near real-time evolution of the entire attack workflow at a fully coordinated and orchestrated level. It is a misconception to view GenAI’s impact on attacker efficiency as one-dimensional. While it’s true that GenAI allows attackers to automate processes previously requiring laborious manual effort—such as crafting malware or exploits—dramatically reducing time and effort, the impact goes far beyond simple automation.

GenAI doesn’t just enable attackers to generate malicious code faster; it allows them to do so at an exponentially increased scale and speed, with mass customization tailored precisely to exploit specific target environments and platforms all capable of being orchestrated dynamically to exploit multiple potential vulnerabilities simultaneously or with unique disruptive cadence. This results in a surge of customized attacks, each optimized for a particular victim or vulnerability. Leveraging GenAI, attackers can identify and exploit weaknesses across a vastly expanded range of systems and software, including the increasingly vulnerable edge and IoT/IoE devices. Consequently, organizations must now defend an exponentially larger, more heterogeneous, and more dynamic attack surface against a swarm of mass-customized threats, making detection and prevention significantly more challenging. GenAI empowers cybercriminals to craft qualitatively more sophisticated attacks, including phishing campaigns, malware, and social engineering schemes, often without even requiring prior subject matter expertise in fields like psychology or emotional intelligence.

GenAI can create such highly personalized, industry-specific content that convincingly mimics legitimate communication. It has turned the tables on many industries. This capability allows attackers to trick recipients into revealing sensitive information or downloading malware, effectively leveling the playing field and making even previously impenetrable industries vulnerable. The combination of realistic deepfakes and targeted subject matter expertise significantly amplifies the effectiveness of social engineering attacks, demanding a new level of sophistication in defense strategies. The very heterogeneity and fragmentation that once served as a barrier to entry for cybercriminals attempting to break into industries with high levels of domain expertise now makes these industries even more susceptible to attacks. These same characteristics, of high subject matter expertise, diversity, heterogeneity and disparate locations and cultures which previously hindered phishing, man in the middle or brute force campaigns, now complicate defense efforts across the supply chain.

Along with the removal of domain knowledge requirements as a barrier targeted domain relevant social engineering into harder industries is the advent of GenAI being able to be used to create malware that dynamically adapts and evolves to evade detection by traditional antivirus and malware detection tools. Because GenAI can automate most aspects of hacking, allowing cybercriminals to launch such large-scale attacks at levels of complexity and difficulty to detect and counter the reprioritization of security in the face of withdrawing liability becomes paramount. This increases the volume and speed of attacks, overwhelming traditional security measures. This makes it more difficult for organizations to protect themselves against malware attacks.

In essence, AI is transforming the cyber threat landscape by empowering attackers to operate with greater speed, efficiency, and effectiveness. This trend necessitates a corresponding evolution in defensive strategies, with organizations needing to embrace AIdriven security solutions to effectively counter the emerging wave of AI-powered attacks.

Furthermore, AI itself is being leveraged to create more sophisticated cyberattacks. GenAI’s ability to generate highly convincing phishing emails and deepfakes empowers social engineering tactics, rendering traditional defenses less effective. The rapid adoption of GenAI has also led to a significant increase in API vulnerabilities, as these crucial connectors between applications and services become prime targets for exploitation.

The expanded access and capabilities associated with GenAI integration also heighten the risk of insider threats. Employees, whether intentionally or unintentionally, can misuse these powerful tools, further complicating security efforts. Data poisoning, where attackers introduce manipulated data during the training phase of large language models (LLMs), poses another serious threat, potentially creating backdoors within the model itself. Prompt injection attacks, which manipulate the outputs of GenAI services to bypass security measures or gain unauthorized access to sensitive data, represent another significant vulnerability. Finally, the complex and often opaque supply chain for GenAI applications creates a vast and challenging attack surface for malicious actors.

Fundamental Changes in Risk Management

The withdrawal of the proposed EU AI Liability Directive and the rapid evolution of risk management practices presents a complex and demanding landscape for organizations for attack surface management. Traditional, reactive security measures, postures and risk calculus are no longer adequate in this environment. A fundamental shift to proactive security integrated to risk management is essential but it is complicated by the technologies, processes and infrastructure supporting these risk calculus frameworks.  Robust AI governance frameworks are paramount for managing the inherent risks associated with AI adoption and the convergence of multiple technologies. This necessitates establishing clear lines of responsibility for AI development, deployment, and oversight, understanding the implications against quantum as well as IoT/IoE and how this aligns with security postures going into Edge and proactive threat management. 

Organizations must implement ethical guidelines for AI usage, addressing concerns including but not limited to bias in algorithms, data privacy, and transparency in decisionmaking. Furthermore, ensuring compliance with evolving regulations, such as those related to data protection and AI ethics, has become not only crucial but urgent. This might involve implementing explainable AI (XAI) techniques to understand how AI systems arrive at their conclusions, facilitating audits and demonstrating compliance or in certain sectors developing a Root of Trust (RoT).

Effective collaboration and information sharing become indispensable for staying ahead of rapidly evolving AI-driven threats. Organizations must actively share threat intelligence, including details of GenAI-powered attacks and effective mitigation strategies, within their respective industries and with government agencies. This collaborative approach fosters the development of more robust and comprehensive defense strategies for the longer term. For instance, sharing anonymized data on attack vectors and malware signatures can help security vendors improve their detection capabilities and enable organizations to proactively patch vulnerabilities. Joint exercises and simulations can also help organizations prepare for and respond to complex AI-driven attacks.

The withdrawal of the AI Liability Directive, coupled with the dynamic nature of risk management, necessitates a swift and comprehensive adaptation by organizations. Strengthening the overall security posture, prioritizing a more holistic and robust AI governance, and embracing a proactive approach to risk management are not merely best practices, but have become essential requirements for survival. The legal uncertainty surrounding AI liability underscores the importance of rigorous compliance and due diligence. Simultaneously, the constantly evolving threat landscape demands continuous vigilance and innovation in security strategies. Organizations must also invest in training and development to enhance their cybersecurity teams’ expertise in AI-related threats and defenses as well as fill skills gaps across the organization. They must also foster a culture of security awareness throughout the organization, educating employees about the risks of AI-powered social engineering attacks and phishing campaigns.

Defense in Depth +1

Only through such a multi-faceted and proactive approach can organizations effectively navigate the challenges, capitalize on the opportunities presented by the evolving AI landscape and leverage AI to combat bad actors.  On the positive side this means organizations would be leveraging the very technologies that create new risks – GenAI and machine learning – to identify and mitigate threats before they can be exploited. For example, anomaly detection algorithms can be trained on longitudinal data sets of normal network traffic patterns to identify suspicious activity indicative of an impending GenAIdriven attack, such as unusual data exfiltration or rapid changes in system resource utilization. Predictive modeling can analyze threat intelligence data, including indicators of compromise (IOCs) and attack patterns, to anticipate and proactively block potential attacks before they can penetrate defenses.

Several machine learning techniques offer powerful tools for enhancing network defense and threat detection. Support Vector Machines (SVMs) can be implemented by first establishing a baseline of normal network behavior, encompassing traffic patterns, user activity, and system performance metrics. This baseline data trains the SVM model, enabling it to analyze incoming network traffic in real-time and detect anomalies that deviate from established patterns. For example, a sudden spike in outbound traffic during off-hours could be flagged as suspicious. SVMs also contribute to malware classification by analyzing files and processes, distinguishing between benign and malicious entities based on features extracted from static and dynamic analysis. Integrating SVM models into endpoint protection solutions enhances malware detection capabilities.

Random Forests provide another valuable approach. Organizations can deploy Random Forest algorithms to assess potential system vulnerabilities by gathering data on attributes like software versions, configuration settings, and known vulnerabilities taking an inventory at a deeper level. Training the model on historical incident data also allows it to classify and prioritize vulnerabilities based on their likelihood of exploitation, enabling security teams to focus remediation efforts on the most critical risks. Continuously updating Random Forests with new data from security incidents or threat intelligence feeds ensures adaptation to evolving threats and improves predictive accuracy over time.

Cluster analysis techniques, such as K-means clustering, can group similar entities within the network based on attributes like IP addresses and user behavior. Regular analysis of logs and user activity allows security teams to identify clusters exhibiting similar behavior. By identifying clusters with unusual behavior, such as multiple failed login attempts from a single IP address, security teams can pinpoint potential attack vectors or compromised accounts. Establishing behavioral baselines through cluster analysis allows organizations to detect deviations indicative of insider threats or external attacks.

Finally, Transformers, known for their ability to process sequential data, offer powerful capabilities for log analysis. By training transformer models on historical log data from sources like firewalls and intrusion detection systems, organizations can enhance their ability to detect subtle anomalies that may signal an ongoing attack. For example, a transformer could identify unusual access patterns that deviate from typical user behavior. Transformers can also be used for natural language processing (NLP) to analyze unstructured data sources like emails and chat messages, detecting phishing attempts or social engineering attacks. These combined techniques offer a multi-layered approach to threat detection and network defense and support both internal and external audits.

The unfortunate challenge with all of these is precisely that the efficacy of these tools for defense also can be turned around by bad actors to the detriment of the organization.  Bad actors can leverage their own models for criminal activities or infiltrate those of the organization.  The withdrawal of the EU AI Liability Directive again presents additional challenges for industry.  GenAI systems are inherently susceptible to injection attacks, where malicious code or poisonous data can be inserted into prompts or via data inputs, potentially compromising the underlying system.  Taking away liabilities, even for a short term without proactive AI governance opens a window of rampant and rapid adoption of GenAI that will compound the dramatic rise in API vulnerabilities. APIs serve as critical connectors between various applications and services and often have weak authentication mechanisms that can allow attackers to gain unauthorized access to GenAI systems and data, making them prime targets for exploitation.  

GenAI systems already are at risk of inadvertently exposing sensitive data if not properly secured, in the flurry of deployments with no clarity on liability the likelihood of misconfigured GenAI systems rises.  There are also a growing plethora of identification and authentication failures culminating in “identity confusion” and potential risks of “sleeper agents” in GenAI LLMs.  Integrating GenAI introduces new risks that need to be addressed ahead of development and deployment but also includes the infrastructure of the systems on the backend and the connections to third parties for either data or process calls and extend from operational risks to compliance and ultimately to security. IoT/IoE Data Generation Deluge

The convergence of IoT/IoE, generative AI (GenAI), and quantum computing presents a paradigm shift of such magnitude for financial services, offering both unprecedented opportunities alongside significant, novel risks. The proliferation of IoT/IoE devices is generating an unprecedented deluge of data, far exceeding traditional transaction details. While IoT/IoE provide granular, real-time data streams far richer than traditional transaction details, and GenAI LLMs offer powerful analytical and predictive capabilities, the traditional risk calculus models are fundamentally inadequate for managing the complexities of this new landscape. The risk is not merely operational or reputational for individual institutions; the interconnectedness of the global financial system means that even a single institution’s seemingly innocuous use of GenAI could potentially trigger systemic damage. 

Traditional financial services rely heavily on historical transaction data. However, IoTenabled POS terminals offer real-time data streams that exceed traditional transaction details. To ethically, securely and effectively harness this data, financial institutions need to significantly improve their operational efficiency, customer understanding, and perhaps most of all risk management capabilities as they focus GenAI solutions on this deluge of data. While GenAI amplifies both opportunities and risks, the industry focus has largely shifted toward rapid deployment of GenAI chatbots and LLMs and in all likelihood will step that up with the withdrawal of the EU AI Liability Directive. GenAI can analyze complex datasets, identify hidden patterns, and generate predictive models far faster than traditional statistical methods, enabling personalized services, improved fraud detection, and more accurate risk assessments. However, the sheer volume, velocity and variability of this data, coupled with an overall lack of explainability for most GenAI LLMs, can massively amplify risks.

The sheer volume, velocity and variability of data generated by IoT/IoE devices already dwarf traditional data sources, making it incredibly difficult for traditional risk models to process and analyze data in real-time, which is essential for identifying and mitigating systemic risks. This growing data tsunami will overwhelm existing architectures and risk frameworks. Billions of IoT/IoE devices, from smart home appliances and wearables to industrial equipment and connected vehicles, constantly generate data and that will only increase as application-to-application and machine-to-machine interaction begins to overshadow man-to-machine interactions. While each device may produce relatively small amounts of data, the aggregate volume is already daunting and will only grow. Simply storing and managing this data requires scalable and cost-effective solutions capable of handling petabytes or exabytes efficiently and each node becomes part of the attack surface.

Even a basic example, retail financial services via POS terminals, illustrates the challenge. The rich data streams from these terminals create such a deluge of multifaceted data that new infrastructures, analytical models, and risk frameworks are required. The technical infrastructure and analytical methodologies needed to leverage this information for enhanced consumer behavioral analytics, regulatory compliance (BSA/AML/CTF), realtime transaction monitoring, and sophisticated fraud detection are not trivial. The data’s volume, variety, and velocity demand new data management and processing paradigms, along with a critical understanding of data privacy, security, and scalability.

Modern POS terminals generate a multifaceted dataset, including transaction amount, millisecond-precise timestamp, Merchant Category Code (MCC), card type (debit, credit, prepaid), payment method (contactless, chip, magnetic stripe), and authorization code. GPS coordinates provide crucial contextual information, enabling the identification of spending patterns within specific geographic regions and the detection of anomalous transactions in geographically improbable locations relative to the cardholder’s known habits. IP address, network provider, and connection status offer insights into potential network-based attacks, disruptions, or connectivity issues. A unique device identifier tracks individual terminal activity, crucial for identifying compromised terminals exhibiting unusual transaction patterns or malfunctions. These data points all form the basis of traditional transaction monitoring and fraud management.

The data generated by millions of connected POS terminals creates a massive influx of data and significant storage challenges. This volume necessitates highly scalable storage architectures, driving financial institutions to adopt massive data lakes or cloud-based object storage for cost-effective storage of raw datasets. Processed, cleaned, and aggregated data resides in a data warehouse, traditionally using an RDBMS or a cloudbased data warehousing service or distributed file systems capable of managing exabytes. This facilitates efficient querying and analysis for business intelligence and reporting.

Traditional databases and data structures may no longer be suitable for the volume, velocity, and variety of IoT/IoE data. Cloud-based storage and distributed file systems are already often employed, but they introduce their own security and management challenges. Ingesting data from numerous high-velocity devices can create massive bottlenecks and data integrity or provenance issues. Efficient data ingestion mechanisms, such as message queues and distributed data collectors, are essential but may not be sufficient. Many IoT/IoE devices generate real-time data continuously. This high velocity requires processing and analysis with minimal latency for effective risk management. Traditional batch processing is often too slow. Real-time data streams necessitate streaming analytics platforms that process data as it arrives, using techniques like windowing, aggregation, and filtering.

IoT/IoE data is highly heterogeneous, from diverse devices and formats, making it challenging to integrate and standardize for traditional risk models, which rely on structured data. The data now often includes time-series (sensor readings), geospatial (location), multimedia (images, videos), and textual (logs, social media). This heterogeneity complicates uniform processing and analysis. Integrating IoT/IoE data from diverse sources presents significant security challenges due to the complexity of ensuring compatibility. This necessitates advanced data processing like ETL tailored for IoT/IoE.  IoT/IoE data pathways and workflows also present numerous vulnerabilities. Devices often transmit sensitive data across unsecured networks or store it without robust encryption, leaving it susceptible to interception. Default passwords and weak authentication mechanisms are easily compromised. Outdated firmware and software create susceptibility to exploits.

Integrating GenAI LLMs and bots into attacks against IoT/IoE dramatically escalates vulnerabilities. GenAI tools remove barriers to attack, enabling anyone to become a cybercriminal, while enhancing attack sophistication, automating, personalizing, and evolving tactics. The scale, speed, and sophistication create an exponentially greater threat. GenAI-powered bots can automate hacking, enabling large-scale attacks surpassing manual capabilities. GenAI can analyze complex patterns, identifying previously unknown vulnerabilities. Attackers can generate novel threats in real-time, adapting tactics based on feedback.

This rapid evolution could allow malware to mutate or evolve in real time faster than traditional detection tools could respond, posing a significant challenge to security systems attempting to promptly identify new threats.  The impact on IoT/IoE security becomes threefold. First, the expanding landscape of IoT devices broadens and deepens the attack surface, increasing the potential entry points for malicious actors. Second, the adaptive nature of AI-generated threats introduces complexity in detection, often overwhelming traditional security tools predicated and designed for human cybercriminal activity, especially at the retail level of the ecosystem. 

Finally, these challenges necessitate much more proactive defense strategies because of the sheer scale, speed and growing sophistication of coming GenAI assisted attacks. The challenge for the industry is not only to adopt more proactive and more holistic approaches or security postures but to recognize that a significant portion of the existing infrastructure and traditional calculus may not stand up to GenAI driven onslaughts. The withdrawal of the EU AI Liability Directive not only creates a window of rampant unfettered development activity which will proliferate technological end points that need to be secured, but the convergence of GenAI and IoT/IoE will overwhelm much of the traditional risk calculus and frameworks. The lack of governance and controls would leave not only major institutions vulnerable but society as a whole.


[1] Gartner, Emerging Tech: Top 4 Security Risks of GenAI, Lawrence Pingree, Swati Rakheja, Leigh McMullen, Akif Khan, Mark Wah, Ayelet Heyman, Carl Manion, 10 August 2023

Love this article? Embrace the full potential and become an esteemed full access member, experiencing the exhilaration of unlimited access to captivating articles, exclusive non-public content, empowering hands-on guides, and transformative training material. Unleash your true potential today!

Order the AI + HI = ECI book by Carsten Krause today! at cdotimes.com/book

Subscribe on LinkedIn: Digital Insider

Become a paid subscriber for unlimited access, exclusive content, no ads: CDO TIMES

Do You Need Help?

Consider bringing on a fractional CIO, CISO, CDO or CAIO from CDO TIMES Leadership as a Service. The expertise of CDO TIMES becomes indispensable for organizations striving to stay ahead in the digital transformation journey. Here are some compelling reasons to engage their experts:

  1. Deep Expertise: CDO TIMES has a team of experts with deep expertise in the field of Cybersecurity, Digital, Data and AI and its integration into business processes. This knowledge ensures that your organization can leverage digital and AI in the most optimal and innovative ways.
  2. Strategic Insight: Not only can the CDO TIMES team help develop a Digital & AI strategy, but they can also provide insights into how this strategy fits into your overall business model and objectives. They understand that every business is unique, and so should be its Digital & AI strategy.
  3. Future-Proofing: With CDO TIMES, organizations can ensure they are future-proofed against rapid technological changes. Our experts stay abreast of the latest AI, Data and digital advancements and can guide your organization to adapt and evolve as the technology does.
  4. Risk Management: Implementing a Digital & AI strategy is not without its risks. The CDO TIMES can help identify potential pitfalls and develop mitigation strategies, helping you avoid costly mistakes and ensuring a smooth transition with fractional CISO services.
  5. Competitive Advantage: Finally, by hiring CDO TIMES experts, you are investing in a competitive advantage. Their expertise can help you speed up your innovation processes, bring products to market faster, and stay ahead of your competitors.

By employing the expertise of CDO TIMES, organizations can navigate the complexities of digital innovation with greater confidence and foresight, setting themselves up for success in the rapidly evolving digital economy. The future is digital, and with CDO TIMES, you’ll be well-equipped to lead in this new frontier.

Subscribe now for free and never miss out on digital insights delivered right to your inbox!

Carsten Krause

I am Carsten Krause, CDO, founder and the driving force behind The CDO TIMES, a premier digital magazine for C-level executives. With a rich background in AI strategy, digital transformation, and cyber security, I bring unparalleled insights and innovative solutions to the forefront. My expertise in data strategy and executive leadership, combined with a commitment to authenticity and continuous learning, positions me as a thought leader dedicated to empowering organizations and individuals to navigate the complexities of the digital age with confidence and agility. The CDO TIMES publishing, events and consulting team also assesses and transforms organizations with actionable roadmaps delivering top line and bottom line improvements. With CDO TIMES consulting, events and learning solutions you can stay future proof leveraging technology thought leadership and executive leadership insights. Contact us at: info@cdotimes.com to get in touch.

Leave a Reply