The Undeniable Need for Secure AI
The need to protect Americans (and arguably humanity globally) and to drive the world toward “Safe, Secure, and Trustworthy Artificial Intelligence” is indisputable. However, some of the potential challenges in executing and managing the risks of artificial intelligence (AI), especially through a broader security and risk lens, need to be spelled out, because many of those risks may become more complex over time. Setting aside the fundamental ethical questions around consent, government transparency, and the balance between the public good and American business interests, there are also practical issues to consider. All efforts to foster responsible AI are commendable, but because the digital economy is truly global, a lack of global regulatory coherence risks significant inefficiencies, confusion, costs, and barriers to innovation.
Summary of President Biden’s Executive Order in this Context:
President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence aims to position the United States at the forefront of AI technology while managing its risks. Here is a summary of its key provisions:
- New Standards for AI Safety and Security:
- Developers of powerful AI systems are required to share safety test results and critical information with the U.S. government.
- Standards, tools, and tests will be developed to ensure AI systems’ safety, security, and trustworthiness. Notable agencies involved include the National Institute of Standards and Technology and the Department of Homeland Security.
- Measures against AI’s use in engineering dangerous biological materials will be established, with new standards for biological synthesis screening.
- Protections against AI-enabled fraud and deception, including standards for detecting AI-generated content and authenticating official content, will be established.
- An advanced cybersecurity program will be created to develop AI tools to find and fix vulnerabilities in critical software.
- A National Security Memorandum will be developed to ensure the safe, ethical, and effective use of AI by the U.S. military and intelligence community, and to counter adversaries’ military use of AI.
- Protecting Americans’ Privacy:
- The Executive Order calls for bipartisan data privacy legislation to better protect Americans’ privacy, especially children.
- Federal support will be prioritized for the development and use of privacy-preserving techniques, including those that use cutting-edge AI.
- Privacy-preserving research and technologies will be strengthened, with a Research Coordination Network funded to advance rapid breakthroughs and development.
- Agencies will evaluate how they collect and use commercially available information, with strengthened privacy guidance to account for AI risks, particularly focusing on information containing personally identifiable data.
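As an illustration of the privacy-preserving techniques the order prioritizes, the sketch below shows one well-established approach, differential privacy, which releases aggregate statistics with calibrated noise so that no single individual's record can be inferred. This is a minimal assumed example for illustration, not a technique named in the Executive Order itself:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Release a count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices. Smaller epsilon means stronger privacy and
    noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Hypothetical usage: how many customers in a dataset are 40 or older?
ages = [34, 29, 41, 52, 38, 27, 45]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The key design point is that privacy loss is quantified by a single tunable parameter rather than by ad hoc redaction, which is what makes techniques like this attractive for the kind of federal research coordination the order describes.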
This Executive Order builds upon previous actions and includes voluntary commitments from 15 leading companies for the safe, secure, and trustworthy development of AI as part of a comprehensive strategy for responsible innovation.
Moreover, the Executive Order aligns with the broader objectives of ensuring that America leads in managing AI’s promise and risks, advancing equity and civil rights, standing up for consumers and workers, promoting innovation and competition, and advancing American leadership globally.
Navigating AI Risks and Challenges in That Context
The most significant and immediate challenge for the financial services and technology industries is that while an Executive Order does not require congressional approval, it is nonetheless a mandate that compels federal agencies such as the National Institute of Standards and Technology (NIST), the Department of Energy (DOE), and the Department of Homeland Security (DHS) to follow its directives. Although an order should not contradict or supersede existing law, this order has initiated likely policy changes, so its potential impact is significant. Orchestrating the most comprehensive set of measures ever taken to protect Americans from the risks of AI systems, without either legislation or AI standards and a governance framework in place, underscores the dire need to develop those standards and tools and to define how test results for the highest-risk systems will be shared.
Executive Orders and Industry Implications
NIST typically follows a rigorous process for developing standards and controls to ensure they are comprehensive, effective, and widely adopted. Even assuming a project team is already active on the initial planning and scoping of the defined problem, it must still conduct a thorough analysis of the security challenges and risks associated with AI technologies and of the gaps in existing standards, with extensive stakeholder engagement. Drafting standards and guidelines, coordinating public review and comment, and testing and validation all take time, and that pace is painfully juxtaposed with the light speed at which AI and its concomitant basket of technologies are being adopted, driven, counter-intuitively, by consumer-level adoption.
The Rigorous Journey Towards Standardization
In practical terms, businesses involved in developing or implementing AI models may find consumer-driven momentum pulling them toward, or pushing them away from, moving targets: standards and requirements that have not yet been set. For companies currently building or deploying models, it is critical to bear in mind that without much more clarity on how companies and industries will share their safety test results and other information with the government, the demands on traceability, auditability, and accountability grow by orders of magnitude. It is equally critical for developers and business stakeholders to understand that these are three fundamentally different concepts, each of which will come into play when the Executive Order is actually implemented.
Unpacking Traceability, Auditability, and Accountability
All three concepts are essential for promoting transparency, ethical AI governance, and regulatory compliance. Each plays a role in addressing ethical considerations and mitigating risks in AI model development, and each requires audit logs, documentation, and record-keeping to maintain a clear history of data, models, and decisions. Traceability focuses on tracking the history of, and changes made to, data, models, and decisions, providing a record of lineage, provenance, and versioning; it is a broader concept that can also apply to objects, processes, and entities, whereas data lineage and data provenance are specific to data. Auditability is primarily about inspection, review, and transparency, in this case across the entire AI model’s process and components. Accountability is about ensuring that specific individuals or teams are responsible for specific aspects of AI model development.
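To make the distinction concrete, here is a minimal sketch of how the three concepts might translate into an implementation. All names and the design are hypothetical assumptions, not anything the Executive Order prescribes: an append-only log captures the history of artifacts (traceability), ties each entry to a responsible owner (accountability), and can export its full record for a reviewer (auditability):

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class LineageEntry:
    """One immutable record in a model's history (traceability)."""
    event: str           # e.g. "data_ingested", "model_trained"
    artifact_hash: str   # content hash of the data or model artifact
    owner: str           # responsible individual or team (accountability)
    timestamp: str       # UTC time the event was recorded


class ModelAuditLog:
    """Append-only log whose full history can be reviewed (auditability)."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self._entries: list[LineageEntry] = []

    def record(self, event: str, artifact: bytes, owner: str) -> LineageEntry:
        """Append an entry; nothing is ever modified or deleted."""
        entry = LineageEntry(
            event=event,
            artifact_hash=hashlib.sha256(artifact).hexdigest(),
            owner=owner,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self._entries.append(entry)
        return entry

    def export(self) -> str:
        """Serialize the complete history for an auditor or regulator."""
        return json.dumps([asdict(e) for e in self._entries], indent=2)


# Hypothetical usage across a model's lifecycle:
log = ModelAuditLog("fraud-detector-v2")
log.record("data_ingested", b"training-data-snapshot", owner="data-eng-team")
log.record("model_trained", b"model-weights-v2", owner="ml-team")
```

The point of the sketch is that the three concepts impose different requirements on the same record: traceability dictates what is stored, accountability dictates the `owner` field, and auditability dictates that the whole history remains exportable and inspectable.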
Cross-jurisdictional Data Dilemmas and Global Trade Implications
One foreseeable concern already looms on the global AI horizon: conflicting or inconsistent standards (and regulations) across regions (US, EU, UK, China, Japan, and the rest of Asia) could make compliance difficult for global companies. Security, data privacy, and data governance teams already struggle with consent and transparency. The Executive Order promotes transparency in government AI use but does not (as yet) mandate transparency from the private sector, whereas the UK requires organizations to be transparent about their use of AI and the EU requires AI systems to be transparent to users, including information on system capabilities, limitations, and purpose. Within the EU’s General Data Protection Regulation (GDPR), Article 22 specifically addresses automated individual decision-making, including profiling, and spells out permitted use cases and contractual requirements (consent) with suitable safeguards.
CDO TIMES Bottom Line: Bridging the Global AI Governance Gap
From an implementation perspective, transparency and consent alone create a myriad of data governance and security issues. Definitions of prohibited practices, risk classification frameworks, and required documentation and testing may differ across regimes depending on the final legislation in each. Data sharing and transfers between jurisdictions raise further issues, especially if data localization rules prohibit sharing the test data the Executive Order requires, opening the door not only to regulatory arbitrage and favorable-regime shopping across borders but also to trade disputes and intellectual property contests. Nothing happens in isolation: in a global digital economy, interconnections and interdependencies ultimately affect every business, and with continued fragmentation of standards, regulations, and best practices, security and governance requirements grow while the attack surface becomes broader and deeper.
In this context, the expertise of CDO TIMES becomes indispensable for organizations striving to stay ahead in the digital transformation journey. Here are some compelling reasons to engage their experts:
- Deep Expertise: CDO TIMES has a team of experts with deep expertise in the field of Digital, Data and AI and its integration into business processes. This knowledge ensures that your organization can leverage digital and AI in the most optimal and innovative ways.
- Strategic Insight: Not only can the CDO TIMES team help develop a Digital & AI strategy, but they can also provide insights into how this strategy fits into your overall business model and objectives. They understand that every business is unique, and so should be its Digital & AI strategy.
- Future-Proofing: With CDO TIMES, organizations can ensure they are future-proofed against rapid technological changes. Their experts stay abreast of the latest AI advancements and can guide your organization to adapt and evolve as the technology does.
- Risk Management: Implementing a Digital & AI strategy is not without its risks. The CDO TIMES can help identify potential pitfalls and develop mitigation strategies, helping you avoid costly mistakes and ensuring a smooth transition.
- Competitive Advantage: Finally, by hiring CDO TIMES experts, you are investing in a competitive advantage. Their expertise can help you speed up your innovation processes, bring products to market faster, and stay ahead of your competitors.
By employing the expertise of CDO TIMES, organizations can navigate the complexities of digital innovation with greater confidence and foresight, setting themselves up for success in the rapidly evolving digital economy. The future is digital, and with CDO TIMES, you’ll be well-equipped to lead in this new frontier.
Do you need help with your digital transformation initiatives? We provide fractional CAIO, CDO, CISO and CIO services and have hand-selected partners and solutions to get you started!