Amanda Brock on the CAIO Connect Podcast with Sanjay Puri: 'Why Open Source AI Is About Transparency, Not Risk' – The National Law Review
CAIO Connect Podcast
Amanda Brock, CEO of OpenUK, with Sanjay Puri, President, CAIO Connect
On the CAIO Connect Podcast, Amanda Brock argues AI security isn’t about open vs closed—but transparency, licensing, and global collaboration.
WASHINGTON, DC, UNITED STATES, February 26, 2026 /EINPresswire.com/ — “Is open source AI less secure — or simply more transparent?”
That was the central question explored when Amanda Brock, CEO of OpenUK, joined host Sanjay Puri live at the India AI Impact Summit on the CAIO Connect Podcast. In a wide-ranging and candid discussion, Amanda Brock challenged some of the biggest assumptions policymakers and Fortune 500 executives make about AI, security, and openness.
Her message was clear: the security issues we face in AI are not about open source. They are about software — and they are universal.
Open vs. Closed: The Transparency Divide
Amanda Brock pushed back on the common claim that open source introduces more security risk. In her view, both open and proprietary systems contain vulnerabilities. The difference lies in visibility.
In open-source environments, she explained, “We wash our dirty linen in public.” When there’s a flaw, everyone can see it. But that transparency triggers a powerful response—communities rally to fix issues quickly. Many eyes make bugs shallow.
Closed systems, on the other hand, operate like black boxes. Users often have limited insight into how models function internally or how quickly vulnerabilities are disclosed, and notification to affected customers can be slower and less transparent.
The takeaway from Amanda Brock’s appearance on the CAIO Connect Podcast? Security is not a function of openness. It is a function of accountability.
The Llama 2 Moment: A Shift in AI History
Amanda Brock also reflected on why Meta’s release of Llama 2 in July 2023 marked a turning point. OpenUK partnered on the launch, which was framed as “open innovation” rather than fully open source.
Llama 2 opened model weights but not training data, and its license imposed certain restrictions. Even so, it represented a significant shift. For the first time, the broader developer community had meaningful access to a powerful large language model.
According to Amanda Brock, that release — along with DeepSeek’s launch in January 2025 — may be remembered as two defining milestones in AI openness.
The reason? Open access accelerates iteration. When developers can inspect, test, and adapt models, innovation moves faster.
What Does “Open” Really Mean?
One of Amanda Brock’s most important clarifications was about terminology. Rather than using the phrase “open source AI,” she prefers “AI openness.”
Why? Because AI is not a single component. It includes model weights, training data, algorithms, and licenses. Each element can be open, partially open, or closed.
Equally critical is how something is licensed. A model may appear open, but restrictive commercial terms can significantly limit real-world use. Policymakers and business leaders must examine both what is open and how it is governed.
DeepSeek, China, and Lean Innovation
The conversation also touched on China’s open-source trajectory. DeepSeek’s open-weight model, released under an MIT license, demonstrated how community-driven iteration can rapidly enhance performance. Within days, developers retrained versions using alternative datasets.
Amanda Brock noted that Chinese engineers tend to build lightweight, compute-efficient systems — an approach that could benefit emerging markets and the Global South, where access to large-scale compute is limited.
Smaller, edge-based models may ultimately prove more sustainable and more accessible worldwide.
A Message to World Leaders
Closing the conversation, Amanda Brock delivered a simple but powerful message to policymakers: if you want to shape AI’s future responsibly, engage directly with open-source communities.
Openness is not a slogan. It is a practice grounded in licensing, governance, and collaboration. Decisions about AI should not be made in closed rooms with only a handful of CEOs. They should include the builders, contributors, and global communities who sustain the ecosystem.
As Amanda Brock made clear on the CAIO Connect Podcast, the real question is not whether AI will be open or closed. It is whether it will be transparent enough to earn trust.
Upasana Das
Knowledge Networks
email us here
Legal Disclaimer:
EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
