Trust, Responsibility at Core of DOD Approach to AI – Department of Defense

The Defense Department's path toward the adoption of artificial intelligence is guided by trust and responsibility, a senior Pentagon AI official said today.

William Streilein, chief technology officer for DOD's Chief Digital and Artificial Intelligence Office, said his office has launched a department-wide effort focused on understanding how the DOD can accelerate the adoption of generative AI to support the warfighter.
[Photo: An Air Force Tech. Sgt. with the 435th Security Forces Squadron contingency response team monitors thermal camera output from an unmanned system during a field training demonstration at the Polygone Training Compound in Bann, Germany, Oct. 20, 2023. Photo by Air Force Senior Airman Madelyn Keech.]

As part of that effort, known as Task Force Lima, Streilein said his office has identified nearly 200 use cases for how the department could leverage the breakthrough technology across a variety of functions.
"And we're assessing them, we're trying to understand which ones would be appropriate given the state of technology, which is important to acknowledge," Streilein said during a discussion on the role of trusted AI in the DOD hosted by Government Executive, a government-focused publication based in Washington, D.C.
"There is still a lot to learn about it," he said. "It definitely has commercial application, but within the DOD, the consequences are perhaps higher and we need to be responsible in how we leverage it."

Streilein explained that it is critically important to establish trust in each application of the technology, meaning confidence that the AI algorithm produced the intended result.
"So that means we have to be good with our testing, he said. "We have to be able to specify what we want the algorithms to do, and then can move forward with justified confidence."
He added that in addition to trust, the DOD places special emphasis on key tenets underpinning the ethical principles of AI: responsibility, reliability, equitability, governability and traceability. "Those are actually terms […] that apply to the human in their application of AI," he said. "Meaning that we should always be responsible in our use of AI. We should know how we're applying it, know that we have governance over it, know that we understand how it provided its answer."
[Photo: An aerial view of the Pentagon, Washington, D.C., May 15, 2023. Photo by Navy Petty Officer 2nd Class Alexander Kubitza, DOD.]

Last month, the DOD released its strategy to accelerate the adoption of advanced artificial intelligence capabilities to ensure U.S. warfighters maintain decision superiority on the battlefield for years to come.
The 2023 Data, Analytics and Artificial Intelligence Adoption Strategy, which was developed by the Chief Digital and AI Office, builds upon and supersedes the 2018 DOD AI Strategy and the revised DOD Data Strategy, published in 2020, both of which laid the groundwork for the department's approach to fielding AI-enabled capabilities.

The strategy prescribes an agile approach to AI development and application, emphasizing speed of delivery and adoption at scale, leading to five specific decision advantage outcomes.
The blueprint also trains the department's focus on several data, analytics and AI-related goals.
[Photo: Members of Combined Task Force 152 from Combined Maritime Forces are briefed on an unmanned surface vessel in Manama, Bahrain, Jan. 23, 2023. Photo by Navy Petty Officer 2nd Class Jacob Vernier.]

As the technology has evolved, the DOD and the broader U.S. government have been at the forefront of ensuring AI is developed and adopted responsibly.
In January, the Defense Department updated its 2012 directive governing the responsible development of autonomous weapon systems, aligning its standards with advances in artificial intelligence.
The U.S. has also introduced a political declaration on the responsible military use of artificial intelligence, which further seeks to codify norms for the responsible use of the technology.
Streilein said that trust and the ethical use of AI underpin the department's experimentation with the technology.
"A lot of what we're doing to understand this technology is to figure out how we can be true to those five principles [of ethical use] in the context of what's happening," he said.
