News Feed

Will AI Save Or Harm Us? 3 Ethical Challenges For Businesses In 2025 – Forbes

Will we meet the ethical challenges posed by AI or succumb to them?
Whether AI is helpful or harmful will be the top challenge facing businesses in 2025. Much of the news surrounding artificial intelligence focuses on technological issues, such as how to build faster computers, make AI programs work more efficiently, and get new AI tools to work with older technology.
Unless we use technology for good purposes and guard against abuse, however, it doesn’t matter how technically sophisticated AI becomes. The harms will overshadow the advances.
Let’s consider three ethical challenges that AI raises for businesses around the world. We’ll also look at how some businesses address these challenges and what all of this means for you and your own organization.
Do No Harm isn’t just for physicians and nurses!
The most fundamental ethical principle of all is “Do no harm.” It applies not just to physicians and other health care workers but to leaders in the AI space and everybody else.
AI safety is not optional. It is an urgent necessity. Speaking at the AI Safety Summit in November 2023, Dario Amodei, CEO of Anthropic, emphasized the importance of ongoing risk assessment and response. “We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur,” he said.
Although the summit took place in 2023, Amodei’s insights remain critical for 2025. The challenges he outlined—establishing rigorous evaluations and proactive response mechanisms—are foundational principles for managing AI risks that continue to grow.
Some of the grievous harms AI can cause include:
The company Anthropic, cited above, uses the process called “red teaming” to stress-test their AI systems. This means simulating adversarial attacks to identify weaknesses like biased outputs or harmful behaviors. Red teaming helps to ensure AI models are safe, reliable, and resilient before a company uses them.
By prioritizing safety over speed, delaying product launches when necessary, and collaborating with regulators to establish industry-wide safety standards, companies like Anthropic demonstrate how rigorous testing can build trust and prevent harmful outcomes.
How can you prioritize safety in AI development without sacrificing innovation or speed?
“If you don’t manage your time, someone else will,” goes a saying in time management. We could … [+] update that to: If you don’t manage your AI, someone else must!
When I began my career at the West Virginia University Health Sciences Center in Morgantown, I took a seminar in time management. The instructor, law professor Forest “Jack” Bowman, told us, “If you don’t manage your time, someone else will.”
That wise saying could be updated to: “If you don’t manage your AI systems, someone else must.”
At a conference sponsored by Reuters yesterday, Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, highlighted the challenges policymakers face in developing AI safeguards because of the rapidly evolving nature of the technology.
Kelly noted that in areas like cybersecurity, it can be easy to bypass AI security measures. These workarounds are known as “jailbreaks” and can be easily executed by tech-savvy people.
Recall, in David Fincher’s The Girl with the Dragon Tattoo, the look of disbelief that Rooney Mara’s hacker Lisbeth Salander (Rooney Mara) gives Mikael Blomkvist (Daniel Craig) when he asks her about the difficulty of breaking into a computer system. And that was in 2011! (Written by Steven Zaillian, the film was based on the novel by Stieg Larsson.)
The European Union is one part of the world that is tackling the need for government regulation of AI systems. Its Artificial Intelligence Act (AI Act), which went into effect on August 1 this year, bans AI with unacceptable risks, like social scoring, in which individuals are given scores based on their behavior and actions. Social scoring can unfairly limit access to financial services, employment, travel, education, housing, and public benefits.
IBM has already taken proactive steps to address the concerns of the EU’s legislation through initiatives such as its Precision Regulation Policy. That policy addresses three components of AI ethics: 1) accountability, 2) transparency, and 3) fairness.
It’s worth taking a look at this document, because it presents a blueprint for how any company, and not just IBM, can use AI in the right way and for the right reasons.
What is your company doing to align your AI systems with emerging regulations and thus avoid potential legal or ethical risks?
The threat to jobs that AI poses is real but not insurmountable.
Earlier we considered the ethical principle Do No Harm with respect to safety. That fundamental ethical imperative also applies to employment. Whatever euphemism you wish to use—reduction in force, downsizing—the effect is the same: letting loyal, hardworking employees go causes harm, even if there are financial benefits for the companies that do this.
Andrew Yang, former presidential candidate and founder of the Forward Party, has been a vocal advocate on this issue. “The IMF [International Monetary Fund] said that about 40 percent of global jobs could be affected,” he noted earlier this year. “That’s hundreds of millions of workers around the world.”
In response to these concerns, some companies are forging mutually beneficial relationships with nonprofit organizations. “Nonprofits can often connect businesses with underrepresented talent in the knowledge workforce,” writes Cognizant Chief People Officer Kathy Diaz in an article for the World Economic Forum. “The IT Senior Management Forum is one of the many nonprofits leading the way in this area.”
How can your organization ensure both technological advancement and job security with respect to its use of AI?
Now hear (or see) this!
In 2025, businesses will have to answer the crucial question, “How can we use AI for good and prevent abuse?” If your organization takes this question seriously, you will go a long way toward ensuring that your own AI systems don’t wind up like HAL 9000 from 2001: A Space Odyssey and become humanity’s worst nightmare.

One Community. Many Voices. Create a free account to share your thoughts. 
Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.
In order to do so, please follow the posting rules in our site’s Terms of Service.  We’ve summarized some of those key rules below. Simply put, keep it civil.
Your post will be rejected if we notice that it seems to contain:
User accounts will be blocked if we notice or believe that users are engaged in:
So, how can you be a power user?
Thanks for reading our community guidelines. Please read the full list of posting rules found in our site’s Terms of Service.

source
This article was autogenerated from a news feed from CDO TIMES selected high quality news and research sources. There was no editorial review conducted beyond that by CDO TIMES staff. Need help with any of the topics in our articles? Schedule your free CDO TIMES Tech Navigator call today to stay ahead of the curve and gain insider advantages to propel your business!

Leave a Reply