Creating psychological safety in the AI era

Sponsored
Trust in AI begins when leaders admit what they do not know, address fears, and help people adapt.
In partnership with Infosys Topaz
Rolling out enterprise-grade AI means climbing two steep cliffs at once: first, understanding and implementing the tech itself; second, creating the cultural conditions where employees can maximize its value. While the technical hurdles are significant, the human element can be even more consequential; fear and ambiguity can stall the momentum of even the most promising initiatives.
Psychological safety—feeling free to express opinions and take calculated risks without worrying about career repercussions—is essential for successful AI adoption. In psychologically safe workplaces, employees are empowered to challenge assumptions and raise concerns about new tools without fear of reprisal. This is nothing short of a necessity when introducing a nascent and profoundly powerful technology that still lacks established best practices.

“Psychological safety is mandatory in this new era of AI,” says Rafee Tarafdar, executive vice president and chief technology officer at Infosys. “The tech itself is evolving so fast—companies have to experiment, and some things will fail. There needs to be a safety net.”

To gauge how psychological safety influences success with enterprise-level AI, MIT Technology Review Insights conducted a survey of 500 business leaders. The findings reveal high self-reported levels of psychological safety, but also suggest that fear still has a foothold. Anecdotally, industry experts highlight a reason for the disconnect between rhetoric and reality: while organizations may promote a “safe to experiment” message publicly, deeper cultural undercurrents can counteract that intent.
Building psychological safety requires a coordinated, systems-level approach, and human resources (HR) alone cannot deliver such a transformation. Instead, enterprises must embed psychological safety deeply into their collaboration processes.

Download the report for its key findings.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

