OpenAI announces new safety committee with Sam Altman, Bret Taylor, John Schulman, and others – Axios
Illustration: Sarah Grillo/Axios
OpenAI announced Tuesday that it's establishing a new safety committee, and also confirmed that it has begun training its next big model.
Why it matters: The company has seen a number of key departures in recent weeks, with several employees complaining that it has not been devoting promised resources to ensuring the long-term safety of its AI work.
Driving the news: OpenAI says it has established a new safety and security committee to be led by outside chairman Bret Taylor along with board members Adam D'Angelo, Nicole Seligman and Sam Altman.
OpenAI also used Tuesday's announcement to officially confirm that it has begun training its next large language model, although recent comments from both Microsoft and OpenAI had suggested this was already under way.
Context: OpenAI's moves come after the resignations of co-founder Ilya Sutskever and Jan Leike, who together led the company's long-term safety work, dubbed "superalignment." Leike criticized OpenAI for not supporting the work of his superalignment team in a thread he posted announcing his departure.
Between the lines: OpenAI is clearly trying to reassure the world that it's taking its safety and security responsibilities seriously and not ignoring recent criticism.