OpenAI Says It Has Begun Training a New Flagship A.I. Model
The advanced A.I. system would succeed GPT-4, which powers ChatGPT. The company has also created a new safety committee to address A.I.’s risks.
By Cade Metz, reporting from San Francisco
OpenAI said on Tuesday that it had begun training a new flagship artificial intelligence model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT.
The San Francisco start-up, which is one of the world’s leading A.I. companies, said in a blog post that it expected the new model to bring “the next level of capabilities” as it strove to build “artificial general intelligence,” or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple’s Siri, search engines and image generators.
OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies.
“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company said.
OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.
OpenAI’s GPT-4, which was released in March 2023, enables chatbots and other software apps to answer questions, write emails, generate term papers and analyze data. An updated version of the technology, which was unveiled this month and is not yet widely available, can also generate images and respond to questions and commands in a highly conversational voice.
Days after OpenAI showed the updated version — called GPT-4o — the actress Scarlett Johansson said it used a voice that sounded “eerily similar to mine.” She said that she had declined efforts by OpenAI’s chief executive, Sam Altman, to license her voice for the product and that she had hired a lawyer and asked OpenAI to stop using the voice. The company said the voice was not Ms. Johansson’s.
Technologies like GPT-4o learn their skills by analyzing vast amounts of digital data, including sounds, photos, videos, Wikipedia articles, books and news articles. The New York Times sued OpenAI and Microsoft in December, claiming copyright infringement of news content related to A.I. systems.
Digital “training” of A.I. models can take months or even years. Once the training is completed, A.I. companies typically spend several more months testing the technology and fine-tuning it for public use.
That could mean that OpenAI’s next model will not arrive for another nine months to a year or more.
As OpenAI trains its new model, its new Safety and Security Committee will work to hone policies and processes for safeguarding the technology, the company said. The committee includes Mr. Altman, as well as the OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman. The company said the new policies could be in place in the late summer or fall.
This month, OpenAI said Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, was leaving the company. His departure raised concern that OpenAI was not grappling enough with the dangers posed by A.I.
Dr. Sutskever had joined three other board members in November to remove Mr. Altman from OpenAI, saying Mr. Altman could no longer be trusted with the company’s plan to create artificial general intelligence for the good of humanity. After a lobbying campaign by Mr. Altman’s allies, he was reinstated five days later and has since reasserted control over the company.
Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways of ensuring that future A.I. models would not do harm. Like others in the field, he had grown increasingly concerned that A.I. posed a threat to humanity.
Jan Leike, who ran the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the team’s future in doubt.
OpenAI has folded its long-term safety research into its larger efforts to ensure that its technologies are safe. That work will be led by John Schulman, another co-founder, who previously headed the team that created ChatGPT. The new safety committee will oversee Dr. Schulman’s research and provide guidance for how the company will address technological risks.
Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.