Here's what you need to know about OpenAI, Google and Anthropic's latest AI moves – Business Insider
It was a busy week in AI, as top companies rolled out new tools, models, and research.
Here’s a look at what happened.
On Tuesday, OpenAI rolled out a native image generation feature in ChatGPT — and the internet immediately jumped on it.
The new tool, powered by the GPT-4o model, allows users to generate images directly in the chatbot without routing through DALL-E.
It became an instant hit, with users transforming real photos into soft-focus, anime-style portraits, often mimicking the look of Studio Ghibli films.
By Wednesday night, users noticed that some prompts referencing Ghibli and other artist styles were being blocked. OpenAI later confirmed it had added a “refusal which triggers when a user attempts to generate an image in the style of a living artist.”
Demand became so strong that OpenAI CEO Sam Altman said temporary rate limits would be introduced while his team worked on making the image feature more efficient.
“It’s super fun seeing people love images in chatgpt. But our GPUs are melting,” Altman wrote. “Chatgpt free tier will get 3 generations per day soon.”
The feature wasn’t without issues: one user pointed out that the model struggled to render “sexy women.” Altman said on X that it was “a bug” that would be fixed.
While OpenAI dominated headlines, Google introduced Gemini 2.5 on Tuesday — a new family of AI reasoning models designed to “pause” and think before responding.
The first release, Gemini 2.5 Pro Experimental, is a multimodal model built for logic, STEM tasks, coding, and agentic applications. It can process text, audio, images, video, and code.
The model is available to subscribers of the $20-a-month Gemini Advanced plan.
Early reactions were enthusiastic. One user wrote:

“Gemini 2.5 Pro is now *easily* the best model for code.
– it’s extremely powerful
– the 1M token context is legit
– doesn’t just agree with you 24/7
– shows flashes of genuine insight/brilliance
– consistently 1-shots entire tickets
Google delivered a real winner here.”
Google says all new Gemini models will include reasoning by default.
On Thursday, Anthropic released the second report from its Economic Index — a project tracking AI’s impact on jobs and the economy.
The report analyzes 1 million anonymized conversations from Anthropic’s Claude 3.7 Sonnet model and maps them to more than 17,000 US job tasks in the Department of Labor’s O*NET database.
It offers a detailed look at how people are using AI at work.
One key takeaway was that “augmentation” still appeared to edge out “automation,” accounting for 57% of usage. In other words, most users aren’t handing work off to AI entirely; they’re working alongside it.
The data also suggested that user interaction with AI differs across professions and tasks. Tasks linked to copywriters and editors showed the highest levels of task iteration — where the human and model write together.
In contrast, tasks associated with translators and interpreters showed the highest reliance on directive use, where the model completes the task with minimal human involvement.