Google, Amazon, OpenAI and the race to find alternatives to Nvidia – EL PAÍS English
Efforts to stem large AI developers’ dependence on the company are picking up speed, though the chipmaker’s dominance remains unshaken
The digital era is built atop monopolies. Tech corporations have reached incredible sizes thanks to their domination of various sectors: Google got its start with its search engine, Meta took off via social media, Microsoft grew out of Windows and business software, and Amazon is synonymous with e-commerce. And now, Nvidia is by far the largest supplier of chips for artificial intelligence.
Its GPUs have proven to be the best option for training and running AI models. Developers from OpenAI to Google, Anthropic, Meta and Amazon, along with a host of specialized startups, are fighting to get their hands on Nvidia chips.
That high demand has made the manufacturer the most valuable company in the world. But its clients are hungry for alternatives. Nvidia has a long waiting list, and there’s another factor at play: there is no real competition to keep the price of its chips down. As Intel struggles in the sector, AMD is increasing its presence, and startups like Cerebras and Groq — which, incidentally, Nvidia all but swallowed up in a recent $20 billion deal — have developed specialized processors. It hasn’t been enough.
The biggest names in AI are pouring ever more resources into developing their own chips to meet their needs. At the end of last year, two announcements shook up the wildly profitable industry: Amazon debuted its Trainium3 and, most importantly, Google released the first results of its seventh-generation TPUs, Ironwood. Meanwhile, Microsoft CTO Kevin Scott said in October that the company was looking to run the majority of its data center operations on its own chips. And OpenAI has reached an agreement with semiconductor specialist Broadcom to produce its own processors, a project set to begin this year.
“There is a strategic feeling among all their clients and the entire market in general that the dependence on Nvidia, and its prices, needs to be broken,” says Fernando Maldonado, senior analyst at Foundry, who specializes in the technology sector. “All the major cloud service providers are designing their own chips for different tasks. Still, only Google can really compete, and not even in the short-term,” he clarifies. “Further down the line, what could happen is that [Nvidia’s] total market share could shrink a little.”
Google surprised the industry by announcing that its latest model, Gemini 3, which has been praised by experts, was trained solely on its own TPUs. Previous models had been trained using a combination of chips that included Nvidia GPUs. Earlier, the startup Anthropic announced that it would rent a million TPUs from Google to expand its computing capacity, a deal worth tens of billions of dollars. The startup Safe Superintelligence, founded by OpenAI co-founder and former chief scientist Ilya Sutskever, has also committed to using Google chips. And industry gossip suggests a possible Meta agreement not only to rent computing capacity, but also to spend billions on purchasing TPUs to be housed in its own data centers.
Jemish Parmar, CTO of the Spanish firm Ideaded, which works on the development of microchips using new materials, thinks that Google processors could present an alternative to Nvidia, particularly for use in carrying out specific tasks. “Some of the jobs that we do through GPUs could be transferred and carried out on TPUs, because the math is more or less the same,” he says. “However, the operations are done differently.”
In truth, the goal of Google and other AI developers that are designing their own processors is not to compete directly with Nvidia. “What Microsoft, which also develops its own chips, and Amazon want is capacity that gives them a certain amount of independence,” says Maldonado. “It’s having a tool that will help them in negotiations with Nvidia. The only one that could take a small bite out of Nvidia’s market share is Google.”
The AI accelerator chip market will grow at an annual rate of 16% to reach $604 billion by 2033, according to Bloomberg Intelligence. That’s a huge jump from $116 billion in 2024. Of that total, Nvidia will account for 70% to 75%, while AMD will rank second, with 10%. ASIC architecture chips (which are designed for specific tasks) will account for 19% of the market. The latter category includes processors from Google, Amazon, Microsoft, Meta and OpenAI.
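A quick sanity check of those compound-growth figures can be sketched in a few lines of Python. Note the assumption here: Bloomberg Intelligence’s exact base year and forecast window are not stated in the article, so the 2024-to-2033 (nine-year) window below is a guess, and the source may use different rounding or endpoints.

```python
# Sanity-check the compound growth implied by the figures quoted above.
# Assumption (not stated in the article): the window is 2024 -> 2033, i.e. 9 years.

base = 116e9     # reported 2024 market size, in dollars
target = 604e9   # projected 2033 market size, in dollars
years = 9

# Where a 16% annual growth rate would actually take the 2024 base
at_16_percent = base * 1.16 ** years

# The annual rate that exactly connects $116B (2024) to $604B (2033)
implied_cagr = (target / base) ** (1 / years) - 1

print(f"$116B at 16%/yr for {years} years: ${at_16_percent / 1e9:.0f}B")
print(f"Implied CAGR for $116B -> $604B: {implied_cagr:.1%}")
```

Under that nine-year assumption, the two endpoints imply an annual rate closer to 20% than 16%; the discrepancy likely comes down to the unstated base year or rounding in the original forecast.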
“I wouldn’t say that Nvidia’s market is at risk, but the impact will surely be felt. There’s a future where there is an effect on Nvidia’s sales growth and numbers,” says Parmar. “That being said, the company has adapted to all kinds of computing demands that have come its way. And they will be working on the next versions of their GPUs so that they can continue to do so.”
One of the biggest unknowns is whether there will be any impact on the cost of the company’s GPUs. “Nvidia has had and will continue to have monopoly prices,” says Maldonado. “But it may start to think that raising them too much isn’t in its interest, in case it ends up encouraging competitors to enter the market.”
It is often said that the moat protecting Nvidia’s business is its CUDA software ecosystem. The company’s GPUs are general-purpose chips, meaning they are very versatile and can be programmed for a wide variety of tasks. CUDA tools make it easy to adapt the chips to individual companies’ objectives. A wealth of code has been developed for the platform, and AI engineers are accustomed to working with it.
Still, general-purpose chips have their disadvantages. “There’s an energy cost, because there are parts of the chip that you have developed but aren’t using for the function you need, and they’re still consuming power,” says Maldonado. He explains that Google TPUs, for example, are more efficient at carrying out certain specific tasks. “But data center consumption depends on many things, not just chip efficiency.”
Parmar agrees: “The question [of efficiency] is more on the level of system design. That also includes chip design, but in the end, it is the entire system that handles processing the workload.”
The Ideaded CTO thinks that data centers will become more efficient at handling AI workloads. The key will be a change in energy sources. “I think that the industry will rely on utilizing very clean energy, or having an energy source that does not depend on the existing power system,” he says. But that requires major shifts. The International Energy Agency predicts that data center consumption will double by 2030, going from 1.5% of global consumption to nearly 3%. The United States and China will be responsible for 80% of that increase.
Analyst firm Gartner has cited similar numbers, and estimates that 64% of the increase in energy consumption by data centers will be attributable to AI-optimized servers.
This article was autogenerated from a news feed from CDO TIMES selected high quality news and research sources. There was no editorial review conducted beyond that by CDO TIMES staff. Need help with any of the topics in our articles? Schedule your free CDO TIMES Tech Navigator call today to stay ahead of the curve and gain insider advantages to propel your business!

