Q&A: AI safety expert talks about the future of the technology
February 26, 2025
by Chase Hunter
When California Gov. Gavin Newsom vetoed SB 1047—a state bill regulating artificial intelligence technology—last year, Redwood Research CEO Buck Shlegeris was furious and flabbergasted at the governor’s disregard of artificial intelligence’s dangers.
“I think Newsom caved to the interest of his big donors and other business supporters in a way that is quite shameful,” Shlegeris said. “SB 1047 was supported by the majority of Californians who were polled. It was supported by a majority of experts.”
Berkeley-based Redwood Research, a consulting company focused on mitigating the risks of AI, hopes to have its research implemented throughout the Bay Area’s many AI companies. Though Shlegeris sees AI as a technology that appears infinitely capable, he also believes it could be existentially dangerous.
The rise of the technology in recent years has led to divergent opinions about how the tech industry should regulate its exponential growth. The Bay Area is ground zero for this debate between those who oppose regulating AI and those who believe the technology could condemn humanity to extinction.
Shlegeris hopes Redwood Research can make headway with companies like Google DeepMind and Anthropic before his worst fears are realized.
A: I think that AI has the potential to be a really transformative technology, even more so than electricity. Electricity is what economists call a general purpose technology, where you can apply it to heaps and heaps of different things.
Like, once you have an electricity setup, it affects basically every job, because electricity is just such a convenient way of moving power around. And similarly, I think that if AI companies succeed in building AIs that are able to replace human intelligence, this will be very transformative for the world.
The world economy grows every year and the world is getting richer. The world is getting more technical and technologically advanced every year, and this has been true for basically forever. It increased around the Industrial Revolution. It’s been getting faster since then, mostly.
And a big limit on how fast the economy grows is the limit on how much intellectual labor can be done, how much science and technology can be invented, and how effectively organizations can be run. And currently, this is bottlenecked on the human population. But if we get the ability to use computers to do the thinking, it’s plausible that we will very quickly get massively accelerated technological growth. This might have extremely good outcomes, but also, I think, poses extreme risks.
A: I don’t want to talk about literally the worst-case scenario. But I think that AIs whose goals are fundamentally misaligned with humanity’s, becoming powerful enough that they’re able to basically seize control of the world, and then killing everybody in the course of using the world for their own purposes … I think is a plausible outcome.
A: I think it’s conceivable that giant robot armies get built, at first by countries that want robot armies for the obvious reason that they’d be really helpful in fighting wars. But then the robot armies are expanded by AIs that autonomously want them built, purchasing them autonomously and building factories autonomously, and those armies then turn around and kill everyone.
A: More than 1%. Another bad outcome, I think it’s conceivable, would be that someone from an AI company seizes control of the world and appoints himself emperor of the world.
Q: Shifting back to the Bay Area-specific AI industry: San Francisco appears to be a hotbed of emerging behemoths in the tech sector, while Berkeley and Oakland seem to be more of a hub for research and AI safety work. How have these disparate factions evolved in the Bay Area?
A: It’s largely a historical accident. Like, there’s just been an AI safety community in Berkeley for a long time, basically, just because. The Machine Intelligence Research Institute (MIRI), which used to be a big deal in this space, was based in Berkeley from like 2007. And then I think it just nucleated a community, a ton of people.
I know a lot of people who work at MIRI. I used to work there myself; they were in Berkeley, so when I went to work for them, I moved to Berkeley. Another way of saying this is that Berkeley has been a hub of the rationalist community for a long time, and a lot of the people interested in AI safety research, which I think is what you’re referring to, are associated with the rationalist community.
A: And the reason the S.F. stuff is in S.F. is mostly just that that’s where VC startups have historically been. There just aren’t very many big tech companies in Berkeley and Oakland.
A: If I were to paint in broad strokes, the big Silicon Valley companies—by which I mean Google and Apple and Meta—look at it as “How are we going to make huge amounts of money given our vast resources of technical talent and capital?” In my experience, those companies are pursuing AI capabilities because they think it’ll help them build good products.
A lot of the AI people at Meta just got into it recently. But the people who started OpenAI and Anthropic were true believers who got into this stuff before ChatGPT, before it was obvious that this was going to be a big deal in the near term.
And so you do see a difference where the OpenAI people and the Anthropic people are more idealistic. Sam Altman has been saying very extreme things about AI on the internet for more than a decade. That’s way less true of the Meta people.
A: I think that a lot of people, especially tech journalists, have a tendency to be a bit cynical when they hear the AI people talk about how powerful they think AI might be. But I’m worried that that instinct is misfiring here. I think that the AI people are not over-hyping their technology.
My sense is that the big AI companies, if anything, under-hype what they’re actually building, because otherwise they would sound incredibly irresponsible.
I think that they sometimes say things about how big a deal they think their technology will be, which makes it sound crazy that private companies are allowed to develop it. I bet that if you went to these companies, you would hear them say way crazier stuff than they say publicly.
© 2025 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.