When Google questioned her research Timnit Gebru didn't back down – The Business Journals
"Unfortunately, AI is not magic." Former Google AI researcher Timnit Gebru has challenged bias in artificial intelligence and big tech. Now, at the Distributed AI Research Institute (DAIR) she's mitigating the harms of current AI systems and helping to build better future ones.
Timnit Gebru is a champion of AI ethics.
She’s driven groundbreaking research at some of the biggest companies in the world and is uniquely placed to critique the dangers of AI bias and the companies best placed to regulate it.
In 2018, Gebru was hired by Google to co-lead its ethical AI team. In December 2020, Gebru announced she had been fired. In between, she had coauthored a paper highlighting several risks associated with large language models. After her departure, close to 2,000 Google employees and supporters signed a note defending her.
In 2021, she founded the Distributed AI Research Institute, an interdisciplinary nonprofit group working to “raise awareness about the current harms of AI systems” and promote research outside the influence of big tech.
What was DAIR founded to do? I founded DAIR about six months after I got fired from Google for writing a paper about the biases in the AI systems that they were building. The only thing I could think about building was a completely different space to do research in AI and trying to make it free from the exploitation of people, from the exploitation of resources, that I was seeing, and make it such that people around the world could give input and shape the way in which technology is built.
What did you want to achieve when you started working at Google in 2018? To make a little bit of a difference. I wasn’t naive enough to think that I could steer that big ship, but I wanted to have a space for people to safely work on understanding the harms of AI systems and try to forge a different path. That was what I was trying to do, and that’s what got me fired.
Can you describe your departure from Google? I was the co-lead of the ethical AI team. Our goal was to literally uncover the harms of AI systems and try to forge another path. So we would write research papers on these topics. We would advise other teams, we would have lots of product teams or other researchers coming to us, asking us for advice.
We wrote a paper called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Language models are the underpinnings of things like ChatGPT. So people at Google were asking us how to approach them. We decided it was better to write a paper that everybody could use, but the company didn’t like that. So that ended my tenure and my co-lead’s tenure.
How stressful was that period of your life? It was exhausting, and it was a 24/7 job, trying to get my story out, trying to protect my team. I lost at least 15 pounds in two weeks. I wasn’t eating very much. I was becoming a skeleton. There was a cover photo of me in Wired in May 2021; my mother said she didn’t like it because I was so gaunt.
I think if it happened to me right now, I don’t really know if I would be able to deal with it, because of how taxing it was to do. That’s literally why I started DAIR, because I wanted to be able to do that kind of work and provide a safe space for people to do that kind of work without having these kinds of repercussions.
Should we worry that the companies driving AI tech are also the biggest companies in the world? There was an AI Insight Forum (organized) by Sen. Chuck Schumer, and it was Mark Zuckerberg, Elon Musk and all of these billionaires. How do you have lawmakers, who are supposed to prioritize citizens’ rights, (hold) a legislative forum that is filled with CEOs of multinational corporations?
Why did you start Black in AI in 2017 and why does it remain important? I was very, very worried about the state of the field of AI. I would go to these international conferences and I would literally count five or six Black people out of 6,000 people. When I started it, I said that I didn’t want the organization to have to exist. Actually, it would be wonderful if it didn’t need to exist, because that means that those problems are solved. But obviously they’re not. So the organization continues to exist and provide a safe space for many Black practitioners and researchers to work in the field of AI.
How do you feel about the future? There’s no predetermined path, it’s not like the law of gravity waiting to be discovered by someone. Tech can have many possible futures. The one that we’re in right now is the imagination of a few really, really rich people that we handed a lot of power and resources to. I don’t want to buy into this inevitability narrative.
You’ve not been afraid to speak truth to power in your career. If you had that opportunity right now, what would you say? Unfortunately, all of the things we warned about in 2020 are coming true. The fact is that the propaganda of these big tech companies and others like OpenAI, which is a California nonprofit, and I have no idea how that’s a possibility, is being taken seriously by our lawmakers, who should prioritize citizens’ rights rather than fanboying corporations.
I’d like to take this opportunity to tell not just lawmakers, but businesses: Don’t believe the hype. This is just another bubble. Separate hype from reality and don’t pour resources into something that sounds like magic, because there is no magic. Unfortunately, AI is not magic.
Résumé: Black in AI, Google AI, Microsoft, Apple
© 2024 American City Business Journals. All rights reserved. Use of and/or registration on any portion of this site constitutes acceptance of our User Agreement (updated August 13, 2024) and Privacy Policy (updated July 3, 2024). The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of American City Business Journals.