Who owns your voice? Scarlett Johansson OpenAI complaint raises questions – Nature.com
Scarlett Johansson has said she believes the OpenAI chatbot voice was intended to imitate her. Credit: Samir Hussein/WireImage via Getty
A kerfuffle erupted last week after actor Scarlett Johansson complained that one of OpenAI’s chatbot voices sounded a lot like her. It isn’t hers — the company created it using recordings from someone else. Nevertheless, the firm has suspended the voice out of respect for Johansson’s concerns. But the media flurry has cracked open a broader discussion about people’s rights to their own personas. In the age of generative artificial intelligence (genAI), are existing laws sufficient to protect the use of a person’s appearance and voice?
The answer isn’t always clear, says Carys Craig, an intellectual-property scholar at York University in Toronto, Canada, who will be speaking on this topic next month during a Canadian Bar Association webcast.
Several members of the US Congress have, in the past year, called for a federal law to enshrine such protections at the national level. And some legal scholars say that action is needed to improve privacy rights in the United States. But they also caution that hastily written laws might infringe on freedom of speech or create other problems. “It’s complicated,” says Meredith Rose, a legal analyst at the non-profit consumer-advocacy group Public Knowledge in Washington DC. “There’s a lot that can go wrong.”
“Rushing to regulate this might be a mistake,” Craig says.
GenAI can be used to easily clone voices or faces to create deepfakes, in which a person’s likeness is imitated digitally. People have made deepfakes for fun and to promote education or research. However, they’ve also been used to sow disinformation, attempt to sway elections, create non-consensual sexual imagery or scam people out of money.
Many countries have laws that prevent these kinds of harmful and nefarious activities, regardless of whether they involve AI, Craig says. But when it comes to specifically protecting a persona, existing laws might or might not be sufficient.
Copyright does not apply, says Craig, because it was designed to protect specific works. “From an intellectual-property perspective, the answer to whether we have rights over our voice, for example, is no,” she says. Most discussions about copyright and AI focus instead on whether and how copyrighted material can be used to train the technology, and whether new material that it produces can be copyrighted.
Aside from copyright laws, some regions, including some US states, have ‘publicity rights’ that allow an individual to control the commercial use of their image, to protect celebrities against financial loss. For example, in 1988, long before AI entered the scene, singer and actor Bette Midler won a ‘voice appropriation’ case against the Ford Motor Company, which had used a sound-alike singer to cover one of her songs in a commercial. And in 1992, game-show host Vanna White won a case against the US division of Samsung when it put a robot dressed as her in a commercial.
“We have a case about a person who won against a literal robot already,” says Rose. With AI entering the arena, she says, cases will become “increasingly bananas”.
Much remains to be tested in court. The rapper Drake, for example, last month released a song featuring AI-generated voice clips of the late rapper Tupac Shakur. Drake removed the song from streaming services after receiving a cease-and-desist letter from Shakur’s estate. But it’s unclear, says Craig, whether the song’s AI component was unlawful. In Tennessee, a law passed this year, called the Ensuring Likeness Voice and Image Security (ELVIS) Act, seeks to protect voice actors at all levels of fame from “the unfair exploitation of their voices”, including the use of AI clones.
In the United States, actors have some contractual protection against AI — the agreement that in December ended the Hollywood strike of the Screen Actors Guild-American Federation of Television and Radio Artists included provisions to stop filmmakers from using a digital replica of an actor without explicit consent from the individual in each case.
Meanwhile, individual tech companies have their own policies to help prevent genAI misuse. For example, OpenAI, based in San Francisco, California, has not released to the general public the voice-cloning software that was used to make its chatbot voices, acknowledging that “generating speech that resembles people’s voices has serious risks”. Usage policies for partners testing the technology “prohibit the impersonation of another individual or organization without consent or legal right”.
Others are pursuing technological approaches to stemming misuse: last month, the US Federal Trade Commission announced the winners of its challenge to “protect consumers from the misuse of artificial intelligence-enabled voice cloning for fraud and other harms”. These include ways to watermark real audio at the time of recording and tools for detecting genAI-produced audio.
More worrying than loss of income for actors, say Rose and Craig, is the use of AI to clone people’s likenesses for uses including non-consensual pornography. “We have very spare, inadequate laws about non-consensual imagery in the first place, let alone with AI,” says Rose. The fact that deepfake porn is now easy to generate, including with minors’ likenesses, should be serious cause for alarm, she adds. Some legal scholars, including Danielle Citron at the University of Virginia in Charlottesville, are advocating for legal reforms that would recognize ‘intimate privacy’ as a US civil right — comparable to the right to vote or the right to a fair trial.
Current publicity-rights laws aren’t well suited to covering non-famous people, Rose says. “Right to publicity is built around recognizable, distinctive people in commercial applications,” she says. “That makes sense for Scarlett Johansson, but not for a 16-year-old girl being used in non-consensual imagery.”
However, proposals to extend publicity rights to private individuals in the United States might have unintended consequences, says Rose. She has written to the US Congress expressing concern that some of the proposed legislation could allow misuse by powerful companies. A smartphone app for creating novelty photos, for example, could insert a provision into its terms of service that “grants the app an unrestricted, irrevocable license to make use of the user’s likeness”.
There’s also a doppelganger problem, says Rose: an image or voice of a person randomly generated by AI is bound to look and sound like at least one real person, who might then seek compensation.
Laws designed to protect people can run the risk of going too far and threatening free speech. “When you have rights that are too expansive, you limit free expression,” Craig says. “The limits on what we allow copyright owners to control are there for a reason; to allow people to be inspired and create new things and contribute to the cultural conversation,” she says. Parody and other works that build on and transform an original often fall into the sphere of lawful fair use, as they should, she says. “An overly tight version [of these laws] would annihilate parody,” says Rose.
doi: https://doi.org/10.1038/d41586-024-01578-4
© 2024 Springer Nature Limited