Artificial intelligence is making it hard to tell truth from fiction
Experts report that AI is making it increasingly hard to trust what we see, hear or read
Earlier this year, mega-celebrity Taylor Swift (shown here) became the target of a deepfake disinformation campaign. People circulated compromising and realistic-looking AI-generated fake photos of her on social media sites. Increasingly, such AI-generated fake images, text and video can seem more convincing than the real thing.
Matt Winkelmeyer/Staff/Getty Images Entertainment
By Kathiann Kowalski
Taylor Swift has scores of newsworthy achievements, from dozens of music awards to several world records. But last January, the mega-star made headlines for something much worse and completely outside her control. She was a target of online abuse.
Someone had used artificial intelligence, or AI, to create fake nude images of Swift. These pictures flooded social media. Her fans quickly responded with calls to #ProtectTaylorSwift. But many people still saw the fake pictures.
That attack is just one example of the broad array of bogus media — including audio and visuals — that non-experts can now make easily with AI. Celebrities aren’t the only victims of such heinous attacks. Last year, for example, male classmates spread fake sexual images of girls at a New Jersey high school.
AI-made pictures, audio clips or videos that masquerade as those of real people are known as deepfakes. This type of content has been used to put words in politicians’ mouths. In January, robocalls sent out a deepfake recording of President Joe Biden’s voice. It asked people not to vote in New Hampshire’s primary election. And a deepfake video of Moldovan President Maia Sandu last December seemed to support a pro-Russian political party leader.
AI has also produced false information about science and health. In late 2023, an Australian group fighting wind energy claimed there was research showing that newly proposed wind turbines could kill 400 whales a year. They pointed to a study seemingly published in Marine Policy. But an editor of that journal said the study didn’t exist. Apparently, someone used AI to mock up a fake article that falsely appeared to come from the journal.
Many people have used AI to lie. But AI can also misinform by accident. One research team posed questions about voting to five AI models. The models wrote answers that were often wrong and misleading, the team shared in a 2023 report for AI Democracy Projects.
Inaccurate information (misinformation) and outright lies (disinformation) have been around for years. But AI is making it easier, faster and cheaper to spread unreliable claims. And although some tools exist to spot or limit AI-generated fakes, experts worry these efforts will become an arms race. AI tools will get better and better, and groups trying to stop fake news will struggle to keep up.
The stakes are high. With a slew of more convincing fake files popping up across the internet, it’s hard to know who and what to trust.
Making realistic fake photos, news stories and other content used to need a lot of time and skill. That was especially true for deepfake audio and video clips. But AI has come a long way in just the last year. Now almost anyone can use generative AI to fabricate texts, pictures, audio or video — sometimes within minutes.
A group of healthcare researchers recently showed just how easy this can be. Using tools on OpenAI’s Playground platform, two team members produced 102 blog articles in about an hour. The pieces contained more than 17,000 words of persuasive false information about vaccines and vaping.
“It was surprising to discover how easily we could create disinformation,” says Ashley Hopkins. He’s a clinical epidemiologist — or disease detective — at Flinders University in Adelaide, Australia. He and his colleagues shared these findings last November in JAMA Internal Medicine.
People don’t need to oversee every bit of AI content creation, either. Websites can churn out false or misleading “news” stories with little or no oversight. Many of these sites tell you little about who’s behind them, says McKenzie Sadeghi. She’s an editor who focuses on AI and foreign influence at NewsGuard in Washington, D.C.
By May 2023, Sadeghi’s group had identified 49 such sites. Less than a year later, that number had skyrocketed to more than 750. Many have news-sounding names, such as Daily Time Update or iBusiness Day. But their “news” may be made-up events.
This graph shows how many websites NewsGuard found producing unreliable AI-generated news. The number jumped from 49 sites in May 2023 to more than 600 by December. By March 2024, NewsGuard’s Reality Check had found more than 750.
Generative AI models produce real-looking fakes in different ways. Text-writing models are generally designed to predict which words should follow others, explains Zain Sarwar. He’s a graduate student studying computer science at the University of Chicago in Illinois. AI models learn how to do this using huge amounts of existing text.
During training, the AI tries to predict which words will follow others. Then, it gets feedback on whether the words it picked are right. In this way, the AI learns to follow complex rules about grammar, word choice and more, Sarwar says. Those rules help the model write new material when humans ask for it.
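To picture that training loop, here is a toy Python sketch (an illustration only, not how any real product works): it counts which word tends to follow which in a scrap of text, then uses those counts to continue a prompt. Real generative AI relies on huge neural networks rather than raw counts, but the predict-then-get-feedback idea is the same.

```python
# A toy illustration of next-word prediction: record which word tends to
# follow which in some training text, then use those counts to continue
# a prompt. Real models use huge neural networks, not raw counts, but the
# training signal -- predict the next word, then get feedback -- is the same.
from collections import defaultdict, Counter
import random

training_text = (
    "the model predicts the next word and the model gets feedback "
    "on whether the next word it picked was right"
).split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def continue_text(prompt_word, length=6):
    """Generate text by repeatedly picking a statistically likely next word."""
    words = [prompt_word]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))
```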
AI models that make images work in a variety of ways. Some use a type of generative adversarial network, or GAN. The network contains two systems: a generator and a detective. The generator’s task is to produce better and better realistic images. The detective then hunts for signs that something is wrong with these fake images.
“These two models are trying to fight each other,” Sarwar says. But at some point, an image from the generator will fool the detective. That believably real image becomes the model’s output.
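The sketch below, which assumes the PyTorch library, plays out that fight on the simplest possible "data": single numbers rather than images. The generator learns to produce numbers the detective can no longer tell apart from the real ones. It only illustrates the adversarial loop, not a real image model.

```python
# A minimal GAN sketch on one-dimensional "data" (numbers drawn from a bell
# curve centered on 3), assuming PyTorch is installed. The generator tries to
# make numbers the detective (discriminator) mistakes for real samples.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
detective = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detective.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data: numbers near 3
    fake = generator(torch.randn(64, 1))       # generator's attempt

    # Detective's turn: learn to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(detective(real), torch.ones(64, 1)) + \
             loss_fn(detective(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator's turn: adjust so the detective calls its fakes "real".
    g_opt.zero_grad()
    g_loss = loss_fn(detective(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# As training goes on, the generator's output should drift toward 3.
print("average fake value:", generator(torch.randn(256, 1)).mean().item())
```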
Another common way to make AI images is with a diffusion model. “It’s a forward and a backward procedure,” Sarwar says. The first part of training takes an image and adds random noise, or interference. Think about fuzzy pixels on old TVs with bad reception, he says. The model then removes layers of random noise over and over. Finally, it gets a clear image close to the original. Training does this process many times with many images. The model can then use what it learned to create new images for users.
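Here is a toy version of that forward-and-backward procedure, again assuming PyTorch. The "image" is just an 8-by-8 grid holding a white square. The forward part buries it in random static; the backward part trains a small network to predict that static so it can be subtracted out. Real diffusion models do the same thing at far larger scale, over many noise steps.

```python
# A toy diffusion sketch, assuming PyTorch. Forward step: bury a tiny 8x8
# "image" in random static. Backward step: train a small network to predict
# that static so it can be subtracted back out. Real models repeat this over
# many noise levels and far bigger images.
import torch
import torch.nn as nn

torch.manual_seed(0)
image = torch.zeros(8, 8)
image[2:6, 2:6] = 1.0                  # our "image": a white square
flat = image.flatten()

denoiser = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(3000):
    noise = torch.randn(64) * 0.5      # forward: add random "TV static"
    noisy = flat + noise
    predicted_noise = denoiser(noisy)  # backward: guess what static was added
    loss = ((predicted_noise - noise) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Clean up a fresh noisy copy; the error after cleaning should be smaller.
noisy = flat + torch.randn(64) * 0.5
cleaned = noisy - denoiser(noisy)
print("error before cleaning:", (noisy - flat).abs().mean().item())
print("error after cleaning: ", (cleaned - flat).abs().mean().item())
```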
AI models have become so good at their jobs that many people won’t recognize that the created content is fake.
AI-made content “is generally better than when humans create it,” says Todd Helmus. He’s a behavioral scientist with RAND Corporation in Washington, D.C. “Plain and simple, it looks real.”
Participants in one study rated both true and false posts made by AI (“synthetic”) as more accurate than those made by real humans (“organic”).
Green bars represent true tweets. Red bars represent false posts. The disinformation recognition score is the share of people who correctly rated posts as true or false.
In one study, people tried to judge whether tweets (now X posts) came from an AI model or real humans. People believed more of the AI models’ false posts than false posts written by humans. People also were more likely to believe the AI models’ true posts than true posts that had been written by humans.
Federico Germani and his colleagues shared these results in Science Advances last June. Germani studies disinformation at the University of Zurich in Switzerland. “The AI models we have now are really, really good at mimicking human language,” he says.
What’s more, AI models can now write with emotional language, much as people do. “So they kind of structure the information and the text in a way that is better at manipulating people,” Germani says.
People also have trouble telling fake images from real ones. A 2022 study in Vision Research showed that people could generally tell the difference between pictures of real faces and faces made with a GAN model from early 2019. But participants had trouble spotting realistic fake faces made by more advanced AI about a year later. In fact, people’s later assessments were no better than guesses.
This hints that people “often perceived the realistic artificial faces to be more authentic than the actual real faces,” says Michoel Moshel. Newer models “may be able to generate even more realistic images than the ones we used in our study,” he adds. He’s a graduate student at Macquarie University in Sydney, Australia, who worked on the research. He studies brain factors that play a role in thinking and learning.
Moshel’s team observed brain activity as people looked at images for the experiment. That activity differed when people looked at a picture of a real face versus an AI-made face. But the differences weren’t the same for each type of AI model. More research is needed to find out why.
Photos and videos used to be proof that some event happened. But with AI deepfakes floating around, that’s no longer true.
“I think the younger generation is going to learn not to just trust a photograph,” says Carl Vondrick. He’s a computer scientist at Columbia University in New York City. He spoke at a February 27 program there about the growing flood of AI content.
That lack of trust opens the door for politicians and others to deny something happened — even when non-faked video or audio shows that it had. In late 2023, for example, U.S. presidential candidate Donald Trump claimed that political foes had used AI in an ad that made him look feeble. In fact, Forbes reported, the ad appeared to show fumbles that really happened. Trump did not tell the truth.
As deepfakes become more common, experts worry about the liar’s dividend. “That dividend is that no information becomes trustworthy — [so] people don’t trust anything at all,” says Alondra Nelson. She’s a sociologist at the Institute for Advanced Study in Princeton, N.J.
The liar’s dividend makes it hard to hold public officials or others accountable for what they say or do. “Add on top of that a fairly constant sense that everything could be a deception,” Nelson says. That “is a recipe for really eroding the relationship that we need between us as individuals — and as communities and as societies.”
Lack of trust will undercut society’s sense of a shared reality, explains Ruth Mayo. She’s a psychologist at the Hebrew University of Jerusalem in Israel. Her work focuses on how people think and reason in social settings. “When we are in a distrust mindset,” she says, “we simply don’t believe anything — not even the truth.” That can hurt people’s ability to make well-informed decisions about elections, health, foreign affairs and more.
Some AI models have been built with guardrails to keep them from creating fake news, photos and videos. Rules built into a model can tell it not to do certain tasks. For example, someone might ask a model to churn out notices that claim to come from a government agency. The model should then tell the user it won’t do that.
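At its simplest, a guardrail checks a request against rules before the model produces anything. The sketch below is a crude stand-in for that idea: real systems rely on trained safety classifiers and detailed policies rather than a short keyword list, and the blocked phrases and placeholder functions here are made up for illustration.

```python
# A crude stand-in for a guardrail: check a request against a blocklist
# before generating anything, and refuse politely if it matches. Real
# guardrails use trained safety classifiers and detailed policies; this
# keyword list and the placeholder model are invented for illustration.
BLOCKED_PHRASES = [
    "pretend to be a government agency",
    "write a fake official notice",
    "impersonate",
]

def generate_reply(request):
    # Placeholder for the actual text-generating model.
    return f"(model answer to: {request!r})"

def respond(request):
    lowered = request.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help create content that impersonates officials."
    return generate_reply(request)

print(respond("Write a fake official notice from the tax agency."))
print(respond("Explain how wind turbines work."))
```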
In a recent study, Germani and his colleagues found that using polite language could make some models churn out disinformation faster. Those models learned how to respond to people using human-to-human interactions during training. And people often respond more positively when others are polite. So it’s likely that “the model has simply learned that statistically, it should operate this way,” Germani says. Wrongdoers might use that to manipulate a model to produce disinformation.
Researchers are working on ways to spot AI fakery. So far, though, there’s no surefire fix.
Sarwar was part of a team that tested several AI-detection tools. Each tool generally did a good job at spotting AI-made texts — if those texts were similar to what the tool had seen in training. The tools did not perform as well when researchers showed them texts that had been made with other AI models. The problem is that for any detection tool, “you cannot possibly train it on all possible texts,” Sarwar explains.
One AI-spotting tool did work better than others. Besides the basic steps other programs used, this one analyzed the proper nouns in a text. Proper nouns are words that name specific people, places and things. AI models sometimes mix these words up in their writing, and this helped the tool to better home in on fakes, Sarwar says. His team shared their findings on this at an IEEE conference last year.
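The snippet below is a hypothetical illustration of why proper nouns make a useful signal. It pulls capitalized names out of a suspect passage and flags any that don't appear in a trusted source on the same topic. The tool in Sarwar's study is far more sophisticated; none of this code comes from it.

```python
# A rough, hypothetical use of proper nouns as a detection signal: collect
# the capitalized names in a suspect passage and flag any that a trusted
# source on the same topic never mentions. Only a proxy -- real detectors
# combine many such features.
import re

def proper_nouns(text):
    """Return capitalized words that don't start a sentence (a rough proxy)."""
    nouns = set()
    for sentence in re.split(r"[.!?]\s+", text):
        for word in sentence.split()[1:]:      # skip the sentence-initial word
            word = word.strip(".,;:'\"")
            if word[:1].isupper() and word.isalpha():
                nouns.add(word)
    return nouns

trusted = "The study on wind turbines was reviewed by editors at Marine Policy."
suspect = "The study on wind turbines appeared in Marine Science, according to Anna Smith."

# Names that show up only in the suspect text deserve a closer look.
print("Names to double-check:", proper_nouns(suspect) - proper_nouns(trusted))
```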
But there are ways to get around those protections, said Germani at the University of Zurich.
Digital “watermarks” could also help verify real versus AI-made media. Some businesses already use logos or shading to label their photos or other materials. AI models could similarly insert labels into their outputs. That might be an obvious mark. Or it could be a subtle notation or a pattern in the computer code for text or an image. The label would then be a tip-off that AI had made these files.
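One proposed way to watermark AI text works roughly like the sketch below. The generating model quietly favors words from a secret "green list" derived from a key; anyone holding the key can later count green words and spot text that is statistically too "green" to be human-written. The hashing trick, key and numbers here are illustrative assumptions, not any company's actual method.

```python
# A simplified sketch of one proposed text-watermarking idea: during
# generation, nudge word choices toward a secret "green list" derived from a
# key. Later, anyone holding the key can count green words -- text with far
# more than the expected share is likely machine-made. The hashing scheme and
# 50% threshold below are illustrative assumptions.
import hashlib

def is_green(word, key="secret-key", fraction=0.5):
    """Assign each word to the green list pseudo-randomly, based on the key."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] < 256 * fraction

def green_share(text):
    words = [w.strip(".,!?").lower() for w in text.split() if w.strip(".,!?")]
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

# Ordinary human text should land near 50% green words by chance. A model
# that watermarks its output would prefer green synonyms, pushing the share
# much higher and making the pattern detectable.
sample = "The committee will meet on Tuesday to review the new budget."
print("share of green words:", round(green_share(sample), 2))
```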
In practice, that means there could be many, many watermarks. Some people might find ways to erase them from AI images. Others might find ways to put counterfeit AI watermarks on real content. Or people may ignore watermarks altogether.
In short, “watermarks aren’t foolproof — but labels help,” says Siddarth Srinivasan. He’s a computer scientist at Harvard University in Cambridge, Mass. He reviewed the role of watermarks in a January 2024 report.
Researchers will continue to improve tools to spot AI-produced files. Meanwhile, some people will keep working on ways to help AI evade detection. And AI will get even better at producing realistic material. “It’s an arms race,” says Helmus at RAND.
Laws can impose some limits on producing AI content. Yet there will never be a way to fully control AI, because these systems are always changing, says Nelson at the Institute for Advanced Study. She thinks it might be better to focus on policies that require AI to do only good and beneficial tasks. So, no lying.
Last October, President Biden issued an executive order on controlling AI. It said that the federal government will use existing laws to combat fraud, bias, discrimination, privacy violations and other harms from AI. The U.S. Federal Communications Commission has already used a 1991 law to ban robocalls with AI-generated voices. And the U.S. Congress, which passes new laws, is considering further action.
Education is one of the best ways to avoid being taken in by AI fakery. People have to know that we can be — and often are — targeted by fakes, Helmus says.
When you see news, images or even audio, try to take it in as if it could be true or false, suggests Mayo at the Hebrew University of Jerusalem. Then try to evaluate its reliability. She shared that advice in the April issue of Current Opinion in Psychology.
Use caution in where you look for information, too, adds Hopkins at Flinders University. “Always seek medical information from reliable health sources, such as your doctor or pharmacist.” And be careful about online sources — especially social media and AI chatbots, he adds. Check out the authors and their backgrounds. See who runs and funds websites. Always see if you can confirm the “facts” somewhere else.
Nelson hopes that today’s kids and teens will help slow AI’s spread of bogus claims. “My hope,” she says, “is that this generation will be better equipped to look at text and video images and ask the right questions.”
ad: Short for advertisement. It may appear in any medium (print, online or broadcast) and has been prepared to sell someone on a product, idea or point of view.
array: A broad and organized group of objects. Sometimes they are instruments placed in a systematic fashion to collect information in a coordinated way. Other times, an array can refer to things that are laid out or displayed in a way that can make a broad range of related things, such as colors, visible at once. The term can even apply to a range of options or choices.
artificial intelligence: A type of knowledge-based decision-making exhibited by machines or computers. The term also refers to the field of study in which scientists try to create machines or computer software capable of intelligent behavior.
audio: Having to do with sound.
bias: The tendency to hold a particular perspective or preference that favors some thing, some group or some choice. Scientists often “blind” subjects to the details of a test (don’t tell them what it is) so that their biases will not affect the results.
blog: Short for web log, these internet posts can take the form of news reports, topical discussions, opinionated rants, diaries or photo galleries.
chatbot: A computer program created to seemingly converse with human users. Modern ones (such as Siri, Alexa, Ocelot and Sprinklr) can retrieve information over the internet about news events or classroom topics. Many even work as digital assistants to answer questions about purchases, products or scheduling on behalf of stores, pharmacies or banks.
clinical: (in medicine) A term that refers to diagnoses, treatments or experiments involving people.
code: (in computing) To use special language to write or revise a program that makes a computer do something. (n.) Code also refers to each of the particular parts of that programming that instructs a computer’s operations.
colleague: Someone who works with another; a co-worker or team member.
computer science: The scientific study of the principles and use of computers. Scientists who work in this field are known as computer scientists.
Congress: The part of the U.S. federal government charged with writing laws, setting the U.S. budget, and confirming many presidential appointments to the courts, to represent the U.S. government interests overseas and to run administrative agencies. The U.S. Congress is made of two parts: the Senate, consisting of two members from each state, and the House of Representatives, which consists of a total of 435 members, with at least one from each state (and dozens more for the states with the biggest populations).
democracy: A form of government where people are ruled by the people (as opposed to a king, for example) and where decisions are made by elected officials chosen by a majority of those who are ruled.
diffusion models: (in artificial intelligence) An approach to producing realistic AI-generated images. During training, the model takes an image and adds lots of noise or interference. Then it cleans up the image to get rid of the noise. The model can then use what it learned to create new images for users. The resulting images can look very real.
digital: (in computer science and engineering) An adjective indicating that something has been developed numerically on a computer or on some other electronic device, based on a binary system (where all numbers are displayed using a series of only zeros and ones).
discrimination: (in social science) An attitude of prejudice against people or things based on a bias about one or more of their attributes (such as race, sex, religion or age). It is not based on the actions of an individual but instead based on yet-unfounded expectations that are being applied broadly to a whole group.
epidemiologist: Like health detectives, these researchers look to link a particular illness to what might have caused it and/or allowed it to spread.
factor: Something that plays a role in a particular condition or event; a contributor.
Federal Communications Commission: An independent agency of the U.S. government, which is overseen by the U.S. Congress. It regulates communications and enforces U.S. laws covering communications between U.S. states and between the United States and other nations. Those communications can be by radio, television, wire, satellite and cable.
fraud: To cheat; or the resulting effects of something done by cheating. Or to make a mistake and intentionally cover up the error.
generation: A group of individuals (in any species) born at about the same time or that are regarded as a single group. Your parents belong to one generation of your family, for example, and your grandparents to another. Similarly, you and everyone within a few years of your age across the planet are referred to as belonging to a particular generation of humans. The term also is sometimes extended to year classes of other animals or to types of inanimate objects (such as electronics or automobiles).
generative adversarial network (or GAN): An approach to producing realistic AI-generated images. One system in the network is a generator. Its job is to produce better and better realistic images in response to feedback from another part that works like a detective. The detective hunts for signs that something is wrong with the generator’s fake images. At some point an image from the generator won’t fool the detective. That becomes the model’s output.
generative AI: A class of artificial-intelligence models that use deep learning and neural networks to generate — create — texts, pictures, audio or video in response to a user’s request.
graduate student: Someone working toward an advanced degree by taking classes and performing research. This work is done after the student has already graduated from college (usually with a four-year degree).
high school: A designation for grades nine through 12 in the U.S. system of compulsory public education. High-school graduates may apply to colleges for further, advanced education.
internal medicine: A branch of medicine where doctors diagnose and treat adults for conditions that don’t need surgery. Doctors who work in this field are known as internists.
marine: Having to do with the ocean world or environment.
media: A term for the ways information is delivered and shared within a society. It encompasses not only the traditional media — newspapers, magazines, radio and television — but also digital outlets, such as Twitter, Facebook, Instagram, TikTok and WhatsApp. The newer, digital media are sometimes referred to as social media. The singular form of this term is medium.
network: A group of interconnected people or things. (v.) The act of connecting with other people who work in a given area or do similar thing (such as artists, business leaders or medical-support groups), often by going to gatherings where such people would be expected, and then chatting them up. (n. networking)
pixel: Short for picture element. A tiny area of illumination on a computer screen, or a dot on a printed page, usually placed in an array to form a digital image. Photographs are made of thousands of pixels, each of different brightness and color, and each too small to be seen unless the image is magnified.
policy: A plan, stated guidelines or agreed-upon rules of action to apply in certain specific circumstances. For instance, a school could have a policy on when to permit snow days or how many excused absences it would allow a student in a given year.
political: (n. politics) An adjective that refers to the activities of people charged with governing towns, states, nations or other groups of people. It can involve deliberations over whether to create or change laws, the setting of policies for governed communities, and attempts to resolve conflicts between people or groups that want to change rules or taxes or the interpretation of laws. The people who take on these tasks as a job (profession) are known as politicians.
psychologist: A scientist or mental-health professional who studies the mind, especially in relation to actions and behaviors. Some work with people. Others may conduct experiments with animals (usually rodents) to test how their minds respond to different stimuli and conditions.
random: Something that occurs haphazardly or without reason, based on no intention or purpose. Or an adjective that describes some thing that found itself selected for no particular reason, or even chaotically.
social media: Digital media that allow people to connect with each other (often anonymously) and to share information. Examples include Twitter, Facebook, Instagram, TikTok and WhatsApp.
sociologist: A scientist who studies the behaviors of groups of people, how those behaviors developed, and the organizations that people create to support communities (societies) of people.
subtle: Adjective for something that may be important, but can be hard to see or describe. For instance, the first cellular changes that signal the start of a cancer may be only subtly different — as in small and hard to distinguish from nearby healthy tissues.
system: A network of parts that together work to achieve some function. For instance, the blood, vessels and heart are primary components of the human body’s circulatory system. Similarly, trains, platforms, tracks, roadway signals and overpasses are among the potential components of a nation’s railway system. System can even be applied to the processes or ideas that are part of some method or ordered set of procedures for getting a task done.
tweet: Message consisting of 280 or fewer characters that is available to people having an online account with X (formerly Twitter). Before November 2017 the limit had been 140 characters.
vaccine: (v. vaccinate) A biological mixture that resembles a disease-causing agent. It is given to help the body create immunity to a particular disease. The injections used to administer most vaccines are known as vaccinations.
vaping: (v. to vape) A slang term for the use of e-cigarettes because these devices emit vapor, not smoke. People who do this are referred to as vapers.
watermark: A subtle image imprinted on paper, usually visible only when held in a particular direction or up against light. This centuries-old technology helps establish a document as genuine to thwart counterfeiting. In the 1990s, new “digital watermarks” emerged that superimpose a light image on top of a document or photo (or sound on top of an audio file) to identify the creator.
wind turbine: A wind-powered device — similar to the type used to mill grain (windmills) long ago — used to generate electricity.
Journal: R. Mayo. Trust or distrust? Neither! The right mindset for confronting disinformation. Current Opinion in Psychology. Vol. 56, April 2024. doi: 10.1016/j.copsyc.2023.101779.
Report: A. Swenson and K. Chan. Election disinformation takes a big leap with AI being used to deceive worldwide. Associated Press. March 14, 2024.
Paper: R. Vinay et al. Emotional manipulation through prompt engineering amplifies disinformation generation in AI large language models. Submitted to arXiv March 6, 2024. doi: 10.48550/arXiv.2403.03550.
Report: J. Angwin, A. Nelson and R. Palta. Seeking reliable election information? Don’t trust AI. The AI Democracy Projects. February 27, 2024.
Meeting: S. Chang et al. Panel: Empirical and Technological Questions: Current Landscape, Challenges, and Opportunities. Symposium: Generative AI, Free Speech & Public Discourse. New York, N.Y. February 20, 2024.
Report: J. Shapero. FCC targets AI-generated robocalls after Biden primary deepfake. The Hill. February 1, 2024.
Journal: B. Menz et al. Health disinformation use case highlighting the urgent need for artificial intelligence vigilance. JAMA Internal Medicine. Vol. 184, January 2024, p. 92. doi: 10.1001/jamainternmed.2023.5947.
Magazine: A. Nelson. The right way to regulate AI: Focus on its possibilities, not its perils. Foreign Affairs. January 12, 2024.
Report: S. Srinivasan. Detecting AI fingerprints: A guide to watermarking and beyond. Brookings Institution. Washington, D.C. January 4, 2024.
Magazine: M. Novak. Donald Trump falsely claims attack ad used AI to make him look bad. Forbes. December 4, 2023 (updated December 5).
Journal: G. Spitale, N. Biller-Andorno and F. Germani. AI model GPT-3 (dis)informs us better than humans. Science Advances. Vol. 9, June 28, 2023. doi: 10.1126/sciadv.adh1850.
Meeting: J. Pu et al. Deepfake text detection: Limitations and opportunities. 2023 IEEE Symposium on Security and Privacy. Vol. 2023, May 23, 2023, p. 1613. San Francisco, Calif. doi: 10.1109/SP46215.2023.10179387.
Report: T. Helmus. Artificial intelligence, deepfakes, and disinformation: A primer. RAND Corporation. 2022. doi: 10.7249/PEA1043-1.
Journal: M. Moshel et al. Are you for real? Decoding realistic AI-generated faces from neural activity. Vision Research. Vol. 199, October 2022. doi: 10.1016/j.visres.2022.108079.
Kathiann Kowalski reports on all sorts of cutting-edge science. Previously, she practiced law with a large firm. Kathi enjoys hiking, sewing and reading. She also enjoys travel, especially family adventures and beach trips.
Founded in 2003, Science News Explores is a free, award-winning online publication dedicated to providing age-appropriate science news to learners, parents and educators. The publication, as well as Science News magazine, is published by the Society for Science, a nonprofit 501(c)(3) membership organization dedicated to public engagement in scientific research and education.
© Society for Science & the Public 2000–2024. All rights reserved.