‘A.I. Obama’ and Fake Newscasters: How A.I. Audio Is Swarming TikTok

In a slickly produced TikTok video, former President Barack Obama — or a voice eerily like his — can be heard defending himself against an explosive new conspiracy theory about the sudden death of his former chef.

“While I cannot comprehend the basis of the allegations made against me,” the voice says, “I urge everyone to remember the importance of unity, understanding and not rushing to judgments.”

In fact, the voice did not belong to the former president. It was a convincing fake, generated by artificial intelligence using sophisticated new tools that can clone real voices to create A.I. puppets with a few clicks of a mouse.

The technology used to create A.I. voices has gained traction and wide acclaim since companies like ElevenLabs released a slate of new tools late last year. Since then, audio fakes have rapidly become a new weapon on the online misinformation battlefield, threatening to turbocharge political disinformation ahead of the 2024 election by giving creators a way to put their conspiracy theories into the mouths of celebrities, newscasters and politicians.

The fake audio adds to the A.I.-generated threats from “deepfake” videos, humanlike writing from ChatGPT and images from services like Midjourney.

Disinformation watchdogs have noticed that the number of videos containing A.I. voices has increased as content producers and misinformation peddlers adopt the novel tools. Social platforms like TikTok are scrambling to flag and label such content.

The video that sounded like Mr. Obama was discovered by NewsGuard, a company that monitors online misinformation. It was published by one of 17 TikTok accounts that NewsGuard identified as pushing baseless claims with fake audio, according to a report the group released in September. The accounts mostly published videos about celebrity rumors using narration from an A.I. voice, but also promoted the baseless claim that Mr. Obama is gay and the conspiracy theory that Oprah Winfrey is involved in the slave trade. The channels had collectively received hundreds of millions of views, and comments on the videos suggested that some viewers believed the claims.

While the channels had no obvious political agenda, NewsGuard said, the use of A.I. voices to share mostly salacious gossip and rumors offered a road map for bad actors wanting to manipulate public opinion and spread falsehoods to mass audiences online.

“It’s a way for these accounts to gain a foothold, to gain a following that can draw engagement from a wide audience,” said Jack Brewster, the enterprise editor at NewsGuard. “Once they have the credibility of having a large following, they can dip their toe into more conspiratorial content.”

TikTok requires that realistic A.I.-generated content be labeled as fake, but no such labels appeared on the videos flagged by NewsGuard. TikTok said it had removed or stopped recommending several of the accounts and videos for violating policies against posing as news organizations and spreading harmful misinformation. It also removed the video with the A.I.-generated voice mimicking Mr. Obama’s for violating TikTok’s synthetic media policy, because it contained highly realistic content that was not labeled as altered or fake.

“TikTok is the first platform to provide a tool for creators to label A.I.-generated content and an inaugural member of a new code of industry best practices promoting the responsible use of synthetic media,” said Jamie Favazza, a spokeswoman for TikTok, referring to a recently introduced framework from the nonprofit Partnership on A.I.

Although NewsGuard’s report focused on TikTok, which has increasingly become a source of news, similar content was found spreading on YouTube, Instagram and Facebook.

Platforms like TikTok allow A.I.-generated content featuring public figures, including newscasters, so long as it does not spread misinformation. Parody videos showing A.I.-generated conversations between politicians, celebrities or business leaders — some of them dead — have spread widely since the tools became popular. Manipulated audio adds a new layer to deceptive videos on platforms that have already featured fake versions of Tom Cruise, Elon Musk and newscasters like Gayle King and Norah O’Donnell. TikTok and other platforms have lately been grappling with a spate of misleading ads featuring deepfakes of celebrities like Mr. Cruise and the YouTube star MrBeast.

The power of these technologies could profoundly sway viewers. “We do know audio and video are perhaps more sticky in our memories than text,” said Claire Leibowicz, head of A.I. and media integrity at the Partnership on A.I., which has worked with technology and media companies on a set of recommendations for creating, sharing and distributing A.I.-generated content.

TikTok said last month that it was introducing a label that users could select to show whether their videos used A.I. In April, the app started requiring users to disclose manipulated media showing realistic scenes and prohibiting deepfakes of young people and private figures. David G. Rand, a professor of management science at the Massachusetts Institute of Technology whom TikTok consulted for advice on how to word the new labels, said the labels were of limited use when it came to misinformation because “the people who are trying to be deceptive are not going to put the label on their stuff.”

TikTok also said last month that it was testing automated tools to detect and label A.I.-generated media, which Mr. Rand said would be more helpful, at least in the short term.

YouTube bans political ads from using A.I. and requires other advertisers to label their ads when A.I. is used. Meta, which owns Facebook, in 2020 added a label to its fact-checking tool kit that describes whether a video is “altered.” And X, formerly known as Twitter, requires misleading content to be “significantly and deceptively altered, manipulated or fabricated” to violate its policies. The company did not respond to requests for comment.

Mr. Obama’s A.I. voice was created using tools from ElevenLabs, a company that burst onto the international stage late last year with its free-to-use A.I. text-to-speech tool capable of producing lifelike audio in seconds. The tool also allowed users to upload recordings of someone’s voice and produce a digital copy.

After the tool was released, users on 4chan, the right-wing message board, organized to create fake audio of the actor Emma Watson reading an anti-Semitic screed.

ElevenLabs, a 27-employee company based in New York City, responded to the misuse by limiting its voice-cloning feature to paid users. The company also released a detection tool capable of identifying A.I. content produced by its services.

“Over 99 percent of users on our platform are creating interesting, innovative, useful content,” a representative for ElevenLabs said in an emailed statement, “but we recognize that there are instances of misuse, and we’ve been continually developing and releasing safeguards to curb them.”

In tests by The New York Times, ElevenLabs’ detector successfully identified audio from the TikTok accounts as A.I.-generated. But the tool failed when music was added to the clip or when the audio was distorted, suggesting that misinformation peddlers could easily elude detection.

A.I. companies and academics have explored other methods of identifying fake audio, with mixed results. Some companies have explored adding an invisible watermark to A.I. audio, embedding a signal that marks it as A.I.-generated. Others have pushed A.I. companies to limit the voices that can be cloned, potentially banning replicas of politicians like Mr. Obama — a practice already in place with some image-generation tools like DALL-E, which refuses to generate some political imagery.
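The watermarking idea can be illustrated in a few lines of code. The sketch below is a simplified, hypothetical example of one common academic approach, spread-spectrum watermarking, and is not the scheme used by any company named in this article: the generator mixes a faint pseudorandom signal into the waveform, and a detector that knows the secret seed checks for that signal by correlation.

```python
# A minimal sketch of spread-spectrum audio watermarking, assuming NumPy.
# The seed, strength and threshold are illustrative, not from any real
# product; a production scheme would also need to survive compression,
# added music and other distortions, which this toy version does not.
import numpy as np

SEED = 1234  # shared secret between the audio generator and the detector

def embed_watermark(audio: np.ndarray) -> np.ndarray:
    """Mix a faint pseudorandom signal into the waveform."""
    rng = np.random.default_rng(SEED)
    mark = rng.standard_normal(audio.shape)
    strength = 0.05 * np.std(audio)  # keep the mark ~26 dB below the signal
    return audio + strength * mark

def detect_watermark(audio: np.ndarray, threshold: float = 4.0) -> bool:
    """Correlate the audio against the known pseudorandom signal.

    For unmarked audio the score is roughly standard normal, so a
    threshold of 4 gives a very low false-positive rate.
    """
    rng = np.random.default_rng(SEED)
    mark = rng.standard_normal(audio.shape)
    z = float(np.dot(audio, mark) / np.linalg.norm(audio))
    return z > threshold

# Demo on two seconds of synthetic "speech" (a 440 Hz tone at 16 kHz).
sr = 16_000
t = np.arange(2 * sr) / sr
voice = 0.1 * np.sin(2 * np.pi * 440 * t)

print(detect_watermark(embed_watermark(voice)))  # True: watermark found
print(detect_watermark(voice))                   # False: no watermark
```

Notably, a simple correlation check like this degrades when music is mixed in or the audio is otherwise distorted, a weakness analogous to the one The Times observed in ElevenLabs’ detector.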

Ms. Leibowicz at the Partnership on A.I. said synthetic audio was uniquely challenging to flag for listeners compared with visual alterations.

“If we were a podcast, would you need a label every five seconds?” Ms. Leibowicz said. “How do you have a signal in some long piece of audio that’s consistent?”

Even if platforms adopt A.I. detectors, the technology must constantly improve to keep up with advances in A.I. generation.

TikTok said it was building new detection methods in-house and exploring options for outside partnerships.

“Big tech companies, multibillion-dollar or even trillion-dollar companies — they are unable to do it? That’s kind of surprising to me,” said Hafiz Malik, a professor at the University of Michigan-Dearborn who is developing A.I. audio detectors. “If they intentionally don’t want to do it? That’s understandable. But they cannot do it? I don’t accept it.”

Audio produced by Adrienne Hurst.
