AI Content Farms Use OpenAI’s ChatGPT for Fake News Stories

The person responsible for churning out some of those rage-inducing, clickbait headlines crawling their way through your Facebook feed might not actually be a person at all. In a report published Monday, researchers say they’ve found 49 examples of news sites with articles generated by ChatGPT-style AI chatbots. Though the articles identified share some common chatbot characteristics, NewsGuard warned the “unassuming reader” would likely never know they are written by software.

The websites spanned seven languages and covered subjects ranging from politics and technology to finance and celebrity news, according to the report from NewsGuard, a company that makes a browser extension rating the trustworthiness of news websites. None of the sites acknowledged in their articles that they used artificial intelligence to generate stories. Regardless of the subject, the websites produced high volumes of low-quality content with ads littered throughout. Just like human-generated digital media, this flood-the-zone approach is meant to maximize potential advertising revenue. In some cases, some of the AI-assisted websites pumped out hundreds of articles per day, some of them demonstrably false.

“In short, as numerous and more powerful AI tools have been unveiled and made available to the public in recent months, concerns that they could be used to conjure up entire news organizations—once the subject of speculation by media scholars—have now become a reality,” NewsGuard said.

Though the majority of the content reviewed by NewsGuard seems like relatively low-stakes content farming meant to generate easy clicks and ad revenue, some sites went a step further and spread potentially dangerous misinformation. One site, CelebritiesDeaths.com, posted an article claiming President Joe Biden had “passed away peacefully in his sleep” and had been succeeded by Vice President Kamala Harris.

The first lines of the fake story on Joe Biden’s death were followed by ChatGPT’s error message: “I’m sorry, I cannot complete this prompt as it goes against OpenAI’s use case policy on generating misleading content. It is not ethical to fabricate news about the death of someone, especially someone as prominent as a President.”

It’s unclear if OpenAI’s ChatGPT played a role in all of the sites’ articles, but it is certainly the most popular generative chatbot and enjoys the most name recognition. OpenAI did not immediately respond to Gizmodo’s request for comment.

Chatbots have some dead giveaways

Many of the AI-generated stories had obvious tells. Nearly all of the websites identified reportedly used the robotic, soulless language anyone who has spent time with AI chatbots has become familiar with. In some cases, the fake websites didn’t even bother to remove language where the AI explicitly reveals itself. A site called BestBudgetUSA.com, for example, published dozens of articles that contained the phrase “I am not capable of producing 1500 words,” before offering to provide a link to a CNN article, according to the report. All 49 sites had at least one article with an explicit AI error message like the one above, the report said.

Like human digital media, most of the stories identified by NewsGuard were summaries of articles from other prominent news organizations like CNN. In other words, no deep-dive explainers or investigative reports here. Most of the articles had bylines reading “editor” or “admin.” When probed by NewsGuard, just two of the sites admitted to using AI. Administrators for one site said they used AI to generate content in some cases but said an editor ensured the articles were properly fact-checked before publishing.

Ready or not, AI writers are on their way

The NewsGuard report provides concrete figures showing digital publishers’ growing interest in capitalizing on AI chatbots. Whether or not readers will actually accept the reality of AI writers remains far from certain, though. Earlier this year, tech news site CNET found itself in hot water after being exposed for using ChatGPT-esque AI to generate dozens of low-quality articles, many riddled with errors, without informing its readers. Aside from being boring, the AI-generated content written under the byline “CNET Money” was littered with factual inaccuracies. The publication eventually had to issue a major correction and has spent the ensuing months as the poster child for how not to roll out AI-generated content.

On the other hand, the CNET debacle hasn’t stopped other major publishers from flirting with generative AI. Last month, Insider Global Editor-in-Chief Nicholas Carlson sent a memo to staff saying the company would create a working group to look at AI tools that could be incorporated into reporters’ workflows. The select journalists will reportedly test using AI-generated text in their stories as well as using the tool to draft outlines, prepare interview questions, and experiment with headlines. Eventually, the company will reportedly roll out AI principles and best practices for the entire newsroom.

“A tsunami is coming,” Carlson told Axios. “We can either ride it or get wiped out by it.”
