Can you trust news generated using Artificial Intelligence?

16-Feb-2024, updated 17-Feb-2024

Businesses are increasingly using artificial intelligence (AI) to create media content, including news, in order to engage their customers. We are now seeing AI used for the "gamification" of news, that is, adding interactivity to news content.

For better or worse, artificial intelligence is altering the landscape of journalism. And if we want to safeguard this institution's integrity, we must wise up.

When you see the headlines on your favorite news app every morning, do you ever consider who — or what — authored the story? The presumption is that human beings are performing the task. However, it is plausible that an algorithm authored it. Artificial intelligence can generate text, graphics, and sounds with little to no human participation.

For example, the neural network known as Generative Pre-trained Transformer 3 (GPT-3) can generate writing that is nearly indistinguishable from human-written language, whether a fictitious novel, a poem, or even computer code.
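
To make this concrete, here is a minimal sketch of machine-generated text using the openly available GPT-2 model (a smaller predecessor of GPT-3) through the Hugging Face transformers library. The prompt and sampling settings are illustrative assumptions, not the setup of any newsroom mentioned here.

```python
# A minimal sketch: generating news-like text with GPT-2, an openly
# available predecessor of GPT-3, via the Hugging Face transformers library.
# The prompt and sampling settings are illustrative assumptions only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In business news today, analysts reported that"
outputs = generator(
    prompt,
    max_length=60,          # cap on combined prompt + generated tokens
    do_sample=True,         # sample rather than always pick the likeliest token
    num_return_sequences=1, # produce a single continuation
)
print(outputs[0]["generated_text"])
```

Even this small model produces fluent, plausible-sounding sentences. That fluency, rather than accuracy, is precisely what makes undisclosed AI authorship so hard to spot.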

Major media outlets, including The Washington Post, The New York Times, and Forbes, have automated news production using generative AI: algorithms that produce textual content automatically.

With significant breakthroughs in machine learning and natural language processing, text written by a person and text created by powerful neural networks such as GPT-3 can be indistinguishable, even in fields that are fundamentally humanistic, such as poetry.

As we rely more on AI-generated information in everyday situations, the question of trust becomes increasingly crucial.

Recent research has investigated whether consumers accept AI-generated news reports and whether they trust AI-generated medical diagnoses.

Researchers discovered that most individuals are skeptical of AI. A machine can produce an accurate, fact-filled article, yet readers will still question its validity. And while software can provide a more accurate medical analysis than a human, patients are more inclined to follow their (human) doctor's advice.

Mistakes: AI vs. Humans

The study concluded that when AI makes a mistake, people are more inclined to distrust it than they would a person. When a reporter commits an error, the reader is unlikely to conclude that all reporters are untrustworthy; everyone makes errors. But when AI makes a mistake, we are more prone to question the entire technology. Humans can make mistakes and be forgiven for them. Machines, it seems, cannot.

AI content is rarely identified as such; it is uncommon for a news outlet to note in its byline that a story was created by an algorithm. Yet AI-generated content can be skewed or exploited, and ethicists and legislators have pushed firms to disclose its use transparently. If disclosure regulations are implemented, future headlines may carry a byline identifying AI as the reporter.

The study also investigated how disclosing the use of AI in news creation influences public perceptions of its veracity. The data clearly supported the AI-aversion hypothesis: disclosing the use of AI led people to believe news items significantly less, which may be explained by decreased trust in AI reporters.
