AI for news: How it’s being used, common challenges, and solutions

“As knowledge workers, we are all becoming information officers,” said computational journalist Francesco Marconi. “The reality is that AI and data will become a crucial tool in our functions as we go forward.”

Marconi discussed how artificial intelligence is changing media at a program on March 26 hosted by the National Press Club Journalism Institute in partnership with the National Press Club’s communicators team. 

The practice of journalism originated to solve a problem of information scarcity and to make the dissemination of data more efficient, Marconi said.

“Now we have reached an inflection point. There’s too much information. And the role of the journalist, the role of the communicator, shifts from being the gatherer of new information to being more of a filter and contextualizer of bits of data.”

Enter editorial algorithms. 

These systems are coded with journalistic principles and work to filter out irrelevant information and find newsworthy events. 
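
As an illustration, here is a minimal sketch of how such a filter might score and rank incoming items. The signals, weights, and threshold are hypothetical assumptions for the sake of the example, not any newsroom's actual system:

```python
# A toy "editorial algorithm": scores incoming items against a few
# newsworthiness signals and filters out everything below a threshold.
# The signals, weights, and threshold here are illustrative assumptions.

NEWSWORTHY_SIGNALS = {
    "unexpected": 3.0,   # deviation from a historical baseline
    "impact": 2.5,       # how many people are affected
    "timeliness": 2.0,   # how recent the event is
    "prominence": 1.5,   # involvement of public figures or institutions
}

def newsworthiness(item):
    """Weighted sum of the signals present on an item (each 0.0-1.0)."""
    return sum(weight * item.get(signal, 0.0)
               for signal, weight in NEWSWORTHY_SIGNALS.items())

def filter_feed(items, threshold=4.0):
    """Keep only items that clear the threshold, most newsworthy first."""
    kept = [item for item in items if newsworthiness(item) >= threshold]
    return sorted(kept, key=newsworthiness, reverse=True)

feed = [
    {"headline": "City budget passes unchanged", "timeliness": 0.9, "impact": 0.3},
    {"headline": "Dam sensor readings spike overnight",
     "unexpected": 0.9, "impact": 0.8, "timeliness": 1.0},
]
for item in filter_feed(feed):
    print(item["headline"])
```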

AI and opportunities for newsrooms

The implementation of AI technologies can create new jobs and responsibilities for journalists:

  • Automation editors plan how workflows can be augmented through AI and ensure its editorial reliability.
  • Computational journalists apply computer and data science methods to develop editorial algorithms. 
  • Newsroom tool managers coordinate the implementation of new platforms and train journalists how to deploy them. 
  • AI ethics editors ensure transparency and explainability of algorithms as well as their use of training data.

Common AI pitfalls (and solutions)

Algorithms used in journalism must continually be audited. 

“The same way journalists ask questions to human sources, they should be able to ask questions to algorithms, to understand their inner workings,” Marconi said. “This allows for the creation of reliable editorial algorithms.” 

Solution: Human journalists play a critical role at each step of the data-to-text automation process, providing editorial oversight.
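
To make the data-to-text idea concrete, here is a minimal sketch of template-based story generation with a human checkpoint built in. The template, field names, and 10% review rule are illustrative assumptions:

```python
def draft_earnings_brief(company, revenue, consensus):
    """Fill a hypothetical template from structured data and flag
    surprising results for human review before publication."""
    direction = "beat" if revenue > consensus else "missed"
    draft = (f"{company} reported revenue of ${revenue:,.0f}M, "
             f"which {direction} analyst expectations of ${consensus:,.0f}M.")
    # Editorial oversight hook: route big surprises to a journalist
    # instead of publishing automatically.
    needs_review = abs(revenue - consensus) / consensus > 0.10
    return draft, needs_review

draft, needs_review = draft_earnings_brief("ExampleCorp", 1250, 1000)
print(draft)
print("Route to human editor" if needs_review else "Auto-publish")
```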

Algorithms can make mistakes — especially when it comes to diversity, equity and representation.

Data from the Bureau of Labor Statistics show that the professionals who write AI programs are still largely white and male. 

Researchers at the MIT Media Lab looked at biases and discovered that computer vision algorithms showed an error rate of 35% for darker-skinned women, compared with just a 1% error rate for lighter-skinned men, Marconi said.

“In this instance, the AI systems — or the facial recognition systems — that were tested had been trained with data that over-indexed for white males,” he said. “What happens is that the machines propagate the biases of the data. And, if you think about it, that data will also propagate the societal biases that exist. So that’s why algorithmic auditing and issues related to fairness and transparency are so important.”

Solution: Fostering a diverse AI team can help mitigate unwanted AI biases and ensure fairness.

A checklist for addressing AI bias:

  • Establish regular audits to check for biases in the datasets (a starter sketch follows this list).
  • Determine how to include humans in the loop throughout the AI system.
  • Implement transparency standards for AI in the same way newsrooms have editorial standards.
  • Follow a clear strategy for bringing diverse perspectives to AI teams through training, hiring, and partnerships.
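
As a starting point for the first item on that checklist, here is a minimal sketch of a per-group error-rate audit in the spirit of the MIT Media Lab study. The field names and the 5-point gap threshold are assumptions for illustration:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: dicts with hypothetical fields 'group', 'label'
    (ground truth) and 'prediction'. Returns {group: error rate}."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def audit(records, max_gap=0.05):
    """Print per-group error rates and flag gaps above max_gap."""
    rates = error_rates_by_group(records)
    for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"{group}: {rate:.1%} error rate")
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"FLAG: {gap:.1%} error-rate gap between groups")
```

Applied to results like those Marconi cited (35% versus 1%), an audit of this kind would flag a 34-point gap immediately.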

Understand the prevalence of “deep fakes.”

Deep fakes are videos, images, text or audio files generated or altered with the help of artificial intelligence to deceive an audience into thinking they are real.

“The reality is that fake news will always exist, and so deep fakes are just another variation of fake news,” said Marconi. “The best way of addressing this is to follow the journalistic principles and the standards that have been followed for hundreds of years.” 

Solution: Learn four approaches to detect deep fakes:

  • Lack of blinking. People shown in deep fakes often don’t close their eyes. 
  • Pulse signal. Researchers at MIT developed a technique to detect the pulse of a person by color amplification, which is often missing in deep fakes.
  • Image blurriness. If you look at still frames of a deep fake, you can spot artifacts such as flickering and blurriness around the mouth or face, or discrepancies among the face, body type and skin color.
  • Machine learning. Algorithms can be trained to detect fake imagery and provide a confidence score on whether the footage is real or fake (see the sketch after this list).
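
For the machine-learning approach, here is a minimal sketch of the train-then-score workflow using scikit-learn. The synthetic feature vectors are stand-ins for artifacts a real detector would extract from video frames:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features: in a real system these would be measurements
# extracted from frames (blink statistics, blur around the mouth,
# pulse signal strength). Here: synthetic 3-D vectors.
real = rng.normal(loc=0.0, scale=1.0, size=(200, 3))
fake = rng.normal(loc=1.0, scale=1.0, size=(200, 3))

X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)   # 0 = real, 1 = fake

clf = LogisticRegression().fit(X, y)

# The classifier outputs a probability, which serves as the
# confidence score mentioned above.
suspect = rng.normal(loc=0.9, scale=1.0, size=(1, 3))
confidence_fake = clf.predict_proba(suspect)[0, 1]
print(f"Probability footage is fake: {confidence_fake:.0%}")
```

A production detector would replace the synthetic features with frame-level artifacts and likely a deep network, but the output is the same: a probability that doubles as the confidence score.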

AI can also perpetuate information bubbles. 

With AI personalization, there is a risk of creating information bubbles if readers are only shown the views of the world they already agree with, Marconi pointed out.

Solution: Utilize existing tools to help understand partisan media consumption and social media influence. Examples include The Markup’s Citizen Browser Project and Ground News’ Blindspotter (read our Q&A with Ground News CEO Harleen Kaur).
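
For newsrooms that want a homegrown diagnostic along the same lines, here is a minimal sketch of measuring the partisan spread of a reader's diet. The source-to-lean mapping is a hypothetical stand-in for the much richer ratings that tools like Blindspotter maintain:

```python
from collections import Counter

# Hypothetical mapping of outlets to coarse political lean;
# a real audit would use a maintained media-bias dataset.
SOURCE_LEAN = {
    "outlet_a": "left",
    "outlet_b": "center",
    "outlet_c": "right",
}

def lean_breakdown(articles_read):
    """Share of a reader's diet by lean; unknown outlets are skipped."""
    leans = [SOURCE_LEAN[a] for a in articles_read if a in SOURCE_LEAN]
    counts = Counter(leans)
    total = sum(counts.values())
    return {lean: n / total for lean, n in counts.items()}

history = ["outlet_a", "outlet_a", "outlet_a", "outlet_b"]
print(lean_breakdown(history))  # {'left': 0.75, 'center': 0.25}
```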

Other key takeaways:

  • Technology changes, journalistic standards don’t.
  • AI is susceptible to the same biases as humans.
  • Journalists can best leverage AI once they experiment with the technology.
  • There are ethical considerations inherent in journalism’s use of AI.

