Anthrow Circus

AI Generates Fresh Challenges for Journalists

STORY BY ERIC WISHART

The launch of generative artificial intelligence tools capable of producing high-quality text and images has triggered much discussion about their impact on the future of journalism.

AI was seen for many years as a useful tool that spared journalists from the drudgery of mundane tasks such as typing out company results and transcribing interviews.

That changed with the arrival of generative AI tools, such as ChatGPT, which seemed to pose an existential threat to the journalism profession.

Who needs reporters when you can upload the minutes of a town council meeting and ask ChatGPT to produce a news story focusing on a particular angle of community interest? Who needs photojournalists when a text-to-image generator can produce photorealistic images to illustrate a story?

Although generative AI has been a major discussion topic in journalism since it started to be widely used in early 2023, the use of automation in news gathering and publication is not new.

The Associated Press started generating automatic quarterly corporate earnings reports in 2014. Thanks to automation, the agency said it increased the volume of its quarterly earnings reports tenfold to 3,000, even as it freed journalists to do more original reporting.

However, the automated production of business or sports results from clean data is quite different from publishing news reports using AI tools. At a time when trust in the media has eroded, news consumers should be confident that what they are reading and seeing is the work of professional journalists and not third-party AI tools.

Newsrooms have long used narrow AI for specific tasks, such as speech-to-text transcription and automatic translation. The game changer was the arrival of generative artificial intelligence tools that mimic human creativity and produce high-quality text, images, and audio based on text prompts.

As AI usage has grown, media organizations have quickly drawn up guidelines for its use. For example, the Agence France-Presse news agency published detailed internal guidelines on the use of AI and added a new AI section to its code of ethics in the first major update of the document since it was published in 2016.

AFP does not directly publish generative AI content, including text stories, images, graphics, videos, or audio, but its journalists can use GenAI to help research and prepare stories. AI tools can be used for tasks such as carrying out research, suggesting interview questions or story ideas, proposing headlines, and summarizing texts. They are also useful for providing quick translations and voice-to-text transcription.

However, the AFP guidelines warn the agency’s journalists that the technology is likely to produce “inaccurate, biased, stereotypical and dated results.” The guidelines stress the importance of checking everything that is produced by GenAI tools, which are notorious for “hallucinating,” the industry term for when they produce erroneous or nonsensical results.

The AFP guidelines are similar to those at media organizations large and small around the world, including the Associated Press, which added new AI terminology and information about generative AI to its 2023 Stylebook.

The AFP guidelines warn against “anthropomorphism,” attributing human characteristics to chatbots or other AI creations such as avatars. Journalists should not be lured into treating chatbots like sentient beings because of their human-like responses to prompts. Journalists can illustrate a story by quoting a chatbot response to a prompt but should not describe the result as an “opinion,” which implies the AI tool has the capacity to reason like a human being.

There are important security and legal considerations to be taken into account when using generative AI. For example, there is no guarantee of confidentiality, so journalists should not include private or sensitive information in their prompts. OpenAI says that user data is used to improve its products: “We may use the user’s prompts, the model’s responses, and other content such as images and files to improve model performance.” Journalists also should not upload sensitive interviews to online transcription software tools and put themselves and their interviewees at potential risk.

When it comes to intellectual property rights, the lack of transparency about what data is used to train large language models means that copyright-protected content can be recycled in the answers to prompts. This has set the stage for a battle between content creators and generative AI giants such as OpenAI, whose GPTBot crawls web pages for content that it can use to train its LLMs.

The way these models recycle original content has led to lawsuits from major media, including the Wall Street Journal, which is suing Perplexity AI for “massive copyright infringement and trademark violations.” However, other media, including the Financial Times and The Atlantic, have signed licensing deals with AI companies.

When it comes to images, AI tools are being used to create photorealistic images that illustrate news events and are also employed by bad actors to spread disinformation, conspiracy theories, hoaxes, and propaganda.

After AI-generated images of the conflict in Gaza appeared on its site, Adobe Stock, which sells illustrations for commercial use, said in a blog post that it was updating its submission policies “to prohibit contributors from submitting generative AI content with titles that imply that it is depicting an actual newsworthy event.”

The post added: “Stock content should always be clearly marked when used in editorial content to help ensure people are not misled into thinking a real event is being depicted by the stock content.”

Generative AI tools are prone to bias, and a report by Bloomberg News revealed just how much they amplify stereotypes and prejudices within society.

The report, headlined “Humans are biased. Generative AI is even worse,” analyzed the racial and gender breakdown of thousands of images created by the text-to-image generator Stable Diffusion based on a variety of prompts.

“The world according to Stable Diffusion is run by white male CEOs,” Bloomberg concluded. “Women are rarely doctors, lawyers, or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.”

Sensational headlines that greeted the arrival of generative AI asked if it could mean the end of journalism as we know it. While AI tools can streamline newsroom workflows and help journalists work more efficiently, direct human involvement and oversight are still essential for the production of trustworthy content. And, as the conflicts in Gaza and Ukraine have shown, there is still no substitute for having events covered by journalists on the ground.


Eric Wishart is the standards and ethics editor of Agence France-Presse, the international news agency. He is also the author of Journalism Ethics: 21 Essentials from Wars to Artificial Intelligence.

Wishart teaches principles of journalism at Hong Kong University and is the external examiner for Cardiff University’s School of Journalism. He is a member of the ethics committee of the Society of Professional Journalists in the United States and a member of the Organisation of News Ombuds and Standards Editors.

A native of Glasgow, Scotland, Wishart began his career in Scottish newspapers before joining AFP Paris in 1984. After postings in the Middle East and Asia, he became AFP’s first non-French global editor-in-chief in 1999.
