
ANALYSIS: AI And Africa’s ‘Counterfeit’ Journalists

AI technology is ‘shooting the messenger’ by impersonating professional journalists to gain credibility for false narratives.

Not long ago, journalism tools comprised a notebook, a typewriter and possibly some coins for a telephone box to phone copy into the news desk. Then computers, email and cell phones brought speed, connectivity and the potential to demand greater accountability of the world’s leaders, as the media took advantage of improved communications technologies.

After the 1993 Black Hawk Down incident in Somalia, which was broadcast around the world and shaped the future of counter-terrorism operations, the 2003 Iraq war marked a new era of 24/7 news broadcasting.

Round-the-clock coverage enabled newsrooms to expose global events in real time. The Broadband Global Area Network satellite system relayed quality images and audio at speed. It was later superseded by systems such as LiveU, now an industry standard that combines multiple cellphone connections to deliver broadcast news.

Each new iteration of technology has left the basic principles of news journalism intact. Key to the profession is the ability to hold power to account as part of the constitutionally guaranteed right to a free press.

However, a worrying trend is the abuse of emerging technologies to pollute the information environment. This can be done by distorting what we see or read, manipulating how information is delivered, or impersonating the messenger. That is, mimicking journalists, who traditionally have been considered a credible source for fact-based information.

The trend of AI-driven avatar journalists impersonating investigative reporters across Africa is captured in a recent Konrad Adenauer Foundation report.

“Democracy relies on pluralism, that is, many opinions that lead to societal and political decisions,” argues Hendrik Sittig, director of the organisation’s Media Programme for Sub-Saharan Africa. “However, the information by which we form our opinions must be … fact-based and true … anything else could have tragic and devastating consequences.”

The report identifies how – as well as AI-generated deepfake attacks on journalists and influencers – AI avatars are being created to impersonate journalists and deliver narratives at scale and speed. This is part of information operations, or foreign information manipulation and interference (FIMI), which is used as a tool of geopolitical influence.

An investigation highlighted by the African Digital Democracy Observatory found that Israeli cybersecurity company Percepto International used such techniques to create a fake French-Ghanaian investigative journalist with her own social media profile and website. She was used to insert material into African mass media and was designed to “smear local politicians and international organisations in Africa with fabricated revelations.”

Creating a fake investigative journalist arguably lends credibility to a story while trampling on the code of practice that underpins professional journalism. The investigation found multiple examples of such avatar journalists.

This tactic appears to mimic the earlier use of fake personae in prominent political advertising campaigns in Burkina Faso, where real international actors appeared in videos lending support to the country’s September 2022 coup leaders. The firm Graphika observed a similar technique used to distribute pro-Chinese Communist Party propaganda via fake websites populated by avatars.

While the technology behind AI avatar creation is designed mainly for training and marketing, malign actors use it to undermine public trust. The technology is not at fault – instead, the problem is the absence of guardrails surrounding its use.

More recently, an Al Jazeera investigation found that ‘ghost reporters’ were writing pro-Russia propaganda in West and Central Africa, using the identities of deceased people rebranded as investigative journalists. The content was posted on news outlets across the continent, and articles published in at least 12 African countries were linked to what the Al Jazeera team called a coordinated “pro-Russia influence campaign.”

Using new technologies to pollute the information environment or sow doubt about established facts poses a threat to democracies. In fragile democracies like many across Africa, the absence of robust professional media to counter such disinformation is concerning to those who care about information integrity.

While African leaders are embracing the benefits of digital technology and generative AI, for example through the African Union’s Continental Artificial Intelligence Strategy, a sense of urgency in understanding the risks appears lacking.

Another danger is the erosion of traditional professional media, both in terms of funding and staffing. One senior editor told Institute for Security Studies (ISS) researchers that during South Africa’s May 2024 elections, some of the young journalists on her team struggled to understand their role in the democratic process.

Notwithstanding those concerns, a separate ISS study found that professional journalism acted as a bulwark against mis/disinformation campaigns during that election.

There are some efforts to educate the public about the dangers of a distorted information environment. The Konrad Adenauer Foundation recently published a Marvel-style comic book described as ‘a journalist’s quest for justice in the age of AI,’ written and illustrated by a South African team. This, plus initiatives from organisations such as Media Monitoring Africa, are notable exceptions in promoting digital literacy and could inspire future interventions.

While much of the current debate in South Africa has focused on social media platforms and regulating online content – particularly in the context of recent white supremacist narratives – this is unlikely to catch the journalist impersonators using generative AI.

An Artificial Intelligence Act like the one recently introduced in Europe may not, and arguably should not, be replicated across Africa, given what many argue are gaps in data and regulatory capacity, and inequalities in access.

South Africa-based organisations like Research ICT Africa say it will be hard to measure risks and develop resilience measures unless tech companies grant research and monitoring organisations the same access to online data that they provide in the global north.

Hardwiring AI awareness into newsroom training may also be a practical solution to consider on the supply side. On the demand side, public campaigns are needed that frame professional journalism as a public good and highlight the danger of it being undermined by counterfeits – just as one would with fake brands.

Karen Allen, Consultant, Institute for Security Studies (ISS) Pretoria

(This article was first published by ISS Today, a Premium Times syndication partner. We have their permission to republish).

