In the days after the killing of rapper Kiernan Jarryd Forbes, known as AKA, and his friend Tebello “Tibz” Motsoane, the murders kept playing out on social media. Again and again, leaked CCTV footage of the two being gunned down was viewed and shared – some 490,000 times in the version posted by just one Twitter account.
The explosive viral spread of the grainy but dramatic footage shows the limits of mainstream media ethics. Beyond the reach of press and broadcast codes and complaints mechanisms, social media platforms are driven by algorithms that measure and reward success in millions of clicks. This often means boosting the worst and most sensational material. It’s urgently necessary to find ways of ensuring the platforms show greater responsibility.
Mainstream media ethics, as captured in the South African Press Code and the Broadcasting Code, make it clear that footage of this kind can only be used if there is good reason. Violence should not be glorified, the press code says, and the depiction of violent crime should be avoided “unless the public interest dictates otherwise”.
Public curiosity about assassinations is undoubtedly high, but it’s not the same as what the codes understand as public interest. That is defined as
information of legitimate interest or importance to citizens.
The concern about material of this kind is less about the possibility of hampering police work, as some have argued, than about the potential harm: the pain caused to a grieving family and the offence caused to audiences by gratuitous and shocking violence. Where the value of material lies more in offering grisly entertainment than in its news value, publication becomes questionable.
The duty to shock
Editors do sometimes decide that disturbing, graphic images can be used. Examples include photographs of assassinated South African Communist Party leader Chris Hani, of a Mozambican man set alight in xenophobic violence in South Africa in 2008 or the footage of the police killing of George Floyd in the US.
Journalists argue there is sometimes a positive obligation to show unpleasant realities. Kelly McBride, vice-president of the US nonprofit media institute Poynter Institute, says some images may have the “power to galvanise the public”, adding:
it’s irresponsible for a news organisation to shield its audience from hard truths.
However, much depends on context and the handling of the images. Responsible editors will include audience advisories so that viewers can opt to avoid the image. Some effort to provide names and other details can help to humanise the victims, evoking more human empathy than simple ghoulish fascination.
In the case of the AKA and Tibz murders, most South African mainstream publishers seem to have taken the view that the circumstances did not justify the publication of the actual shooting. Most simply reported the existence of the footage.
But no such restraint was shown on social media. Fascinated by the sensational murder of a music star, users shared the footage in their tens and hundreds of thousands.
Clearly, professional codes and mechanisms are powerless against a truly viral phenomenon of this sort. The Press Council and the Broadcasting Complaints Commission of South Africa handle complaints against mainstream media, but they have no authority over the wider public on social media.
There is increasing concern about the spread of harmful content on social media platforms – not just gratuitous violence, but also hate speech, misinformation and much else. Several governments are developing legislation to fight toxic content. But the UN High Commissioner for Human Rights, among others, has voiced concern that the laws may be a pretext to act against dissent.
Peggy Hicks, director of thematic engagement at UN Human Rights, says:
Some governments see this legislation as a way to limit speech they dislike and even silence civil society or other critics.
The social media giants themselves – such as Twitter, Google and Facebook – have long emphasised that they are not publishers but merely offer a platform for sharing, and therefore don’t have to take responsibility. However, they increasingly accept the need for content moderation.
Machines are necessary to cope with the sheer volume of material. But human content moderators have a critical role, as artificial intelligence is not always smart enough to deal with complex contexts and linguistic nuance, as emerged in leaks from inside Facebook. Moderators in their thousands have the unenviable task of sifting through a vast and unending flood of truly terrible material, from decapitations to child pornography.
The United Nations Educational, Scientific and Cultural Organisation (Unesco) is looking into the regulation of social media platforms. A draft set of guidelines emphasises the need for platforms to have policies based on human rights and to be accountable.
Fundamentally, the platforms’ algorithms operate on a logic of rewarding traffic, which needs to be tempered with considerations of the common good. According to Unesco:
The algorithms integral to most social media platforms’ business models often prioritise engagement over safety and human rights.
Gossip sites in sensationalist feeding frenzy
In the example of the AKA video, sensationalist gossip sites also traded on and drove much of the traffic. A Google search for mentions of the video is dominated by obscure sites using poor language, for which the video is simply clickbait. Their business model relies on bulk traffic to earn advertising income, and that in turn relies on the platform giants’ algorithms.
That, perhaps, is the most important lesson of the uncontrollable spread of the AKA video: ways need to be found to write elements of information ethics into the platforms’ algorithms. It is deeply damaging to social cohesion to have machine logic systematically boosting the worst and most disturbing material.
Franz Krüger, Associate researcher, University of the Witwatersrand
This article is republished from The Conversation under a Creative Commons license. Read the original article.