South Africa AI Regulation: Why a 2027 Policy May Be Too Late
South Africa's AI 2027 policy may be too late. Image:Unsplash


South Africa won’t have a formal AI policy until 2027. Its “middle-of-the-road” approach may leave citizens’ data and rights unprotected.

28-02-26 17:44

The world’s governments seem to have decided the best way to regulate artificial intelligence is to ask it nicely to behave. Optimistic much?

On February 19, 2026, at the India AI Impact Summit in New Delhi, over 250,000 citizens pledged to use artificial intelligence ethically. India claimed a Guinness World Record for the achievement. 

Prime Minister Narendra Modi unveiled a set of five AI governance principles called the MANAV Vision, an acronym drawn from the Sanskrit word for “human.” The summit was organised around seven “Chakras of Action.” Eighty-nine countries signed the Delhi Declaration.

Not one clause of it is enforceable. 

What 250,000 pledges accomplish

This matters because the pledges, the principles, and the declaration represent a specific choice India made about how to handle AI. After the European Union passed its binding AI Act in 2024, India explicitly rejected that model. Delhi opted for what officials called “flexible guardrails over rigid compliance.” 

The United States, under the Trump administration, chose a similar path by revoking Biden-era AI executive orders and favouring voluntary industry commitments.

The result is a growing global consensus, not on how to regulate AI, but on how to avoid regulating it altogether. The vocabulary du jour is moral rather than legal: countries speak of “ethical frameworks,” “values-based approaches,” and “human-centric design.”

At Harvard this semester, students are taking a course called “Mindfulness, AI, and Ethics: Cultivating the Heart of the Algorithm,” which applies Buddhist principles of awareness to artificial intelligence.

At the National Religious Broadcasters convention in the United States this month, Christian scholars called for moral frameworks as AI reshapes human relationships.

These discussions echo recent efforts to integrate faith-based perspectives into the global AI discourse, such as the Rome Call for AI Ethics, which emphasizes accuracy, privacy, and human dignity.

These are all serious people asking serious questions. But a question isn’t regulation. And a pledge is far from a law.

The Pentagon war 

While South Africa is still searching for its best-practices manual, the spirited global conversation around AI ethics appears to have hit a cold, hard realpolitik wall. Indeed, US Secretary of War Pete Hegseth has decided to burn Anthropic’s guide.

In a high-stakes standoff that has rocked the tech world, the Pentagon is laying siege to Anthropic’s Constitutional AI (CAI). Instead of humans constantly checking its work, Anthropic’s Claude uses CAI to police itself. It follows a written set of rules designed to keep AI “helpful, honest, and harmless.”
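The self-policing idea is easier to see in miniature. The toy sketch below is illustrative only, not Anthropic’s implementation: it shows the critique-and-revise loop that Constitutional AI is built around, where a draft answer is checked against a written list of principles and revised when it violates one. The `CONSTITUTION` list, the keyword-based critic, and the function names are all hypothetical stand-ins; a real system asks the model itself to do the critiquing.

```python
# Illustrative sketch of a Constitutional AI-style critique-and-revise loop.
# All names and the keyword "critic" are toy stand-ins, not Anthropic's code.

CONSTITUTION = [
    "be helpful",
    "be honest",
    "be harmless",
]

def violates(draft: str, principle: str) -> bool:
    # Stand-in critic: a real system would prompt the model to critique the
    # draft against the principle. Here we flag a single toy keyword.
    banned = {"be harmless": "weaponize"}
    word = banned.get(principle)
    return bool(word and word in draft.lower())

def constitutional_revision(draft: str) -> str:
    # One pass of self-critique: revise the draft for each violated principle,
    # so no human reviewer is needed in the loop.
    for principle in CONSTITUTION:
        if violates(draft, principle):
            draft = f"[revised to {principle}] " + draft
    return draft

print(constitutional_revision("Here is how to weaponize the model."))
```

The point of the pattern is that the rules live in plain text the developer controls, which is exactly why a government demand for “unfettered access” collides with it.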

Hegseth has publicly dismissed Anthropic’s “red lines” on mass surveillance and autonomous killing as nothing more than “woke AI” and the byproduct of an “Ivy League faculty lounge” mentality. 

To the Department of War, ethical guardrails aren’t safety features. They are “DEI infusions” that prevent America from winning.

Then Hegseth threw his toys

The standoff reached a breaking point on Friday, February 27, 2026, as the Pentagon’s deadline passed without compromise. Anthropic refused to “delete its conscience,” standing firm against Defense Secretary Pete Hegseth’s demands for “unfettered access” to the Claude model for “all lawful purposes.”

CEO Dario Amodei maintained that the company would not waive its prohibitions on domestic mass surveillance or the use of AI in fully autonomous lethal weapons, effectively telling the state that a government check does not buy total control over a developer’s ethical code.

The fallout was immediate, exposing the ultimate fragility of this partnership. In an unprecedented move, Hegseth officially designated Anthropic as a “supply-chain risk” to national security, a label typically reserved for foreign adversaries. It’s a “nuclear option” effectively barring any federal contractor from doing business with the company.

With Anthropic’s ousting, the landscape shifted instantly. OpenAI stepped into the vacuum, announcing a fresh deal with the Pentagon to supply AI for classified military networks. CEO Sam Altman, however, was quick to note that OpenAI’s agreement still includes the same safety guardrails regarding surveillance and autonomous force that sparked the initial fracas.

For nations such as South Africa, the lesson is clear: if a multi-billion-dollar Silicon Valley firm can’t protect its “red lines” against a government mandate, a “middle-of-the-road” policy framework is perhaps a little underpowered for the digital hurricane.

The gaps in South Africa’s AI regulation

South Africa holds a peculiar position in this global conversation. It has neither India’s aspirational framework nor the EU’s binding rules. What it has is a gap.

The government confirmed in February that its national AI policy won’t be finalised until the 2026-2027 financial year. When it arrives, Communications Deputy Director-General Alfred Mmoto told Parliament, it will not be standalone legislation. 

It will be a “sector-specific, risk-based approach” layered onto existing laws. Mmoto said the department shares concerns about the EU model and is pursuing a “middle-of-the-road approach.”

Here is what that middle of the road looks like in practice. The Protection of Personal Information Act (POPIA) is the only binding South African law that touches AI. Section 71 covers automated decision-making. And that is the entire architecture.

So, as of now: no algorithmic audits, no AI ombudsperson, no mandatory impact assessments, and no transparency requirements for AI systems used in hiring, lending, or policing. Cape Town-based corporate law firm Michalsons confirmed in January that the upcoming policy will serve as “the foundation for regulating AI.”

While the National AI Policy Framework proposes mandatory impact assessments as “strategic pillars,” there is currently no legally binding requirement for companies to perform audits or impact assessments. These remain “best practice” recommendations rather than statutory obligations.

South Africa has opted for a “multi-regulator model” rather than a single standalone AI regulator or ombudsperson. This means AI governance will be embedded within existing frameworks like the Information Regulator (for privacy) or the Competition Commission, rather than a dedicated AI oversight body.

Real-world risks of delayed AI regulation in South Africa

This is not a future problem. Automated decision-making systems are operating in South African financial services, telecommunications, and human resources right now. 

When a large language model generates false information about a South African citizen, that person has no dedicated mechanism for recourse. When an algorithm rejects a loan application based on biased training data, POPIA’s Section 71 offers a narrow right to challenge the decision but no requirement that the institution explain how the algorithm reached it.

The global debate about “mindful AI” and “ethical guardrails” acquires a different flavour when you live in a country where the guardrails are hypothetical. 

India’s MANAV Vision is non-binding, but India at least convened 500 AI leaders from 100 countries to articulate what its principles are. South Africa has a draft framework moving through cabinet clusters and a 60-day public comment period expected sometime in March.

Why the moral language spreads

The turn toward ethical vocabulary is not accidental. It reflects a real and growing conundrum: AI systems are developing faster than legislative processes can move, and the EU’s binding approach took years to negotiate. For governments without Brussels’ regulatory capacity, moral language fills the space where law would normally go.

But filling a space and securing it are different things. When accountability becomes a matter of corporate ethics rather than enforceable law, the question of who defines “ethical” becomes the question. Tech companies operating in South Africa are, for now, left to answer that question themselves.

South Africans have lived through versions of this before. The country has experience with institutions that substitute moral ambition for legal mechanism, and with the distance that opens between the two over time. The specifics may differ. The pattern, though, is familiar. Grand language does its best work when it sits on top of binding rules, not in place of them.

What a 60-day comment period is worth

Mmoto’s timeline puts South Africa’s AI policy completion somewhere around early 2027. By then, the algorithms shaping credit decisions, job screenings, and content moderation in South Africa will have operated for too long without dedicated oversight. The public comment period expected in March is a chance for civil society, business, and ordinary citizens to shape what “middle-of-the-road” looks like.

Whether that window produces regulation with teeth or another set of aspirational principles will say a great deal about what South Africa learned from watching the rest of the world choose between law and language.

India set a world record for promises. South Africa’s record must be written in policy.