Draft national AI policy: What it means and what to do now
By Industry Contributor 21 April 2026 | Categories: news
By Peter Grealy, Leanne Mostert, Wendy Tembedza, Karl Blom & Aalia Manie, Partners at Webber Wentzel
The publication of South Africa's Draft National AI Policy (Draft Policy) marks a turning point for organisations that develop, deploy or rely on artificial intelligence (AI). Beyond signalling the emergence of formal AI regulation, the Draft Policy introduces new expectations around governance, ethics, accountability and sector-specific oversight.
The Draft Policy, published by the Minister of Communications and Digital Technologies (DCDT), states its underlying policy imperative as the development of a comprehensive, inclusive and ethically grounded national policy that ensures responsible innovation, protects the public interest and advances socio-economic transformation.
The public comment process is open until 10 June 2026 and presents a critical opportunity for businesses to influence how AI regulation will apply in practice. Organisations that engage early can help ensure that the final policy reflects real-world use cases, operational constraints and innovation imperatives.
The six strategic pillars: the framework against which to align your AI governance
The Draft Policy identifies six strategic pillars that are central to the development of the final policy, and which will guide sectoral approaches to AI regulation:
- Responsible governance - ensuring the safety and security of data assets, maintaining privacy and data protection, and fostering responsible AI development and use.
- Ethical and inclusive AI - fostering the development of ethical AI guidelines and ensuring equitable AI deployment.
- Capacity and talent development - ensuring that South Africa has a robust AI talent pool, advancing technological capabilities and driving innovation.
- AI for inclusive growth and job creation - advancing technological capabilities and enabling startups and MSMEs to leverage AI technologies effectively.
- Human-centred deployment - maintaining human oversight over AI, building public trust in AI and enhancing government efficiency through the use of AI.
- Cultural preservation and international integration - ensuring alignment between AI development and societal values and positioning South Africa as a regional AI leader.
Why sector-specific stakeholder input matters
The Draft Policy recognises the complexity of regulating a technology with a wide range of use cases and adopts a sector-specific approach to AI regulation. Although a unifying "AI Act" is referenced in the Draft Policy, it appears unlikely to emerge in the near future. Rather, the Draft Policy:
- provides general guidance through core pillars, including fairness, transparency, accountability, inclusivity, confidentiality, human autonomy and reliability;
- acknowledges the need for involvement from industry and subject-matter experts with an understanding of sector-specific dynamics; and
- recognises that stakeholder input across sectors, including financial services, manufacturing, healthcare, energy, infrastructure, transport, and trade and ICT, will be integral to ensuring that the final policy reflects sectoral nuances. Organisations operating in these sectors should begin identifying the AI use cases most relevant to their operations and formulating sector-specific positions for inclusion in their submissions.
The Draft Policy's sector-specific approach means that regulatory expectations are likely to be similar, but not uniform. Organisations in financial services, healthcare, energy, manufacturing, infrastructure and ICT should expect differentiated compliance obligations reflecting the risk profile and societal impact of AI use within their sectors. To some extent, this represents an evolution of existing regulatory frameworks. Organisations should begin mapping their current AI deployments against the provisions contemplated in the Draft Policy to identify where heightened obligations are likely to apply.
Multi-regulatory oversight: Preparing for a new compliance landscape
The Draft Policy envisages that the role of existing regulatory bodies will evolve to facilitate a cross-sectoral oversight model. This model will bring together ICASA (digital infrastructure and broadcasting), the Information Regulator (data privacy), the Competition Commission (digital market fairness) and financial regulators (including SARB, FSCA, CSIR and DTIC in relation to AI in fintech) to coordinate oversight, standards and ethical guidance.
The nature of this oversight is intentionally left open-ended to enable stakeholders with sector knowledge to provide input on scope and implementation.
The Draft Policy also contemplates the creation of new regulatory bodies:
- a national AI commission (national AI office), intended to coordinate policy refinement and implementation with input from government, industry and civil society;
- an AI ethics board, intended to enforce ethical governance relating to bias, privacy and fairness, incorporating inputs from company-based ethics boards in the ongoing review of guidelines;
- an AI regulatory authority, responsible for monitoring compliance, conducting audits, issuing certifications and undertaking human rights and gender impact assessments;
- an AI ombudsperson office, enabling individuals to challenge AI-driven decisions and seek redress;
- an AI insurance superfund, modelled on the Road Accident Fund, to compensate individuals or entities harmed by AI-driven outcomes where liability is difficult to determine; and
- a national AI safety institute, working with international bodies to advance AI safety and to develop and disseminate AI safety guidelines as part of a risk-mitigation approach to regulation.
For regulated entities, the introduction of new oversight bodies, together with increased coordination between existing regulators, signals a future in which AI oversight cuts across traditional regulatory regimes. This increases the importance of consistent governance, documentation and internal controls. Organisations should prioritise establishing a consolidated AI governance framework and appointing a senior official accountable for AI compliance, rather than maintaining siloed compliance processes for each regulator.
Stakeholders are encouraged to provide input on how regulatory oversight should be structured to support both innovation and efficiency. Submissions that propose concrete, sector-specific definitions of high-risk AI use cases, identify gaps in the proposed institutional framework or offer practical compliance models are likely to carry greater weight. Generic objections without workable alternatives are less likely to influence the final policy.
Phased approach
The Draft Policy contemplates a phased implementation approach:
- Year 1 (2025/26) - finalisation of the policy, identification and publication of key draft regulatory requirements addressing unacceptable risks, and initiation of national AI policy guidelines.
- Year 2 (2026/27) - publication of guidelines, implementation of requirements for high-risk use cases, identification of requirements for medium- and low-risk use cases, and development of sectoral AI strategies.
- Year 3 (2027/28) - implementation of outstanding regulatory instruments and policy interventions, considering emerging AI trends.
While implementation is phased, organisations should not treat Year 1 as purely administrative. The finalisation of the policy and the guidance already contained in the Draft Policy will shape future compliance expectations, procurement decisions and AI investment strategies.
Organisations should use this period to assess AI risk exposure across all business units; evaluate governance frameworks against Draft Policy principles; review contracts with foreign AI providers to ensure accountability and transparency obligations can be met; engage with industry bodies and sector regulators to coordinate submissions; and prepare for the evolving regulatory landscape.
Webber Wentzel and its applied AI subsidiary, Fusion, support clients in preparing public submissions, assessing AI risk exposure, designing governance frameworks, aligning with evolving regulatory expectations and embedding responsible AI practices across the organisation.