As we move through 2026, the boardroom conversation has shifted from “how do we govern AI” to “how can AI help us govern”. The rise of Agentic AI – systems capable of independent planning, executing multi-step tasks, and making autonomous decisions – has introduced a sophisticated new layer of liability for Australian directors.
While Generative AI acts like a high-speed intern, Agentic AI acts more like a digital executive. The Australian Institute of Company Directors (AICD) has recently released early insights into these trends, emphasising that while AI may enhance board deliberations, it must not undermine confidence in management or blur lines of accountability.
1. Why Agentic AI Changes the Liability Equation
Unlike reactive chatbots, autonomous agents can access sensitive databases and interact with customers without human-in-the-loop triggers. ASIC has explicitly warned that these systems compound risk due to their capability to independently plan and act.
- The Accountability Gap: When an autonomous agent makes a decision resulting in harm, “the AI did it” is not a legal defence. ASIC and AICD reaffirm that judgement, accountability, and responsibility remain with directors.
- s180 Duty of Care: Technology does not dilute fiduciary duty. Directors must not rely on AI-generated summaries as a substitute for their own interrogation of board papers. Passive oversight of an autonomous system may be viewed by regulators as a failure to act with appropriate care and diligence.
2. The Red Flags of “Shadow AI”
The AICD has identified a two-speed dynamic in which individual directors' use of AI is often informal while collective board adoption lags behind.
- Shadow AI Risks: When staff use unsanctioned tools without approval, it creates significant data and compliance risks. Directors could end up governing what they cannot see.
- The Human-in-the-Loop Check: The AICD identifies a major red flag when AI outputs are not verified by a human – for example, AI-generated board minutes adopted without rigorous review.
3. Strategic Must-Haves for 2026 Boards
To mitigate liability, boards should adopt a framework across five domains: Oversight, Wisdom, Strategy, ESG, and Resilience.
The Stewart & Smith Verdict
Governance in the age of Agentic AI is no longer episodic; it must be continuous. The AICD warns that boards failing to grasp these distinctions risk ceding control to systems they do not fully understand.
The Bottom Line: If your Board is delegating decisions to AI, you must ensure your governance framework can withstand the scrutiny of a post-incident audit.
Did you find these insights valuable? Follow Stewart & Smith Advisory for more expert guidance on navigating the complexities of business finance.
