
Te Kete o Karaitiana Taiuru (Blog)

Gemini-generated image. Prompt: "Create an image of artificial intelligence ethics with different cultured people around a table."

Governing in the Age of Artificial Intelligence

Artificial intelligence is no longer an emerging technology on the horizon; it is here, transforming industries, reshaping workforces, and fundamentally altering how organisations create value. For New Zealand boards, AI represents both a profound opportunity and a significant governance challenge. Directors who treat AI as purely a technical matter, something to be delegated to the IT department or the CTO, risk abdicating one of their most important contemporary responsibilities.

This essay sets out the key issues that New Zealand boards must actively consider as AI becomes embedded in organisational strategy and operations. The issues span strategy, risk, ethics, legal compliance, people, and the integrity of board decision-making itself. While some of these considerations are universal, others have a distinctly New Zealand character, shaped by local regulation, the Treaty of Waitangi, the scale of our businesses, and our position in the global economy.

 

Strategic Oversight and Competitive Positioning

The most immediate question for any board is whether the organisation’s AI strategy is coherent, ambitious, and properly resourced. AI is not a one-off technology investment; it is a capability that requires sustained commitment. Boards must satisfy themselves that management has a clear view of where AI can create competitive advantage, which processes or decisions should be augmented or automated, and what the organisation’s AI maturity roadmap looks like.

For many New Zealand businesses, the strategic question is not whether to adopt AI but how fast to move. Early movers in sectors such as agritech, financial services, healthcare, and professional services are already deploying AI to improve productivity and customer outcomes. Boards that are slow to engage risk presiding over organisations that fall behind both domestic and international competitors.

At the same time, boards must guard against AI investment driven by hype rather than value. Directors should ask hard questions: What problem does this AI solution solve? How will success be measured? What are the switching costs if this technology does not perform as expected? Strategic oversight means neither reflexively embracing nor dismissing AI, but interrogating it with the same rigour applied to any major capital allocation decision.

 

Risk Governance

Identifying AI-Specific Risks

AI introduces categories of risk that boards may not have historically encountered. These include model risk (the possibility that an AI system produces incorrect, biased, or harmful outputs at scale) as well as risks associated with data quality, vendor dependency, and system opacity. Unlike traditional software, AI models can behave unpredictably when they encounter data patterns outside their training distribution, making it essential that boards understand how AI systems are monitored and when human oversight is triggered.
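
As an illustration of what such monitoring can look like in practice, the sketch below computes a population stability index (PSI), a simple measure of drift between live model inputs and the training distribution. This is a minimal, hypothetical example: the function names, sample scores, and the 0.25 escalation threshold are illustrative assumptions, not any particular organisation's framework.

```python
import math

def population_stability_index(expected, observed, bins=10):
    """Compare the distribution of live inputs against the training
    data, bin by bin. Higher values mean greater drift."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids division by zero for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    p, q = proportions(expected), proportions(observed)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical scores from training time versus production.
training_scores = [0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8]
live_scores = [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.9, 0.95, 0.95]
drift = population_stability_index(training_scores, live_scores)

# A common rule of thumb treats PSI above 0.25 as significant drift
# that should trigger escalation to human review.
if drift > 0.25:
    print(f"Drift detected (PSI={drift:.2f}); escalate for human review")
```

The governance point is not the arithmetic but the trigger: a board can reasonably ask management to show that thresholds like this exist, are documented, and route to a named human owner when breached.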

Cybersecurity risk also takes on new dimensions in an AI context. Adversarial attacks, in which bad actors subtly manipulate inputs to cause AI systems to behave incorrectly, represent an emerging threat. Boards should ensure that AI systems handling sensitive decisions are subject to robust security testing and that the organisation’s cyber risk framework has been updated to reflect AI-related exposures.

Third-Party and Supply Chain AI Risk

Many New Zealand organisations will use AI through third-party vendors rather than building models in-house. This does not eliminate governance responsibility. Boards must ensure that management has conducted thorough due diligence on AI vendors, including understanding how vendor models are trained, what data is used, and where data is stored and processed. Given that many leading AI providers are headquartered in the United States or China, data sovereignty and cross-border data transfer risks deserve particular attention.

Concentration and Systemic Risk

There is also an emerging systemic risk worth monitoring: as New Zealand organisations increasingly rely on a small number of large AI platforms, concentration risk builds across the economy. Boards of larger organisations and those in critical infrastructure should consider what happens if a major AI provider experiences an outage, a significant model failure, or a regulatory shutdown.

 

Ethics, Fairness, and Algorithmic Bias

Perhaps no aspect of AI governance requires more careful board attention than ethics. AI systems can perpetuate and amplify existing societal biases if they are trained on historical data that reflects discrimination. In the New Zealand context, this risk has particular salience given the enduring inequities experienced by Māori and Pacific peoples across health, education, employment, and the justice system. An AI system trained on historical hiring, lending, or healthcare data may systematically disadvantage these populations if bias is not actively identified and mitigated.

Boards should require management to articulate an AI ethics framework that goes beyond compliance. This includes commissioning independent bias audits of AI systems that affect customers or employees, establishing clear escalation pathways for ethics concerns, and ensuring that affected communities, including Māori, have meaningful input into AI decisions that affect them. Organisations that adopt AI without this lens risk causing real harm to vulnerable people and exposing themselves to significant reputational and legal consequences.
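
To make the idea of a bias audit concrete, here is a minimal sketch of one common check: the "four-fifths rule" applied to selection rates across demographic groups. The data, group labels, and helper names are hypothetical assumptions for illustration; a real independent audit is far more extensive than any single metric.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from a model's decisions."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    A ratio below 0.8 (the 'four-fifths rule') is a common red flag."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical audit data: (group, whether the model approved them).
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)
ratios = disparate_impact_ratio(decisions, reference_group="A")
print(ratios)  # group B's ratio of ~0.67 falls below 0.8: flag for review
```

A check like this is cheap to run routinely; the harder governance work is deciding in advance what the organisation will do when a ratio falls below the threshold.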

The question of explainability is closely related. Where AI is used to make or inform consequential decisions, such as credit approvals, insurance assessments, hiring, or diagnostic recommendations, individuals affected by those decisions may reasonably expect to understand the basis on which they were made. New Zealand’s existing obligations under the Privacy Act 2020 and the Human Rights Act 1993 provide some framework, but AI governance often requires boards to go further and ask whether explainability is achievable and whether the organisation is prepared to stand behind algorithmic decisions.

 

Te Tiriti o Waitangi and Māori Data Sovereignty

New Zealand boards operate in a distinctive constitutional context. Te Tiriti o Waitangi creates obligations that extend into data and technology governance in ways that many boards have not yet fully grappled with. The concept of Māori data sovereignty, the right of Māori to govern the collection, ownership, and application of data about Māori communities, is increasingly recognised in policy and law.

Boards should consider whether AI systems that use or generate data relating to Māori communities do so in a manner consistent with Treaty principles of partnership, participation, and protection. This is not merely a compliance question: it is a question of whether the organisation’s use of AI honours its obligations to its Māori stakeholders and the communities in which it operates. Organisations in sectors such as health, education, and government services face particularly significant obligations in this area.

Some iwi and Māori organisations are developing their own AI capabilities and governance frameworks. Boards of mainstream organisations should be open to genuine partnership in this space rather than assuming that existing models of AI development and deployment are appropriate for Māori contexts.

 

Legal and Regulatory Compliance

New Zealand’s regulatory framework for AI is evolving rapidly. The Privacy Act 2020 already imposes significant obligations on how organisations collect, use, and protect personal information, and AI systems that process personal data must comply with these requirements. The Act’s application to automated decision-making, though not as prescriptive as the European Union’s General Data Protection Regulation, is an area boards should monitor as case law and regulatory guidance develop.

Internationally, the regulatory environment is moving quickly. The EU AI Act, which came into force in 2024, imposes stringent requirements on high-risk AI systems and will have extraterritorial effect on New Zealand organisations that operate in or sell to European markets. Directors of organisations with offshore operations or customers should ensure that management is tracking international regulatory developments and building compliance capabilities ahead of enforcement deadlines.

Consumer law is another area of exposure. The Fair Trading Act 1986 prohibits misleading conduct, and AI-generated content that creates false impressions, whether in marketing, product descriptions, or customer service interactions, could expose organisations to liability. Boards should ensure that policies governing AI-generated content are in place and that there is accountability for reviewing outputs before they reach customers or the public.

Intellectual property questions raised by generative AI are also unresolved in many jurisdictions, including New Zealand. When AI systems are trained on copyrighted material and generate outputs that resemble or reproduce that material, significant IP risk can arise. Boards should seek legal advice on the organisation’s exposure and ensure that AI procurement and usage policies address IP ownership and indemnification.

 

People, Workforce, and Culture

AI will change the nature of work in virtually every organisation. For boards, the workforce implications of AI are both a risk and a responsibility. On the risk side, failure to invest in workforce capability can leave organisations unable to realise the value of AI investments. On the responsibility side, boards have obligations to employees whose roles may be disrupted or eliminated by automation.

Boards should be engaging with questions such as: What is the organisation’s position on using AI to reduce headcount, and how does this align with its values and obligations to employees? What investment is being made in retraining and upskilling, particularly for workers in roles most vulnerable to automation? How will the organisation manage the transition in a way that is fair and transparent?

Culture is equally important. For AI to be adopted effectively, employees must trust it. That trust is built through transparent communication about how AI is being used, what decisions it informs, and what safeguards are in place. Boards should ask management how employee feedback on AI is being gathered and acted upon, and whether there is a safe channel for raising concerns about AI systems.

In New Zealand, where many organisations are relatively small and employment relationships are often more personal than in larger economies, these people considerations carry particular weight. Boards that are seen to deploy AI cynically to extract short-term cost savings at the expense of workers risk lasting damage to their social licence and their ability to attract talent.

 

Board Capability and Governance Processes

Boards do not need every member to be an AI expert. But they do need sufficient collective literacy to exercise genuine oversight rather than simply ratifying management decisions. This may require investment in board education and, in some cases, adding directors with relevant AI or technology backgrounds. It may also mean engaging external advisers, AI governance specialists, ethicists, or technical auditors to supplement board expertise.

Governance processes also need to adapt. Board reporting frameworks should be updated to include AI-specific metrics: the performance of material AI systems, incidents and near-misses, bias audit findings, regulatory developments, and the progress of AI-related projects against milestones and budgets. Risk committees should include AI risk on their standing agendas. And boards should establish clear delegations of authority for AI investment and deployment decisions, distinguishing between decisions that can appropriately be made by management and those that warrant board-level approval.

There is also a question about AI’s role in board decision-making itself. Some organisations are beginning to use AI tools to synthesise board papers, model strategic scenarios, or identify patterns in financial data. Boards should approach this with both openness and caution, ensuring that AI tools used in the boardroom are subject to the same governance principles applied to AI deployed elsewhere in the organisation.

 

Sustainability, Social Licence, and Reputational Considerations

AI has significant environmental implications that boards are increasingly expected to address. Training large AI models consumes enormous quantities of energy and water, and the growing use of AI inference at scale has a substantial carbon footprint. New Zealand organisations that have made public commitments to sustainability should consider how AI usage aligns with those commitments, and whether they are factoring AI’s environmental costs into their sustainability reporting.

Social licence is a broader consideration. New Zealanders have high expectations of the organisations they trust with their data and their livelihoods. Organisations that use AI in ways that feel opaque, exploitative, or unfair risk a swift and serious public backlash. Boards should ensure that the organisation’s external communications about AI are honest, that its AI commitments are backed by genuine governance, and that there are credible mechanisms for external accountability.

 

Conclusion

AI governance is one of the defining challenges of contemporary board service. For New Zealand directors, it requires integrating an understanding of rapidly evolving technology with the enduring responsibilities of stewardship, ethics, and accountability that sit at the heart of good governance.

The issues outlined in this essay (strategic oversight, risk governance, ethics and bias, Te Tiriti obligations, legal compliance, people, board capability, and social licence) do not form a checklist to be ticked and forgotten. They require ongoing engagement, honest self-assessment, and a willingness to ask hard questions of management and of the board itself.

The organisations that navigate AI well will be those whose boards engaged early, took their responsibilities seriously, and ensured that the pursuit of AI-driven value was matched by an equivalent commitment to doing so responsibly. The stakes for their organisations, their stakeholders, and New Zealand society more broadly could scarcely be higher.

Disclaimer: This essay is intended for governance and educational purposes. It does not constitute legal advice.

DISCLAIMER: This post is the personal opinion of Dr Karaitiana Taiuru and is not reflective of the opinions of any organisation that Dr Karaitiana Taiuru is a member of or associates with, unless explicitly stated otherwise.
