
Te Kete o Karaitiana Taiuru (Blog)

Responsible AI in New Zealand

New Zealand has developed a comprehensive suite of AI governance instruments, including the Algorithm Charter (Stats NZ, 2020), Privacy Act 2020, Digital Identity Services Trust Framework Act 2023, Biometric Processing Privacy Code 2025 (Office of the Privacy Commissioner, 2025), and responsible AI guidance for both public and private sectors (Ministry of Business, Innovation and Employment, 2025a, 2025b; New Zealand Government Chief Digital Office, 2025). Despite this regulatory density, Māori (the Indigenous Peoples of New Zealand) continue to face systemic harms from AI systems that formally comply with existing frameworks whilst shifting power away from Māori communities, expanding surveillance infrastructure, and embedding disproportionate error burdens into essential services. This paper argues that the fundamental gap is not additional principles or voluntary guidance, but rather enforceable Māori governance across the entire AI lifecycle. Drawing on Treaty of Waitangi jurisprudence, particularly the Waitangi Tribunal’s landmark Wai 262 (Waitangi Tribunal, 2011) and Wai 2522 (Waitangi Tribunal, 2020, 2021) inquiries, the paper demonstrates that Treaty consistent AI governance requires substantive decision rights, independent auditability, and redress mechanisms that function in practice, not merely transparency statements. I propose five testable requirements for Treaty consistent AI and argue that technical governance frameworks that ignore Māori constitutional relationships constitute incomplete and ultimately harmful governance.

Keywords: Indigenous governance, Treaty of Waitangi, responsible AI, algorithmic accountability, digital sovereignty, biometric systems, Māori data sovereignty.

I. Introduction

The global discourse on responsible artificial intelligence has increasingly emphasised principles based frameworks, transparency mechanisms, and voluntary governance instruments. New Zealand exemplifies this trend, having developed what appears to be a comprehensive AI governance stack including the Algorithm Charter for Aotearoa New Zealand (Stats NZ, 2020), the Privacy Act 2020, the Digital Identity Services Trust Framework Act 2023, the Biometric Processing Privacy Code 2025 (Office of the Privacy Commissioner, 2025), responsible AI guidance for public sector generative AI adoption (New Zealand Government Chief Digital Office, 2025), a national AI strategy (Ministry of Business, Innovation and Employment, 2025a), and business focused responsible AI guidance (Ministry of Business, Innovation and Employment, 2025b).

However, this regulatory density has not prevented systematic harms to Māori, the Indigenous Peoples of New Zealand. AI systems continue to exhibit a predictable pattern: formal compliance with existing frameworks whilst simultaneously shifting power away from Māori communities, expanding identity linked surveillance infrastructure, implementing risk scoring systems that disproportionately affect Māori, and embedding unequal error burdens into essential public services. This gap between regulatory appearance and lived reality reveals a fundamental flaw in contemporary AI governance approaches.

This paper argues that the core problem is not insufficient principles or inadequate voluntary guidance, but rather the absence of enforceable Māori governance across the AI lifecycle. Treaty consistent AI governance, grounded in Te Tiriti o Waitangi (the Treaty of Waitangi), New Zealand’s foundational constitutional agreement (Treaty of Waitangi Act 1975), requires substantive decision rights, independent auditability with culturally relevant evaluation metrics, and redress mechanisms that function effectively in practice rather than merely on paper.

This paper makes three primary contributions. First, I contextualise New Zealand’s AI governance landscape within the constitutional framework of Te Tiriti o Waitangi, explaining its significance for international audiences and demonstrating why Māori governance is not merely an add-on consideration but a constitutional requirement. Second, I analyse the structural limitations of voluntary and principles based AI governance frameworks when applied to contexts involving Māori rights, drawing on key Waitangi Tribunal inquiries (Waitangi Tribunal, 2011, 2020, 2021). Third, I propose five concrete, testable requirements for Treaty consistent AI governance that move beyond transparency to substantive power sharing.

II. Constitutional Context: Te Tiriti o Waitangi and Māori Governance Rights

1 Te Tiriti o Waitangi (The Treaty of Waitangi)

For international readers unfamiliar with New Zealand’s constitutional framework, Te Tiriti o Waitangi (the Māori language text) and the Treaty of Waitangi (the English language text) were signed in 1840 and are widely regarded as New Zealand’s foundational constitutional agreement between Māori chiefs and the British Crown. Both texts are reproduced in Schedule 1 of the Treaty of Waitangi Act 1975, and significant textual differences exist between the Māori and English versions, particularly regarding the concepts of sovereignty, governance, and authority (Waitangi Tribunal, n.d.).

These textual differences are not merely historical curiosities; they have profound implications for contemporary governance, including AI systems. The Māori text of Te Tiriti guarantees Māori chiefs tino rangatiratanga (absolute chieftainship/authority) over their lands, villages, and treasures (taonga), whilst the English version refers to “full exclusive and undisturbed possession” of properties. Modern jurisprudence has increasingly recognised that taonga encompasses not only physical property but also cultural knowledge, language, identity, and increasingly, data about Māori communities (Waitangi Tribunal, 2011).

2 The Waitangi Tribunal and Wai Inquiries

The Waitangi Tribunal is a standing commission of inquiry established by the Treaty of Waitangi Act 1975. It investigates claims that the Crown has breached the Treaty or Te Tiriti. Each claim or inquiry is assigned a Wai number, a catalogue identifier for tracking purposes. Two Wai inquiries are particularly salient for contemporary AI governance:

Wai 262 (Ko Aotearoa Tēnei): Released in 2011 after a 20 year inquiry, this landmark whole of government report examined Crown law and policy affecting Māori culture and identity (Waitangi Tribunal, 2011). The Tribunal found that existing policy frameworks systematically eroded Māori authority over cultural heritage and knowledge and called for a partnership based Treaty relationship characterised by shared decision making rather than mere consultation. The report’s findings extend naturally to contemporary data and AI governance, where automated systems increasingly mediate access to services, shape identity verification processes, and make consequential decisions about Māori individuals and communities.

Wai 2522: This multi stage inquiry examines the Trans Pacific Partnership Agreement (TPPA) and Comprehensive and Progressive Agreement for Trans Pacific Partnership (CPTPP) trade agreements (Waitangi Tribunal, 2020, 2021). Later stages specifically addressed data sovereignty and Crown obligations under Te Tiriti in the context of international trade rules that constrain domestic policy space. The inquiry demonstrates that when major infrastructure decisions, including data governance frameworks, are made without enforceable Treaty consistent processes, Māori are relegated to reactive redress rather than exercising shared authority from the outset.

Together, these inquiries establish that Treaty consistency requires more than consultation or cultural sensitivity. It requires shared authority, decision rights, and the ability to say “no” to systems that threaten Māori interests.

III. New Zealand’s AI Governance Landscape

1 The Algorithm Charter: Voluntary Transparency

The Algorithm Charter for Aotearoa New Zealand, launched in 2020, represents New Zealand’s flagship AI governance initiative (Stats NZ, 2020). Signatory agencies commit to transparency, preventing unintended bias, safeguarding privacy and human rights, and reflecting Treaty of Waitangi principles. The Charter embodies many international best practices for responsible AI, emphasising accountability, transparency, and human oversight.

However, the Charter is fundamentally voluntary. Agencies may choose whether to sign, and compliance mechanisms rely primarily on self reporting and peer accountability rather than external enforcement. This voluntary nature creates a predictable failure mode: under procurement pressure, budget constraints, and the rapid evolution of vendor ecosystems, commitments fracture. Where AI systems are deployed for eligibility triage, enforcement prioritisation, fraud analytics, or customer risk flagging, affected individuals often cannot realistically opt out of the system, yet the systems themselves remain subject only to voluntary governance commitments.

This asymmetry, mandatory subjection to algorithmic decision making paired with voluntary governance obligations, is particularly harmful for Māori communities. When consultation occurs, it typically happens after system design, treating Māori as stakeholders to be informed rather than Treaty partners with decision rights. The result is visibility without authority: Māori communities receive explanations of how systems work but lack power to determine whether and how automated systems should be used in domains that function as exercises of public power.

2 Privacy Act 2020 and Biometric Processing Privacy Code 2025

The Privacy Act 2020 provides essential baseline protections for personal information across both public and private sectors in New Zealand. It establishes information privacy principles governing collection, use, disclosure, access, and correction of personal information. The Act applies to automated decision making systems and requires that individuals affected by automated decisions be given information about the decision and have meaningful opportunities to challenge it.

Building on this foundation, the Biometric Processing Privacy Code 2025 (Office of the Privacy Commissioner, 2025) represents a significant regulatory development. Issued in July 2025, it came into force on 3 November 2025, with a transition window for existing biometric processing extending until 3 August 2026. The Code strengthens protections around biometric data collection, storage, use, and disclosure, imposing heightened requirements given the sensitive nature of biometric identifiers.

Whilst these instruments strengthen individual privacy protections, they do not address the Treaty questions most salient for Māori communities: When is biometric classification acceptable at all? Who decides which uses are permissible versus prohibited? What remedies exist when systems fail Māori disproportionately? How are collective and structural harms addressed?

Many harms experienced by Māori are inherently collective rather than individual: group profiling based on population level patterns, stigma from risk labels applied to communities, culturally unsafe reuse of identity linked data across government systems, and the compounding effects of algorithmic decisions across multiple life domains. These collective harms can persist even when individual level privacy principles are technically satisfied.

3 Digital Identity Infrastructure as AI Gateway

The Digital Identity Services Trust Framework Act 2023 establishes the legal framework and governance structure for “secure and trusted” digital identity services in New Zealand. Whilst presented primarily as infrastructure for secure online transactions, digital identity systems function as de facto gateways for AI deployment: identity proofing, liveness checks, biometric verification, and fraud or risk analytics are increasingly automated and vendor mediated.

Digital identity infrastructure enables the seamless integration of AI systems into essential services: accessing healthcare, proving eligibility for social services, opening bank accounts, applying for employment. Once this infrastructure is established, AI systems can be layered onto it with minimal additional friction. Section 53 of the Digital Identity Services Trust Framework Act 2023 legislates a Māori advisory group, a recognition that if Māori governance rights are not embedded in the foundational infrastructure, subsequent AI deployments will inherit those governance deficits.

4 Accelerated Adoption Through Strategy and Guidance

The New Zealand government’s Responsible AI Guidance for the Public Service: GenAI (New Zealand Government Chief Digital Office, 2025) explicitly supports public sector agencies in adopting generative AI systems. Simultaneously, the Ministry of Business, Innovation and Employment’s (MBIE) New Zealand’s Strategy for Artificial Intelligence: Investing with confidence (Ministry of Business, Innovation and Employment, 2025a) and accompanying Responsible AI Guidance for Businesses (Ministry of Business, Innovation and Employment, 2025b) are explicitly adoption forward, encouraging private sector uptake alongside risk management considerations.

This dual approach, public sector enablement paired with private sector acceleration, means AI will diffuse most rapidly into precisely the domains where Māori individuals and communities face disproportionate risks: identity verification for access to services, hiring and employment screening, lending and credit decisions, insurance underwriting, welfare eligibility determination, and compliance monitoring systems.

The risk management frameworks accompanying these adoption strategies remain largely principles based and voluntary. If adoption accelerates faster than the hardening of enforceable Māori governance mechanisms, “responsible AI” becomes merely a label applied to systems that produce predictable, systematic harm.

IV. Why Voluntary Governance Fails Māori

1 The Consultation Theatre Problem

Contemporary AI governance frameworks emphasise stakeholder engagement and consultation processes. These are valuable when stakeholders have roughly equal power and when participation in governance processes is genuinely voluntary. Neither condition holds for Māori communities in relation to government deployed AI systems.

First, power asymmetries are structural and historical. The Crown controls procurement decisions, sets evaluation criteria, determines deployment schedules, and establishes redress pathways. Māori communities are invited to provide input, but consultation does not bind decision makers. The Wai 262 inquiry (Waitangi Tribunal, 2011) documented how this pattern of consultation without authority systematically eroded Māori control over cultural heritage for decades.

Second, participation is not voluntary when the systems in question mediate access to essential services. If an AI system determines welfare eligibility, flags individuals for investigation, or controls identity verification for accessing government services, affected individuals cannot meaningfully opt out. Consultation frameworks that assume voluntary participation fundamentally mischaracterise the power dynamics at play.

2 Collective Harms and Individual Rights Frameworks

Privacy law and algorithmic accountability frameworks have developed primarily around individual rights: the right to access one’s data, to correct errors, to receive explanations of automated decisions, and to seek redress for individual harm. These are important protections, but they are insufficient for addressing Māori governance concerns.

Māori harms from AI systems are often inherently collective. When an algorithm learns patterns from historical data reflecting systemic discrimination, it encodes group level disadvantage. When biometric classification systems are trained on datasets lacking Māori representation, error rates differ systematically by ethnicity. When risk scoring systems flag Māori neighbourhoods for enhanced scrutiny, the stigma and burden fall on communities, not isolated individuals.

Individual rights frameworks provide no mechanism for communities to collectively refuse a system, to set binding constraints on how data about their communities may be used, or to receive collective redress when systematic bias causes widespread harm. The gap between individual rights frameworks and collective harm realities leaves Māori structurally unprotected.

3 The Speed Asymmetry

AI systems deploy faster than governance institutions can respond. Vendors release new capabilities, agencies adopt them for immediate operational gains, and affected communities discover harms only after systems are operational and embedded in practice. By the time governance processes catch up, if they do, systems are entrenched, alternatives are expensive, and political will to change course has dissipated.

This speed asymmetry is particularly harmful when paired with voluntary governance commitments. Agencies under pressure to demonstrate innovation and efficiency face weak incentives to slow adoption for governance consultations that might constrain their options. The result is a predictable cycle: rapid deployment, emergent harms, belated consultation, minimal changes, and the harms persist in slightly modified form.

V. Requirements for Treaty Consistent AI Governance

Moving from voluntary principles to enforceable Treaty consistent AI governance requires concrete, testable requirements. I propose five foundational obligations for AI systems that affect Māori communities, particularly in high impact public services.

1 Māori Decision Rights Across the AI Lifecycle

Treaty consistency requires shared authority, not consultation. For AI systems affecting Māori communities or deployed in contexts implicating Treaty interests, Māori must have decision rights at each stage of the lifecycle:

  • Problem definition: Does this problem require an AI solution? Are there alternative approaches that pose fewer risks to Māori autonomy and data sovereignty?
  • Data access and use: Which data sources are permissible? How should Māori data be governed? What uses are categorically prohibited?
  • Evaluation metrics: What constitutes acceptable performance? Which disparities are tolerable versus unacceptable? How should trade offs be resolved?
  • Deployment thresholds: Under what conditions may the system go live? What evidence is required? Who determines sufficiency?
  • Shutdown triggers: What patterns of harm or error rates justify immediate suspension? Who has authority to order shutdown?

These decision rights must be binding, not advisory. Māori governance bodies must have the authority to block deployment, require redesign, or mandate shutdown, not merely the opportunity to voice concerns that decision makers may disregard.

2 Independent Evaluation with Māori Relevant Subgroup Reporting

Standard AI system evaluations typically report aggregate performance metrics: overall accuracy, precision, recall, or calibration statistics. These aggregate measures can mask systematic differences in performance across subgroups. An algorithm may achieve 95% accuracy overall whilst performing at only 80% accuracy for Māori, with errors concentrated in particular contexts or decisions.
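
To make this masking effect concrete, the following minimal sketch uses hypothetical numbers (the 17% population share is an assumption, roughly reflecting the Māori share of New Zealand’s population) to show how a 95% aggregate figure can coexist with a roughly tenfold disparity in error burden:

```python
# Hypothetical illustration: aggregate accuracy masking subgroup disparity.
# All numbers are assumptions for demonstration, not measurements.

maori_share, maori_acc = 0.17, 0.80    # 80% accuracy for Māori (assumed)
other_share, other_acc = 0.83, 0.981   # 98.1% accuracy for everyone else (assumed)

overall_acc = maori_share * maori_acc + other_share * other_acc
print(f"Overall accuracy: {overall_acc:.1%}")        # ~95.0%
print(f"Māori error rate: {1 - maori_acc:.1%}")      # 20.0%
print(f"Non-Māori error rate: {1 - other_acc:.1%}")  # 1.9%
# The headline figure reads as "95% accurate" whilst Māori carry
# roughly ten times the error burden of everyone else.
```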

Treaty consistent evaluation requires:

  • Māori specific performance reporting: Error rates, false positive rates, false negative rates, and other relevant metrics calculated specifically for Māori populations.
  • Real deployment context evaluation: Testing must occur in the actual operational environment with real data and real consequences, not merely on historical benchmark datasets.
  • Independent evaluators: Evaluation must be conducted by parties independent of both the deploying agency and the vendor, with meaningful accountability to Māori governance structures.
  • Longitudinal monitoring: Evaluation cannot be a one time gate before deployment. Systems must be continuously monitored for performance degradation, model drift, or emergent disparities.
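
A minimal sketch of what such longitudinal monitoring could look like in code, assuming a rolling window of recent decisions and an error rate threshold agreed with Māori governance bodies (the window size, threshold, and alert pathway are all illustrative assumptions, not an existing system):

```python
from collections import deque

WINDOW = 1000          # decisions per rolling window (assumed)
MAX_ERROR_RATE = 0.05  # suspension-review trigger (assumed)

# One rolling window of outcomes per subgroup; True marks an erroneous decision.
windows = {"māori": deque(maxlen=WINDOW), "non_māori": deque(maxlen=WINDOW)}

def record_outcome(group: str, was_error: bool) -> None:
    """Log one decision outcome and flag sustained subgroup disparity."""
    windows[group].append(was_error)
    window = windows[group]
    if group == "māori" and len(window) == WINDOW:
        rate = sum(window) / WINDOW
        if rate > MAX_ERROR_RATE:
            # In a Treaty consistent deployment, this alert would route to the
            # designated Māori oversight body with authority to suspend use.
            print(f"ALERT: Māori error rate {rate:.1%} exceeds {MAX_ERROR_RATE:.0%}")
```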

3 Vendor Bound Accountability Through Procurement

When AI capabilities are procured from commercial vendors, accountability often fragments. Deploying agencies claim they lack insight into proprietary algorithms; vendors claim they are not responsible for deployment decisions; affected communities are told that commercial confidentiality prevents disclosure.

Treaty consistent procurement must bind vendors to substantive accountability obligations:

  • Audit access: Independent evaluators designated by Māori governance bodies must have technical access to systems, including model architectures, training data, and decision logic.
  • Incident reporting: Vendors must report performance anomalies, security incidents, or detected bias to both deploying agencies and designated Māori oversight bodies.
  • Enforceable remedies: Contracts must specify concrete remedies for failures, including financial penalties, suspension of use, and termination of contracts, with enforcement authority distributed across multiple parties including Māori governance structures.
  • Secondary use limits: Data and insights generated through system operation must not be repurposed for other applications without explicit consent from relevant governance bodies.

4 Biometric Exclusion Zones and Anti Function Creep Rules

The Biometric Processing Privacy Code 2025 (Office of the Privacy Commissioner, 2025) provides baseline protections for biometric data. Treaty consistent governance must go further, recognising that some uses of biometric systems are categorically inconsistent with Māori cultural practices (tikanga).

Required protections include:

  • Biometric free zones: Certain contexts such as cultural sites, community gatherings, or exercise of traditional rights must be designated as biometric free zones where surveillance and identification systems are prohibited.
  • Purpose limitation: Biometric data collected for one purpose (e.g., secure authentication) must not be accessible for other purposes (e.g., law enforcement investigations or social service compliance monitoring) without explicit authorisation and independent oversight.
  • Audit rights: Māori governance bodies must have rights to audit biometric systems for compliance with purpose limitations, retention schedules, and security measures.
  • Failure remedies: When biometric systems fail, through misidentification, false matches, or security breaches, affected individuals and communities must have access to prompt, effective remedies that include both individual compensation and systemic corrections.
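
As one illustration of how an anti function creep rule could be enforced at the data access layer, the sketch below binds each biometric record to the purposes it was collected for; every identifier and field name here is hypothetical:

```python
# Hypothetical purpose-binding check for biometric records.
# Each record carries the purposes consented to at collection time.
ALLOWED_PURPOSES: dict[str, set[str]] = {
    "record-123": {"authentication"},  # collected for secure login only
}

def release(record_id: str, requested_purpose: str,
            independently_authorised: bool = False) -> bool:
    """Allow access only for an original purpose, or with explicit oversight."""
    if requested_purpose in ALLOWED_PURPOSES.get(record_id, set()):
        return True
    # Any new purpose (e.g. law enforcement matching) is function creep and
    # requires explicit authorisation under independent oversight.
    return independently_authorised

assert release("record-123", "authentication")                # original purpose
assert not release("record-123", "law_enforcement_matching")  # creep refused
```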

5 Public Register of High Impact AI Systems

Many AI systems affecting Māori communities operate with minimal public documentation. Affected individuals often do not know whether algorithmic systems influenced decisions about them, which vendor supplied the technology, or how to seek redress.

A public register of high impact AI systems should include:

  • What decisions does the system support or automate? What population is affected?
  • Which agency deploys the system? Which vendor supplies it? Who holds ultimate accountability?
  • What performance metrics have been assessed? What are error rates overall and for Māori populations specifically? When was the most recent evaluation?
  • How can affected individuals or communities challenge decisions, report harms, or seek review?
  • The register must be maintained as a living document, updated when systems change, are expanded to new contexts, or are retired.

This register serves multiple functions: it enables informed public discourse about AI deployment, provides essential information to affected individuals seeking redress, and creates accountability pressure by making system performance and governance practices visible.
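
By way of illustration, a register entry might be structured along the following lines; this is a hypothetical sketch whose fields simply mirror the questions listed above, not an existing standard:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RegisterEntry:
    """One entry in a public register of high impact AI systems (sketch)."""
    system_name: str
    decisions_supported: str   # what the system decides or automates
    affected_population: str
    deploying_agency: str
    vendor: str
    accountable_officer: str
    redress_contact: str       # how to challenge decisions or report harm
    overall_error_rate: Optional[float] = None  # None until independently evaluated
    maori_error_rate: Optional[float] = None    # Māori specific reporting
    last_evaluated: Optional[date] = None
    change_log: list[str] = field(default_factory=list)  # living-document history

# Illustrative entry; all names are invented for the example.
entry = RegisterEntry(
    system_name="Benefit eligibility triage (example)",
    decisions_supported="Prioritises applications for manual review",
    affected_population="All benefit applicants",
    deploying_agency="Example Agency",
    vendor="Example Vendor Ltd",
    accountable_officer="Deputy Chief Executive, Service Delivery",
    redress_contact="review@example.govt.nz",
)
```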

VI. Implications for International AI Governance

New Zealand’s situation offers broader lessons for international AI governance, particularly in jurisdictions with Indigenous Peoples or other communities holding distinct constitutional or collective rights.

Much of the international AI governance discourse focuses on technical best practices: model documentation, performance metrics, fairness criteria, and transparency mechanisms. These are valuable, but they are insufficient when AI systems operate in contexts where Indigenous Peoples hold constitutional rights to self-determination, data sovereignty, or collective governance.

Technical governance that ignores constitutional relationships is incomplete governance. It may optimise for certain values, such as accuracy, efficiency, and user experience, whilst systematically undermining others: collective autonomy, cultural integrity, and Treaty based partnership. The result is systems that function well by narrow technical criteria whilst producing systematic constitutional violations.

International AI governance initiatives increasingly rely on principles based, voluntary frameworks: corporate ethics statements, multi stakeholder guidelines, and industry self-regulation. New Zealand’s experience demonstrates that voluntary frameworks predictably fracture when subjected to real world pressures: procurement timelines, budget constraints, competitive dynamics, and the allure of operational efficiency gains.

This is not a failure of commitment or goodwill; it is a structural feature of voluntary governance. When compliance is optional and enforcement is weak, the short-term incentives favour rapid adoption over careful governance. For communities that cannot opt out of these systems, voluntary governance provides insufficient protection.

1 Indigenous Data Sovereignty Requires Infrastructure Governance

The international Indigenous data sovereignty movement emphasises collective rights over data about Indigenous communities. However, data sovereignty is undermined when the infrastructure through which data flows (digital identity systems, interoperable health records, integrated government service platforms) is designed without Indigenous governance from the outset.

AI governance cannot be separated from infrastructure governance. Once infrastructures are established with particular data flows, access controls, and interoperability standards, subsequent AI deployments inherit those structural features. If Indigenous governance is bolted on afterwards, it operates at a permanent disadvantage, attempting to constrain systems designed without regard for Indigenous rights.

2 Reactive Redress Is Insufficient

Many governance frameworks emphasise ex post accountability: impact assessments after deployment, bias audits when harms emerge, redress mechanisms for affected individuals. These are necessary, but they are insufficient for Indigenous Peoples whose rights include prospective participation and self-determination.

The Waitangi Tribunal’s Wai 2522 inquiry (Waitangi Tribunal, 2020, 2021) demonstrates the inadequacy of reactive approaches. When major infrastructural decisions, such as international trade agreements governing data flows, are made without Treaty consistent processes, Māori are relegated to seeking redress after the fact. By that point, policy space is constrained, systems are entrenched, and reversing course is politically and economically costly.

Treaty consistent governance requires prospective participation: shared authority over whether systems should be built, not merely compensation when they cause harm.

VII. Limitations and Future Work

This paper articulates requirements for Treaty consistent AI governance but does not resolve all implementation challenges. Several questions require further research and community deliberation:

Institutional design: What institutions should hold Māori decision rights over AI systems? Should these be existing Treaty based organisations, new bodies established specifically for AI governance, or distributed authority across multiple institutions?

Scope boundaries: Which AI systems should be subject to Treaty consistent governance requirements? All systems touching Māori data? Only high impact public sector systems? How should private sector systems that affect Māori communities be addressed?

Vendor compliance mechanisms: How can requirements for audit access, incident reporting, and enforceable remedies be operationalised in procurement contracts when AI capabilities are supplied by international vendors operating across multiple jurisdictions?

Resource constraints: Implementing rigorous evaluation, independent auditing, and ongoing monitoring requires significant resources. How should these costs be distributed? What mechanisms ensure adequate resourcing for Māori governance bodies exercising these responsibilities?

These questions do not undermine the core argument that Treaty consistent AI requires enforceable Māori governance, but they do indicate substantial implementation work ahead. This paper aims to establish the constitutional foundation and outline concrete requirements; translating these into operational governance structures will require sustained engagement between government agencies, Māori governance bodies, technical experts, and affected communities.

VIII. Conclusion

New Zealand’s dense stack of AI governance instruments, including the Algorithm Charter (Stats NZ, 2020), the Privacy Act 2020, the Digital Identity Services Trust Framework Act 2023, the Biometric Processing Privacy Code 2025 (Office of the Privacy Commissioner, 2025), GenAI guidance (New Zealand Government Chief Digital Office, 2025), and the national AI strategy (Ministry of Business, Innovation and Employment, 2025a), represents significant policy effort. Yet Māori communities continue to face systematic harms from AI systems that formally comply with these frameworks whilst shifting power away from Māori control, expanding surveillance infrastructure, and embedding disproportionate error burdens into essential services.

The fundamental gap is not more principles or additional voluntary guidance. It is enforceable Māori governance across the entire AI lifecycle, grounded in the constitutional relationship established by Te Tiriti o Waitangi (Treaty of Waitangi Act 1975). The Waitangi Tribunal’s Wai 262 inquiry (Waitangi Tribunal, 2011) demonstrated how policy frameworks that treat Māori as stakeholders rather than Treaty partners systematically erode Māori authority. The Wai 2522 inquiry (Waitangi Tribunal, 2020, 2021) showed that infrastructural decisions made without Treaty consistent processes relegate Māori to reactive redress rather than shared authority.

Treaty consistent AI governance requires five testable obligations: Māori decision rights across the lifecycle, independent evaluation with Māori relevant subgroup reporting, vendor bound accountability through procurement, biometric exclusion zones and anti-function creep protections, and a public register of high impact systems. These are not aspirational goals but minimum requirements for governance that respects Māori constitutional rights.

For the international AI community, New Zealand’s lesson is direct: technical governance that ignores Māori constitutional relationships is incomplete governance. If AI is to be genuinely “responsible” in contexts involving Māori, it must be Treaty consistent, with decision rights, independent assurance, and remedies that function when systems fail. Anything less is compliance theatre.

References

Business.govt.nz. (2025, July 18). New guidance to help you use AI responsibly. https://www.business.govt.nz/

Digital Identity Services Trust Framework Act 2023 (N.Z.). https://www.legislation.govt.nz/act/public/2023/0026/latest/LMS90123.html

Ministry of Business, Innovation and Employment. (2025a). New Zealand’s strategy for artificial intelligence: Investing with confidence. https://www.mbie.govt.nz/

Ministry of Business, Innovation and Employment. (2025b, July). Responsible AI guidance for businesses. https://www.mbie.govt.nz/

New Zealand Government Chief Digital Office. (2025, February). Responsible AI guidance for the public service: GenAI. Department of Internal Affairs.

Office of the Privacy Commissioner. (2025). Biometric processing privacy code 2025. https://www.privacy.org.nz/

Privacy Act 2020 (N.Z.). https://www.legislation.govt.nz/act/public/2020/0031/latest/LMS23223.html

Stats NZ. (2020, July). Algorithm charter for Aotearoa New Zealand. https://data.govt.nz/

Treaty of Waitangi Act 1975, Schedule 1 (N.Z.). https://www.legislation.govt.nz/act/public/1975/0114/latest/DLM435368.html

Waitangi Tribunal. (n.d.). Māori and English texts. https://waitangitribunal.govt.nz/

Waitangi Tribunal. (2011, July 2). Ko Aotearoa Tēnei: Report on the Wai 262 claim released. https://waitangitribunal.govt.nz/

Waitangi Tribunal. (2020, May 14). Tribunal releases report on the CPTPPA (Wai 2522, Stage 2). https://waitangitribunal.govt.nz/

Waitangi Tribunal. (2021, November 18). Tribunal releases report on CPTPP (Wai 2522, Stage 3). https://waitangitribunal.govt.nz/

DISCLAIMER: This post is the personal opinion of Dr Karaitiana Taiuru and is not reflective of the opinions of any organisation that Dr Karaitiana Taiuru is a member of or associates with, unless explicitly stated otherwise.
