
Te Kete o Karaitiana Taiuru (Blog)


Technology Risks Facing Māori in 2026

The year 2026 represents a critical juncture in the intersection of technology deployment and Māori rights. Multiple technological trends are converging to create unprecedented risks for Māori communities, governance structures, and cultural integrity. This document examines the primary technology-related risks facing Māori in 2026 through a governance and rights-based framework.

This analysis does not position Māori as inherently opposed to technological advancement. Rather, it acknowledges that Māori communities face disproportionate exposure to technological harms, including surveillance, discriminatory automated decision-making, and unauthorised extraction of cultural knowledge. The assessment focuses on nine critical risk domains where immediate governance attention is required.

Introduction

The technological landscape of 2026 differs fundamentally from previous years. What were previously experimental or limited deployments have transitioned into normalised infrastructure. Several concurrent developments characterise this shift:

  • Biometric systems are transitioning from trial implementations to standard operational infrastructure across retail, venue management, and identity verification contexts.
  • Artificial intelligence and Generative Artificial Intelligence systems are being integrated into government services and frontline decision-making, primarily through procurement processes rather than legislative frameworks.
  • Digital identity infrastructure is increasingly functioning as a prerequisite for service access and civic participation.
  • Cybercrime, fraud schemes, and deepfake technologies continue to proliferate while regulatory responses remain inadequate.

This document examines technology risks through a Māori rights and governance framework. This perspective recognises that Māori communities experience distinctive vulnerabilities in technological contexts. These include:

  • Disproportionate surveillance and monitoring
  • Elevated consequences when access to services is restricted or denied
  • Increased risk of unauthorised extraction and commercialisation of cultural knowledge and identity markers

Biometric Systems and Facial Recognition

The Biometric Processing Privacy Code 2025 became enforceable on 3 November 2025, establishing 2026 as the first full calendar year of mandatory compliance. This regulatory milestone creates three likely organisational responses:

  • Formalisation and expansion of existing biometric systems to achieve compliance with the Code’s requirements
  • Withdrawal from high-risk biometric applications (less common)
  • Rebranding of biometric systems as ‘analytics’, ‘safety technology’, or ‘loss prevention’ while maintaining equivalent functional effects

Biometric systems function as automated suspicion infrastructure. They are designed to answer the question: ‘Is this individual the same person as someone on our watchlist?’ When deployed in retail environments, public venues, or quasi-security contexts, the risks extend beyond privacy concerns to encompass dignity violations, false positive matches, discriminatory profiling, and function creep whereby narrow initial purposes expand into generalised monitoring.
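
To make the false-positive risk concrete, the following is a minimal sketch of watchlist matching, assuming a generic face-embedding model and cosine similarity. The embedding dimensions, watchlist size, and thresholds are illustrative and do not describe any vendor’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(probe, watchlist, threshold):
    """Return watchlist entries whose similarity to the probe clears the threshold.

    A 'match' here is only a similarity score, not proof of identity.
    """
    return [name for name, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= threshold]

# Illustrative data: random vectors standing in for real embeddings.
rng = np.random.default_rng(seed=0)
watchlist = {f"entry_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # a shopper who is on nobody's list

# A loose threshold produces spurious hits even for an unrelated face.
print(len(match_watchlist(probe, watchlist, threshold=0.20)), "spurious matches at 0.20")
print(len(match_watchlist(probe, watchlist, threshold=0.75)), "matches at 0.75")
```

The operational point is that the matching threshold is a policy choice, not a technical constant: a threshold loose enough to catch most genuine matches will also flag people who merely resemble someone on the list, and those wrongly flagged bear the cost.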

Once biometric identifiers are captured and stored, they become difficult to contain. They may evolve into portable identity infrastructure spanning multiple systems, particularly when linked with digital identity frameworks, closed-circuit television networks, access control systems, or third-party risk assessment platforms.


Artificial Intelligence in Government Services

New Zealand government agencies are being actively enabled to adopt generative Artificial Intelligence through the Responsible AI Guidance for the Public Service. The country maintains transparency commitments through the Algorithm Charter for Aotearoa New Zealand, which operates on a voluntary, agency signatory basis.

When Artificial Intelligence systems influence decisions regarding eligibility determination, priority allocation, risk assessment, or enforcement actions in health, housing, welfare, education, and justice contexts, Māori communities face three distinct categories of harm:

  • Historical dataset bias: Patterns of historical under-service become interpreted as evidence of lower need, perpetuating existing inequities.
  • Proxy discrimination: Geographic location, educational institution attended, and whānau network patterns may function as proxies for ethnicity, creating discriminatory effects without explicit ethnic categorisation (see the sketch after this list).
  • Procedural harm: Affected individuals frequently cannot access, challenge, or correct what automated systems have determined about them.
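
To illustrate the proxy-discrimination mechanism, here is a minimal sketch using entirely made-up data: ethnicity is never given to the model, yet a correlated feature (postcode) carries the historical bias through into its predictions. The variable names and correlation strengths are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 10_000

# Made-up population: group membership (standing in for ethnicity) correlates
# strongly with postcode area, but true underlying need is identical.
group = rng.integers(0, 2, size=n)
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)  # 80% correlated proxy
need = rng.normal(size=n)

# Historically biased labels: group 1 was under-served in the past, so its
# genuine need is under-recorded in the training data.
approved = ((need > 0) & ~((group == 1) & (rng.random(n) < 0.4))).astype(int)

# The model never sees `group`: only postcode and need.
X = np.column_stack([postcode, need])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# Identical true need, yet group 1 receives a lower predicted approval rate,
# because postcode smuggles the group signal into the model.
```

Removing the ethnicity column, in other words, does not remove the discrimination; it only removes the ability to see it in the model’s inputs.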

Generative Artificial Intelligence introduces an additional failure mode: the automated generation of summaries, case notes, or recommendations that are linguistically fluent and appear authoritative but contain factual errors or inappropriate inferences. The harm is compounded when staff members trust these outputs because of their professional presentation, potentially leading to decisions based on incorrect information.

Digital Identity Infrastructure

The Digital Identity Services Trust Framework Act 2023 and associated governance structures, including a statutory Māori Advisory Group, form part of a broader transition toward digital identity infrastructure. The Trust Framework Rules (consolidated reference version) and agreed engagement protocols involving the Trust Framework Board and Māori Advisory Group establish the operational parameters for this system.

Digital identity systems can provide genuine benefits when implemented with meaningful voluntariness and inclusivity. However, these systems become harmful when they evolve into:

  • Soft compulsion: Services that theoretically remain accessible through alternative channels (telephone, in-person) but are functionally available only through digital identity verification.
  • Tiered service access: Creation of preferential ‘fast lane’ access for digital identity holders while relegating others to degraded service quality.
  • Dataset linkage infrastructure: Stable identifiers that enable cross-system correlation and the creation of comprehensive profiles spanning multiple domains of activity (see the sketch after this list).
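
As a minimal illustration of the linkage point: once separate services key their records on one stable identifier, assembling a cross-domain profile is a pair of trivial joins. The datasets and field names below are invented.

```python
import pandas as pd

# Three unrelated services, each holding only its own slice of a person's life,
# but all keyed on the same stable digital identifier.
health = pd.DataFrame({"digital_id": ["A1", "B2"],
                       "clinic_visits": [7, 1]})
housing = pd.DataFrame({"digital_id": ["A1", "B2"],
                        "tenancy_status": ["social housing", "owner"]})
transport = pd.DataFrame({"digital_id": ["A1", "B2"],
                          "monthly_trips": [52, 8]})

# One identifier and two joins produce a whole-of-life profile.
profile = health.merge(housing, on="digital_id").merge(transport, on="digital_id")
print(profile)
```

No single agency in this sketch ever collected a whole-of-life profile; the stable identifier did that work on its own.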

In practical terms, digital identity infrastructure can quietly establish the foundation for whole-of-life profiling without explicit policy acknowledgement of this function.


Offshore Cloud and Jurisdictional Risks

The majority of generative Artificial Intelligence and contemporary digital platforms operate through offshore cloud ecosystems. New Zealand’s system leader guidance explicitly addresses cloud jurisdictional risk, and Information Privacy Principle 12 of the Privacy Act restricts disclosure of personal information outside New Zealand unless comparable safeguards or other permitted grounds apply.

For Māori organisations, offshore cloud jurisdiction presents risks beyond standard privacy concerns:

  • Loss of control over taonga data, including information linked to whakapapa
  • Vendor terms of service that permit broad reuse, auditing, or analysis of stored data
  • Practical difficulty enforcing rights across international borders
  • Ambiguity regarding Artificial Intelligence training processes, including what data is retained, what patterns are learned, and what information is shared within vendor ecosystems

The year 2026 will likely see increased organisational adoption of Artificial Intelligence features, often bundled into office productivity suites, customer relationship management systems, and collaboration tools. The critical risk is that procurement decisions, often made for convenience or cost efficiency, become de facto sovereignty decisions with long-term implications for data control and cultural authority.


Cybersecurity and Ransomware

New Zealand’s National Cyber Security Centre reports managing approximately one incident per day with the potential for national-level harm. For Māori trusts, service providers, iwi authorities, and Māori small-to-medium enterprises, which typically operate with constrained resources while serving communities with immediate needs, cyber harm manifests as:

  • System lockouts preventing service delivery
  • Unauthorised disclosure of whānau data
  • Financial losses from fraud
  • Service disruption affecting vulnerable community members
  • Reputational damage that may undermine trust relationships for extended periods

The most prevalent attack pathways remain fundamentally unchanged: credential theft, phishing campaigns, inadequate authentication mechanisms, and unpatched systems. However, these traditional vectors are now amplified by Artificial Intelligence-generated deception and deepfake voice synthesis, which significantly increase their effectiveness.

Deepfake Technology and Digital Harm

Deepfake technology is not a prospective risk but an active harm affecting individuals in Aotearoa New Zealand. Radio New Zealand reported in November 2025 that Netsafe received hundreds of complaints about non-consensual sexually explicit deepfakes during 2025. Netsafe’s reporting documents record levels of digital harm, including Artificial Intelligence-enabled abuse, sextortion schemes, and fraud.

Legal reform is advancing, though at a measured pace. A Deepfake Digital Harm and Exploitation Bill is currently listed on the legislative agenda. However, the harm from deepfake content extends beyond digital spaces: fabricated material appears in educational institutions, workplaces, and whānau gatherings, with the potential to escalate into physical harm.

Evidence demonstrates that wāhine and rangatahi experience the most severe targeting from deepfake technology and technology-facilitated abuse. This pattern reflects broader dynamics of gendered and age-based digital violence.


Electoral Manipulation and Foreign Interference

The Electoral Commission confirms that the next general election is scheduled for 2026, with the final legally permissible date being 19 December 2026. Electoral periods intensify several threat categories:

  • Coordinated disinformation campaigns
  • Synthetic audio and video content designed to misattribute statements
  • Targeted intimidation of candidates and community leaders
  • Deliberate attempts to polarise communities and exacerbate social divisions

The New Zealand Security Intelligence Service (SIS) has publicly described foreign interference as targeting both political systems and broader social structures (societal interference). The Ministry of Justice notes legislative changes to strengthen New Zealand’s response to foreign interference and espionage. The relevant legislation entered into force in late 2025.

Māori issues are regularly instrumentalised as wedge political topics and culture war narratives. Deepfake technologies and micro-targeted influence campaigns reduce the cost and complexity of such manipulation, making it accessible to a broader range of actors with diverse motivations.


Commercial Extraction of Māori Cultural Knowledge

Beyond surveillance and decision-making harms, 2026 presents risks from commercial extraction of Māori cultural assets:

  • Māori visual designs, language elements, and cultural markers being incorporated into Artificial Intelligence training datasets without authorisation
  • Development and sale of ‘Māori-themed’ Artificial Intelligence products without Māori governance or consent
  • Utilisation of Māori identity as brand aesthetics while disregarding Māori rights and authority

This risk category is defined less by specific legislation than by Artificial Intelligence market dynamics, which reward the rapid capture of attention and data and frequently treat Indigenous rights as secondary considerations or afterthoughts.


Wearable Recording Devices in Culturally Sensitive Spaces

Smart glasses, body-worn cameras, and always-recording wearables are moving from early-adopter technology to everyday accessories. Meta’s Ray-Ban smart glasses, Apple’s Vision Pro, and similar devices can now record video and audio continuously, often with no obvious indicator that recording is active.

For Māori, this creates profound risks in spaces where kawa, tikanga, and tapu govern what can be recorded, shared, or observed.

  • Marae spaces, where specific protocols determine who may speak, what may be photographed, and when recording is appropriate. A visitor wearing smart glasses could unknowingly (or deliberately) capture pōwhiri, whaikōrero, or karakia that are tapu or culturally restricted.
  • Tangi, where deeply private moments of grief and whānau vulnerability should be protected from extraction, broadcast, or commercial use, but which could be livestreamed or shared later on social media.
  • Te Matatini and other cultural performances, which embody generations of mātauranga and where intellectual property rights, performance rights, and cultural authority matter, but where anyone in the crowd could be livestreaming or training AI models on haka, waiata, and poi.
  • Wānanga and other knowledge-transmission spaces, where mātauranga is shared under specific conditions rather than for general consumption or AI scraping.

The risk is the loss of control over taonga, and the exposure of practices that were never meant for mass distribution. Once recorded and uploaded, that content can be used to train AI, sold as stock footage, or shared in contexts that violate the kaupapa of the original event.

Unlike traditional cameras (which are visible and can be challenged), wearable devices are designed to be inconspicuous. By the time someone realises they’ve been recorded, the damage is done, and the content may already be synced to cloud servers offshore, beyond the reach of New Zealand law or tikanga Māori.


Governance and Organisational Responses

Effective responses to these risks do not require technological perfection but rather governance discipline and principled decision-making. The following recommendations are designed for practical implementation by boards, leadership teams, and organisational management.

  • Declare specific technologies as prohibited absent compelling justification: no biometric systems without genuine opt-out mechanisms and documented necessity; no generative Artificial Intelligence for decisions affecting rights or entitlements without strict safeguards. Utilise the biometric compliance deadline as a catalyst for policy formalisation.
  • Identify all contexts where automated systems can deny services, establish priorities, assign risk labels, or trigger enforcement actions. Require transparency mechanisms, human override capability, appeal processes, and ongoing monitoring for all such systems.
  • Require jurisdiction assessment and data exit strategies for all cloud and Artificial Intelligence procurements, not merely pricing and feature comparisons. Recognise that convenience driven procurement decisions may have long-term sovereignty implications.
  • Integrate Information Privacy Principle 12 assessments into contracting and vendor onboarding processes, particularly for Artificial Intelligence tools embedded in office platforms. Ensure offshore data transfer decisions receive appropriate scrutiny.
  • Implement multi-factor authentication universally, establish restorable backup systems, enable comprehensive logging, maintain patch currency, and develop incident response protocols. Reference National Cyber Security Centre threat reporting as a foundation for board-level risk assessment.
  • Establish verification procedures for sensitive instructions, including callback protocols, two-person verification requirements, and agreed authentication phrases (a minimal sketch follows this list). Develop rapid-response communication plans for deepfake incidents.
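
As a sketch of how the verification procedures above could be encoded in an internal tool, assuming hypothetical names and thresholds: a sensitive instruction is held until both a call-back on a known number and a second, distinct approver have been recorded.

```python
from dataclasses import dataclass, field

@dataclass
class SensitiveInstruction:
    """A high-risk request (e.g. a payment) that must not rely on one channel."""
    description: str
    requested_by: str
    callback_verified: bool = False      # confirmed via a known phone number
    approvers: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requested_by:
            raise ValueError("Requester cannot approve their own instruction.")
        self.approvers.add(approver)

    def may_execute(self) -> bool:
        """Execute only after call-back verification AND two distinct approvers."""
        return self.callback_verified and len(self.approvers) >= 2

# Usage: a deepfaked 'CEO voice' message alone can never satisfy this check.
instr = SensitiveInstruction("Pay $25,000 to new supplier", requested_by="kaimahi_a")
instr.approve("manager_b")
instr.approve("trustee_c")
instr.callback_verified = True   # set only after phoning the known number
assert instr.may_execute()
```

The design point is that no single channel, and no single person, can authorise the action, which is precisely the property that voice synthesis and phishing attacks rely on being absent.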


Actions for Whānau and Communities

  • Assume that sophisticated scams will utilise synthesised voices of trusted individuals. Verify all requests for money or sensitive information through a secondary communication channel.
  • Enable multi-factor authentication on all accounts. Eliminate password reuse across services. These measures alone prevent a substantial proportion of common attacks.
  • Exercise judgement when sharing high-quality facial images and voice recordings publicly, particularly for rangatahi. Deepfake generation tools perform best with clean input data.
  • If targeted by image-based abuse or deepfake content, treat it as a safety concern rather than a source of shame. Seek support early. Netsafe provides guidance and support pathways for affected individuals.
  • As wearable devices become cheaper and more common, Māori organisations and event organisers will need clear policies on wearable tech at cultural events, stronger enforcement mechanisms, and potentially technology detection tools. The alternative is accepting that every marae, every tangi, every kapa haka performance becomes fair game for passive surveillance and extraction.


Conclusion

The common thread connecting all identified risks is the concentration of power: who possesses the authority to identify individuals, construct profiles, determine eligibility, control access to services, establish truth claims, and derive profit from identity and cultural knowledge.

The year 2026 presents an opportunity to transition from reactive harm mitigation to proactive Māori-led technology governance. This transition requires:

  • Establishment of clear technological boundaries and exclusion zones
  • Disciplined procurement processes that recognise sovereignty implications
  • Transparent decision-making systems with meaningful accountability
  • Practical resilience measures addressing deepfakes, cybercrime, biometrics, and automated decision-making

Implementation of these measures does not require technological expertise beyond organisational capacity. Instead, what is required is governance commitment, clear policy frameworks, and consistent application of principled boundaries. The convergence of technological trends in 2026 makes such governance discipline not merely advisable but essential for protecting Māori rights, dignity, and self-determination.

DISCLAIMER: This post is the personal opinion of Dr Karaitiana Taiuru and is not reflective of the opinions of any organisation that Dr Karaitiana Taiuru is a member of or associates with, unless explicitly stated otherwise.
