Te Kete o Karaitiana Taiuru (Blog)

Human Resilience with AI

I was honoured to be one of the 386 international AI ethics experts to contribute a short essay to the “Building a Human Resilience Infrastructure for the Age of AI: Experts Call for Radical Change Across Institutions, Social Structures” report and contribute a Te Ao Māori perspective to the Epistemic Vigilance: Discerning Truth, Illusion and Misinformation chapter.

The essay is below. The full publication can be downloaded for free here.

There is also a short summary here.

The key priority is an all-encompassing systems response, taken up by leaders from all walks of life, to serve humanity's best interests in an environment pervaded by AI.


“From an Indigenous Peoples perspective, and in particular from the perspective of Māori, the Indigenous Peoples of New Zealand, AI is likely to become a significant and, in many areas, beneficial force shaping society. However, if AI is allowed to be developed and deployed without Indigenous authority, it will replicate the familiar pattern: innovation proceeds quickly and Indigenous peoples are left managing the harms. The immediate priorities are therefore not merely adoption or innovation, but cultural protection of traditional knowledge, enforceable intellectual property and related rights, and robust partnership terms with government and large technology companies to ensure bias, discrimination, cultural appropriation and racism are not embedded at scale.

“Māori have already experienced successive waves of technological change that carried colonial dynamics: the telephone, the early internet and World Wide Web, social media platforms and now AI. Each wave brought genuine benefits (connection, information access, economic and social opportunity) while also accelerating extraction, misrepresentation and dependency on externally owned infrastructure. Too often, Māori were positioned as end-users rather than co-designers, regulators or owners. AI differs because it does not only transmit content; it learns from data, encodes patterns into models and then drives automated judgments and persuasive systems. That makes it uniquely powerful and uniquely risky for communities whose knowledge, identity markers, language and cultural expressions have historically been appropriated, misinterpreted or ignored.

“Māori were not leading participants in earlier technology revolutions. With AI, that is changing. Māori are increasingly taking strategic leadership positions within governance bodies, advisory roles, research programmes and Māori enterprises to shape how AI is used and regulated. This leadership must be translated into practical power: procurement standards, data governance controls, licensing models for cultural works and enforceable requirements for transparency and contestability in any high impact automated decision-making system.

“If Māori simply ignore AI, the risk is not a neutral falling behind but a rapid re-colonisation through technology: an intensified extraction of cultural value, increased surveillance and control, and the displacement of Māori knowledge systems by automated tools that carry neither context nor accountability. The rapid pace of AI-driven change can create cultural erosion, and missed opportunities for self-empowerment, global influence and economic development could occur faster than in any previous technological shift.

“At the community level, resilience will require an honest acceptance that there will be trade-offs. The goal is not to treat AI as inherently good or bad, but to establish boundaries that protect what must be protected while enabling benefits that strengthen communities. Consider art as a practical example of technological evolution. Indigenous artistic practice has always interacted with tools: from natural and hand-made instruments to the adoption of metal implements, to electrical tools, then to digital creation through computers. AI now enters as a tool that can generate, remix and imitate styles at scale. That raises legitimate concerns about theft, dilution and misattribution, but it also creates pathways for new Indigenous creativity and new markets. The strategic challenge is to build mechanisms that differentiate authentic Indigenous art, whether created with AI assistance or not, from extractive imitation. This includes provenance standards, certification marks, community-defined authenticity criteria and licensing models that require consent and compensation when Indigenous styles or cultural elements are used for training or commercial outputs.

“A critical community issue is deciding what traditional knowledge should be shared with AI systems, under what conditions and what knowledge should never be digitised or externalised. Communities will need deliberate discussions guided by cultural protocols and local authority about tiered access: knowledge that can be public, knowledge that can be shared only under strict conditions and knowledge that must remain within place based and relational contexts. This must be paired with practical plans to sustain the living sources of knowledge: ensuring individuals and communities can return to traditional places, maintain language and practice and transmit sacred knowledge through embodied relationships rather than through systems designed for replication and scale. Digital tools must not become the default container for what should remain human knowledge only.

“Surveillance is a real concern, particularly given historical and contemporary state monitoring of Indigenous communities. AI can increase the reach and speed of surveillance through facial recognition, predictive analytics and risk-scoring systems. Yet the same technical capabilities (pattern recognition, remote sensing, anomaly detection) can be used for public good. AI-enabled tools can support conservation of endangered species, improve monitoring of ecosystems, assist pest eradication programmes and strengthen traditional knowledge through better environmental intelligence.

“AI can also help identify images and archival artefacts whose provenance or identities have long been lost, enabling reconnection and restoration, provided this work is done with cultural authority, appropriate permissions and safeguards against further appropriation.

“The pathway forward is therefore not passive acceptance or blanket rejection, but Indigenous community-led governance. That means setting terms for partnerships with government and major technology firms, with clear rules on consent, benefit sharing, data protection, cultural safety, auditing for bias and enforceable accountability when harms occur.

“It also means investing now in Māori capability across AI policy, model evaluation, procurement and digital cultural infrastructure so Māori are not simply consulted, but are deciding, building and owning. If AI is to play a significant and beneficial role for Māori, it must be aligned with cultural norms and ensure that technology strengthens people and culture, rather than extracting from them.”

DISCLAIMER: This post is the personal opinion of Dr Karaitiana Taiuru and is not reflective of the opinions of any organisation that Dr Karaitiana Taiuru is a member of or associates with, unless explicitly stated otherwise.