
Seeing is no longer believing in the age of images and videos generated by artificial intelligence (AI), and this is having an impact on elections in New Zealand and elsewhere.
Ahead of the 2025 local body elections, voters are being warned not to automatically trust that what they are looking at is real.
Deepfakes – images or video created with the use of AI to mislead or spread false information – were used in the 2024 United States presidential election campaign. Early in the campaign, a deepfake voice clip impersonating then president Joe Biden told voters not to cast a ballot in New Hampshire’s primaries.
Deepfakes have also appeared on the campaign trail in Australia. The Labor Party, for example, released an AI-generated video on its TikTok account.
But the worry is not just that deepfakes will spread lies about politicians or other real people. AI is also used to create “synthetic deepfakes” – images of fake people who do not exist.
Using artificially generated images and videos of both real and fake people raises questions around transparency and the ethical treatment of cultural and ethnic groups.
Cultural offence with AI isn’t a hypothetical concern. Australian voters have found some AI-generated campaign content jarring and culturally insensitive, with one white female politician using auto-tuned rapping in her campaign.
Australians have also reported an increase in deepfake political content.
Several countries are considering laws to manage the harms of AI use in political messaging.
Others have already passed legislation banning or limiting AI in elections. South Korea, for example, bans deepfakes in political campaigning in the 90 days before an election. Several US states have passed laws targeting deepfakes that misrepresent political candidates.
While New Zealand has several voluntary frameworks to address the growing use of AI in media, there are no explicit rules to prevent its use in political campaigns. To avoid cultural offence and to offer transparency, it is crucial for political parties to establish and follow clear ethical standards on AI use in their messaging.
Existing frameworks
The film industry is a good starting point for policymakers looking to establish a clear framework for AI in political messaging.
In my ongoing research about culture and technology in film production, industry workers have spoken about New Zealand’s world-leading standards on culturally aware film production processes and the positive impact this had on shaping AI standards.
Released in March 2025, the New Zealand Film Commission’s guidance on artificial intelligence takes a “people first” approach to AI, prioritising the needs, wellbeing and empowerment of individuals when developing and implementing AI systems.
The principles also stress respect and transparency in the use of AI, so that audiences are “informed about the use of AI in screen content they consume”.
The government’s guidance for public agencies, meanwhile, requires them to publicly disclose how AI systems are used and to practise human-centred values such as dignity and self-determination.
AI in NZ politics
The use of AI by some of New Zealand’s political parties has already raised concerns.
During the 2023 election campaign, the National Party admitted to using AI-generated images in its attack advertisements. And recent social media posts using AI by New Zealand’s ACT party were criticised for their lack of transparency and cultural sensitivity.
An ACT Instagram post about interest rate cuts featured an AI-generated image from the software company Adobe’s stock photo collection.
ACT whip Todd Stephenson responded that using stock imagery or AI-generated imagery was not inherently misleading, but said the party “would never use an actor or AI to impersonate a real person”.
My own search of the Adobe collection came up with other images used by ACT in its Instagram posts, including an AI generated image labelled as “studio photography portrait of a 40 years old Polynesian woman”.
There are two key concerns with using AI like this. The first is that ACT didn’t declare the use of AI in its Instagram posts. A lack of transparency around the use of deepfakes of any kind can undermine trust in political messaging. Voters end up uncertain about what is real and what is fake.
Secondly, the images were synthetic fakes of ethnic minorities in New Zealand. Academics and technology experts have long been concerned that AI-generated imagery can stereotype or misrepresent members of diverse communities.
Legislation needed
While the potential for cultural offence and misinformation with faked content is not new, AI alters the scale at which such fakes can be created. It makes it easier and quicker to produce manipulative, fake and culturally offensive images.
At a minimum, New Zealand needs to introduce legislation that requires political parties to acknowledge the use of AI in their advertising. And as the country moves into a new election season, political parties should commit to combating misinformation and cultural misrepresentation.
, Lecturer, Anthropology,
This article is republished from The Conversation under a Creative Commons license. Read the original article.