Artificial intelligence: your rights

When self-driving cars hit the market, politicians and academics alike questioned whether our criminal laws could deal with the new tech.
If the car had a prang or mowed down a bystander, who would stand in the dock and face the jury – the driver, who had no control over the accident, or the AI-programmed hunk of metal that could steer itself?

In fact, the law had dealt with the question of self-driving cars hundreds of years before the vehicles were even invented. When horses were the in-vogue mode of transport, their riders generally faced responsibility for the horses’ actions even though the animals could act of their own accord.
However, some legal experts argue that our consumer laws will need to adapt quickly to keep up in this brave new world.
AI in a nutshell
Artificial intelligence (AI) generally refers to the ability of computer systems to do tasks that would historically require human intelligence.
As Nick Gelling, product test writer at Consumer NZ, said, “Artificial intelligence is a broad field of research that’s been expanding for decades. Depending on how you define it, it’s been developing since the invention of the modern computer – in 1950, Alan Turing proposed a test to work out whether a computer could think.”
Gelling said that, until recently, most consumers would only have encountered AI being used to sort or categorise data, and they might not even have realised AI was involved.
“For example, email services use AI to detect spam, and facial recognition systems use AI to match photos to existing records. These are called discriminative models, and they’re also used extensively in industries like medical research and finance.”
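To make the distinction concrete, here’s a minimal sketch of a discriminative model: a toy spam classifier built with Python’s scikit-learn library. The training emails, the library choice and the Naive Bayes model are our own illustrative assumptions, not a description of any real spam filter, but the shape is the same: the model learns to label existing data rather than create anything new.

```python
# Illustrative only: a toy discriminative model that labels emails
# as spam or ham. Real spam filters train on millions of messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",
    "claim your cash reward today",
    "meeting agenda for monday",
    "lunch with the team tomorrow",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn the text into word counts the model can learn from.
vectoriser = CountVectorizer()
features = vectoriser.fit_transform(emails)

# Fit a simple Naive Bayes classifier to the labelled examples.
model = MultinomialNB()
model.fit(features, labels)

# The model sorts new data into existing categories - it generates nothing.
print(model.predict(vectoriser.transform(["free cash prize"])))  # ['spam']
```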
The trendy AI models being hyped more recently, and the ones causing the most concern, are generative rather than discriminative. “They’re designed to generate new data rather than just analyse existing data,” Gelling said.
A popular example is OpenAI’s ChatGPT. Users can ask the model to answer a question, write text or problem-solve.
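For contrast, here’s an equally minimal sketch of calling a generative model through OpenAI’s Python library. The model name is just an example and you’d need your own API key; the point is that the response is newly generated text rather than a label attached to existing data.

```python
# Illustrative only: asking a generative model a question.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "user", "content": "Write a one-line shopping list poem."}
    ],
)

# Unlike the spam classifier, this output is new text the model created.
print(response.choices[0].message.content)
```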
Microsoft has recently integrated a similar AI model, called Copilot, into its Office software. It’s these generative AI models that could pose the most harm and therefore should be front of mind in any law reform process.
Your rights when it comes to AI
How might developing AI models interact with our consumer laws, now and in the future, and how could we update our laws to make them more robust against any detrimental effects of AI?
Below, we take a look at the Consumer Guarantees Act and the Fair Trading Act to see how well these acts currently respond to potential AI issues.
AI and the Consumer Guarantees Act
The Consumer Guarantees Act (CGA) provides a scheme for rights and remedies when consumers have problems with goods or services. It has a broad application, and there are many ways an issue involving AI might manifest itself in a CGA context.
Scenario: You purchase a pair of trousers online and, when they arrive, they look nothing like the picture that had attracted you to buy them. Unbeknownst to you, the picture had been generated by AI. What are your rights?
The law will likely apply here just as it would if the seller had uploaded a picture of a real pair of trousers and then sent the wrong pair. In both scenarios, the seller has breached your right to receive goods that match their description. Because the trousers don’t match the picture, you’re entitled to a remedy under the CGA. The seller will usually have the option of giving you either a refund or a replacement, although a replacement will only be possible if the seller actually has the right pair of trousers.
Scenario: You buy a fridge with AI built into it. It can do things like write shopping lists based on the food available inside the fridge. Soon after purchase, the AI functionality stops, but the fridge continues to work just like a fridge should. What are your rights?
Even though the fridge is still keeping food cool, you’re entitled to a refund, repair or replacement, whichever the supplier chooses. If you bought the fridge specifically because of its AI functionality, you could argue there was a failure of substantial character and formally reject the fridge. In that case, the choice between a refund and a replacement would be yours.
Scenario: You purchase a computer software subscription to an AI model, like ChatGPT or Microsoft Copilot. A month later, the AI stops performing as you expected. What are your rights?
Under the CGA, computer software is classed as a “good” rather than a service. This is the result of a 2003 amendment to the act intended to give consumers clarity about their rights when it comes to software. The distinction between good and service matters because the consumer guarantees that apply depend on which one something is. However, AI, particularly generative AI, is different to the traditional computer software that legislators had in mind, and it’s difficult to apply the same law to it.
As a good, an AI model must be fit for purpose, but how can consumers trust that something that changes, adapts and has the potential to produce false outputs (also called hallucinations) will remain fit for purpose?
Unlike regular software, which does only what its human programmers direct, AI learns from data and changes its behaviour over time. That ability to change might mean the model transforms into something entirely different from what you originally purchased.
Under the CGA, an AI model must also be of acceptable quality, which includes durability and safety. Exactly how a piece of software that is inherently adaptable is meant to be durable isn’t obvious. Perhaps developers would be required to update and maintain models for a reasonable time to ensure the software doesn’t devolve. But if the model changes materially, would it breach its guarantee of acceptable quality?
Generative AI’s ability to produce sexually explicit, graphic and biased material might cause safety issues. The question is whether any of these instances would be captured by the CGA’s safety requirement, which, despite being undefined, tends to evoke concepts of physical safety, like mandatory standards for children’s toys to prevent choking. AI models might not pose a choking hazard, but interacting with them could affect a person’s mental health and wellbeing, with one 2024 news report suggesting an AI chatbot “manipulated” a young man into taking his own life.
There are several questions raised in this scenario that don’t have clear answers yet. Where necessary, law reform might help to clarify them.
AI and the Fair Trading Act
The Fair Trading Act (FTA) combats misleading and unfair trade practices, among other things.
As with the CGA, there are a range of ways AI might interact with the FTA and its general prohibition on misleading and deceptive conduct. Instances where a trader misleads you about an AI product are likely to be fairly easy to deal with under the act: it prohibits misleading conduct regardless of what a trader misleads you about, whether that’s a surfboard or an environmental certification.
Scenario: A developer says its AI model can generate images and text, but you buy the software, and it only generates text. What are your rights?
This scenario engages the CGA right to receive goods that match their description, and it’s also a clear-cut example of the misleading or deceptive conduct prohibited under the FTA.
But what happens when, rather than the developer, it’s the AI model itself that misleads you?
Scenario: Imagine that a new pair of shorts you bought online arrives at your door with a rip at the seam. It’s a clear breach of the CGA. Yet the seller’s AI chatbot says you don’t have any rights to a refund, repair or replacement. Is the chatbot correct or is it misleading you, and what are your rights now?
Because the new shorts arrived ripped, your right to goods of acceptable quality has been breached, and you’re entitled to a refund, repair or replacement under the CGA.
But more than that, by stating you aren’t entitled to a remedy, the chatbot has misled you about your consumer rights. Under the FTA, it is illegal for any person in trade to mislead a consumer about their consumer rights. However, to attribute liability under the current legal framework, a person must have misled a consumer. So who is responsible?
It would appear logical to attribute liability to the business selling the shorts, as it was the one using the AI chatbot. But the business might be able to argue that the actions of the AI chatbot were beyond its reasonable control. It could be that no one can be held responsible.
This is another example where our current laws might fail to adequately protect consumers from AI harms. We could let the law play out and see if the courts apply the same logic as they did when it came to self-driving cars and horses.
The British Columbia Civil Resolution Tribunal, a Canadian equivalent of our Disputes Tribunal, recently ruled that Air Canada had to compensate a passenger after its AI chatbot gave an incorrect explanation of the airline’s bereavement fare policy. But Canadian decisions aren’t binding on New Zealand courts.
Instead, the potential for consumer harm might warrant a proactive approach, like reforming the law to tweak the definition of “personhood” or to provide a clear way of attributing fault to businesses that fail to take reasonable care to ensure their AI systems don’t breach the FTA.

Do we need bespoke AI laws?
In a July 2024 Cabinet paper, Minister of Science, Innovation and Technology Judith Collins recommended New Zealand lawmakers take a “proportionate, risk-based approach to AI”.
This means amending existing regimes, like our consumer laws, rather than developing a bespoke act.
Collins said frameworks like our consumer laws “are largely principles-based and technology neutral. These frameworks can be updated as and when needed to enable AI innovation or address AI harms.”
The dangers of not adapting
Kate Tokeley is a legal expert in consumer law and deputy chair of Consumer NZ’s board. Tokeley said that, when it comes to our current consumer laws, the rules we have in place “are based on increasingly outdated notions of how advertising and commercial communication take place.
“Legal systems will need to adapt to these [AI] developments to maintain control of misleading or deceptive commercial speech. … Truth is a cornerstone of a fair and efficient marketplace. The dangers of not adapting are disconcerting.”
Tokeley emphasised that without some legal controls “we might end up in a future where we are subjected to the manipulating forces of commerce in almost every waking moment.”
A game of wait and see
In October 2024, the Australian Treasury released a discussion paper reviewing the impact AI has or will have on Australian consumer law. The review focused on whether the Australian consumer laws are fit for purpose in the age of AI.
Our own consumer laws share similarities with Australia’s. They’ve helped protect consumers for decades with minimal updates. But the landscape is changing, and AI has the potential to cause consumers a lot of harm. Legal concepts like durability and safety, as well as personhood and theories of liability, may need to change to ensure consumer rights legislation remains relevant.
Unfortunately, changing any law is complicated, especially when updates are targeting a constantly evolving technology.
Tokeley said AI has, in essence, opened the doors to a new world of human existence.
“Redesigning legal regimes to effectively battle this new world will no doubt be a challenge. Any meaningful change will require lawmakers to first confront the fact that there are genuinely difficult problems that existing regulatory tools are ill-equipped to handle.”
That’s why a review of our consumer laws, like the review currently underway in Australia, could help regulators decide whether change is needed and on what scale.
