Experts Question Legality of AI Financial Advice in NZ

A significant and emerging legal challenge within New Zealand’s financial sector is rapidly coming into focus as industry experts question whether generative Artificial Intelligence chatbots are operating in breach of the nation’s stringent financial advice laws. The central issue revolves around the Financial Markets Conduct Act (FMCA) and whether its established regulatory framework is equipped to handle the swift advancements in AI technology, particularly its growing capacity to provide specific financial product recommendations directly to consumers. The consensus view suggests that when these AI tools move beyond providing general information to recommending particular products, they are very likely crossing a legal line established to protect retail clients. This has created a critical regulatory gray area that demands urgent and decisive attention from the Financial Markets Authority (FMA), the country’s primary financial regulator, to safeguard consumers from unregulated and potentially harmful advice.

The Call for Decisive Regulatory Action

The core of this concern was powerfully articulated by Tim Williams, a partner at the law firm Chapman Tripp, during a recent address to over 450 members of the Insurance Brokers Association of New Zealand (IBANZ). Williams presented a compelling argument that generative AI chatbots may be violating the FMCA’s rigorous licensing and regulatory requirements, prompting him to issue a direct call for the FMA to launch a formal investigation into the matter. The primary objective of such an investigation would be to ascertain if the existing legislation can appropriately respond to the rapid evolution of AI capabilities. Williams stressed the critical need for a deliberate policy decision on how to balance the public’s interest in accessing useful financial information with the profound risks associated with consumers receiving unsuitable advice from an unlicensed, unregulated source. The fundamental issue is that these AI-driven interactions occur entirely outside the comprehensive consumer protections mandated by the FMCA for all regulated financial advice.

An investigation by the FMA would need to address the deep-seated principles of consumer protection that form the bedrock of New Zealand’s financial legislation. The current situation presents a scenario where consumers might receive advice that appears personalized and authoritative but lacks any of the legally required safeguards, such as suitability assessments, transparent disclosure of conflicts of interest, or access to a dispute resolution scheme. Williams’ call for action was not merely a technical legal observation but a warning about a growing systemic risk. Without a clear regulatory response, there is a danger that a two-tiered system of advice could emerge: one that is fully regulated, compliant, and accountable, and another that is opaque, unregulated, and offers no recourse for consumers who suffer financial harm. A deliberate policy decision is therefore essential to prevent the erosion of trust in the financial advisory system and to ensure a level playing field where all providers of financial advice are held to the same high standards.

Defining the Critical Line Between Information and Advice

A central theme of the debate is the precise legal distinction between the permissible sharing of information and the prohibited giving of advice. Tim Williams carefully delineated what AI chatbots are legally allowed to do within the current New Zealand framework. These sanctioned activities include providing pure, objective factual information, such as the current interest rate on a specific bank’s savings account. AI is also permitted to offer advice on financial products as a general class, for instance, by explaining the typical characteristics of a term deposit or a mutual fund without recommending a specific one from a particular provider. Furthermore, it can act as a simple conduit, passing on financial advice that originates from another licensed person or entity. However, the critical point of contention, and the likely point of legal breach, occurs when these AI tools are prompted to go further, a capability many of them now possess. This legal line is crossed the moment the AI makes a specific recommendation of an individual financial product to a retail client.

This act of recommending a specific product is legally defined in New Zealand as a licensed activity, which brings with it a host of compliance duties and professional standards that current AI platforms are not equipped to meet. A licensed financial adviser is required by law to understand a client’s individual financial situation, needs, and goals before making any recommendation. They must adhere to a Code of Professional Conduct, prioritize the client’s interests, and provide clear, concise, and effective disclosure documents. AI chatbots, in their current form, do not perform these functions. They operate without a license, without a human’s professional judgment, and without the accountability framework that underpins the entire regulatory system. Consequently, when a chatbot suggests that a user should invest in a particular company’s stock or purchase a specific insurance policy, it is stepping into the role of a financial adviser without possessing the legal authority or the ethical and compliance infrastructure to do so.

Consumer Protection and Cross-Border Concerns

The overarching trend identified by legal experts is a growing concern about the potential for significant consumer harm and the steady erosion of hard-won regulatory safeguards. Williams emphasized that this issue should be of political concern because the direct effect of this unregulated activity is to deny recipients of AI-generated financial advice the substantial protections they are legally entitled to under the FMCA. These protections are specifically designed to ensure that any advice received is suitable, transparent, and provided by a qualified professional who is accountable for their recommendations. The concerns in New Zealand are not isolated; they mirror similar expressions of alarm in Australia, particularly regarding the suitability and legality of AI chatbots providing specific share trading recommendations. This shows the issue is not unique to New Zealand’s legal framework but is a broader challenge that regulators across jurisdictions are beginning to grapple with as AI technology becomes more sophisticated and accessible.

To clarify the legal position, Williams provided a clear, actionable definition of what would constitute a breach of New Zealand law as it currently stands. If an AI chatbot, operating in the ordinary course of its business, makes a recommendation to or for retail clients, gives an opinion about a specific financial product, designs a personalized investment plan, or provides other specific types of financial planning without holding a license, it would be illegally giving financial advice. This interpretation is further reinforced by the FMA’s own historical guidance on “robo-advice.” The regulator has previously established a clear position that any entity, human or automated, providing such tailored advice to retail clients in New Zealand must secure a financial advice provider license and comply with all associated duties. This precedent indicates that the FMA’s existing principles are directly applicable and that these advanced AI chatbots are likely operating in violation of established regulatory expectations.

An Industry United on Professional Standards

The perspective of the professional financial services industry was robustly represented by Katherine Wilson, the chief executive of IBANZ, who voiced strong support for the concerns raised by Williams. Demonstrating the seriousness with which the industry views this emerging threat, Wilson confirmed that IBANZ has formally raised the issue with the Financial Markets Authority. This proactive step is particularly relevant given that the FMA already has a dedicated workstream focused on ensuring New Zealanders have broad access to quality financial advice, placing this new technological challenge squarely within the regulator’s existing purview. The industry’s formal engagement signals that this is not merely an academic legal debate but a practical and urgent matter for professionals who are committed to upholding the integrity of their field. It positions the rise of unregulated AI advice as a direct challenge to the established standards of the profession.

Wilson drew a stark contrast between the unknown and unregulated quality of AI advice and the high standards consistently maintained by her organization’s members. She highlighted that IBANZ members employ several thousand qualified human financial advisers and that the organization operates a comprehensive professional development program to ensure these advisers are well-equipped to provide high-quality, compliant, and ethical financial advice. Her primary concern centers on the “questionable quality” of the information and recommendations available through AI tools and the “potential harm” that inaccurate, misleading, or unsuitable advice could inflict upon consumers. This perspective underscores the value of human oversight, professional judgment, and accountability, qualities that are currently absent in AI-generated advice. The industry’s position is clear: while technology can be a valuable tool, it cannot replace the rigorous standards and ethical obligations that define professional financial guidance.

Navigating Professional Obligations in the Age of AI

Beyond the systemic regulatory issues, the discussion also synthesized crucial guidance for practicing financial advisers who may be considering integrating AI tools into their own workflows. Tim Williams extended his analysis to warn these professionals about their own potential liability. He advised that advisers using generative AI must be acutely aware of the risk of breaching their own obligations under the Code of Professional Conduct for Financial Advice Services. If an adviser relies on AI-generated content when formulating recommendations for a client, they remain fully responsible for that advice. They must be able to create and maintain clear records that demonstrate their reliance on the tool was reasonable, verifiable, and professionally justified. Failure to do so could result in a personal breach of their professional duties, as the responsibility for the advice ultimately rests with the licensed individual, not the technology they used to help generate it.

Furthermore, Williams highlighted a critical data privacy and security consideration that advisers must manage diligently. He strongly advised financial professionals to “de-personalize” their prompts when interacting with any public or third-party AI tools. This means meticulously removing all personally identifiable client information, such as names, addresses, or specific financial details, before submitting a query. This practice is essential to ensure that confidential client data is only accessed by authorized personnel within a licensed financial advice provider, which is a key requirement under the Code Standards. Sharing sensitive client information with an external AI platform could constitute a significant privacy breach, exposing both the client and the adviser to risk. This cautionary advice underscores that the professional use of AI carries its own set of compliance and data security responsibilities that must be proactively and diligently managed.
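By way of illustration only, the sketch below shows one way an advisory firm’s internal tooling might strip client identifiers from a prompt before it reaches a third-party AI service. The function name, the regular-expression patterns, and the placeholder conventions are hypothetical assumptions for this example, not a prescribed method from Williams, IBANZ, or the FMA, and any real implementation would need to sit within a firm’s own compliance and privacy processes.

```python
import re

# Hypothetical illustration: replace common personally identifiable details
# with placeholders before a prompt is sent to any external AI tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
    "dollar_amount": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),
}

def depersonalise(prompt: str, client_names: list[str]) -> str:
    """Replace known client names and common PII patterns with placeholders."""
    cleaned = prompt
    for name in client_names:
        cleaned = re.sub(re.escape(name), "[CLIENT]", cleaned, flags=re.IGNORECASE)
    for label, pattern in PII_PATTERNS.items():
        cleaned = pattern.sub(f"[{label.upper()}]", cleaned)
    return cleaned

if __name__ == "__main__":
    raw = ("Jane Smith (jane.smith@example.co.nz, 021 555 1234) earns $87,500 "
           "and wants to compare income protection cover options.")
    print(depersonalise(raw, client_names=["Jane Smith"]))
    # -> "[CLIENT] ([EMAIL], [PHONE]) earns [DOLLAR_AMOUNT] and wants to ..."
```

Even with such tooling, the adviser remains responsible for checking that no identifying detail slips through, since pattern matching of this kind is necessarily incomplete.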

A Regulatory Crossroads

The expert analysis and industry response have firmly established that the intersection of generative AI and financial services in New Zealand creates a significant legal and ethical challenge. The debate has moved beyond theoretical concerns to highlight tangible risks of consumer harm and the potential undermining of a carefully constructed regulatory framework. The arguments presented by legal and industry leaders make it clear that the nation’s existing laws were not designed with such sophisticated and accessible technology in mind. This places the responsibility squarely on the Financial Markets Authority to provide clarity. The path forward depends on a decisive response that either interprets and applies existing legislation to this new context or begins the process of developing a new framework. Ultimately, the dialogue has defined a critical crossroads at which the future of financial advice regulation must be determined, so that consumer protection remains the paramount principle in an era of rapid technological change.
