In a decisive move to address the complexities of modern surveillance and national security, Canada’s primary intelligence watchdog has initiated a comprehensive review into the burgeoning use of artificial intelligence by the country’s security and spy agencies. This pivotal investigation by the National Security and Intelligence Review Agency (NSIRA), launched on January 1, 2026, aims to thoroughly examine how these powerful new tools are being defined, deployed, and governed. The central goal is to ensure that the integration of AI into operations that protect the nation remains firmly aligned with Canada’s legal framework, democratic values, and ethical standards. This probe represents a critical effort to establish robust oversight in an era of rapid technological advancement, seeking to balance the immense potential of AI with the fundamental rights and privacy of citizens. The findings are expected to shape AI policy and practice within the Canadian intelligence community for years to come.
A Deep Dive into Governance and Technology
NSIRA’s Comprehensive Investigation
The investigation launched by the National Security and Intelligence Review Agency is not a mere reactive measure but a deliberately proactive and forward-looking study aimed at mapping the existing landscape of AI deployment within the national security apparatus. Led by its chair, Marie Deschamps, NSIRA is utilizing its significant statutory authority, which provides the agency with unparalleled access to nearly all government information, including highly classified and privileged material, with the sole exception being cabinet confidences. The methodology for this deep dive is extensive and multifaceted, incorporating everything from formal requests for documents and written explanations to high-level briefings with senior officials. Furthermore, the inquiry will involve direct interviews, targeted surveys, and unprecedented access to government systems, a clear signal of the probe’s depth. A key element that underscores the thoroughness of this review is the potential for “independent inspections of some technical systems,” indicating a hands-on verification process designed to go beyond written assurances and directly assess the technology in use.
This comprehensive approach is fundamentally about building a resilient governance framework before potential risks or ethical gaps become systemic problems. The rapid evolution of artificial intelligence presents both opportunities and challenges, and NSIRA’s review is structured to address each. By examining current uses, from document translation to sophisticated malware detection, the agency intends to guide future policy development and establish clear guardrails for the adoption of more advanced AI systems. This forward-looking stance is critical for ensuring that Canada’s security and intelligence community can innovate responsibly. The ultimate goal is a durable system of oversight that can adapt alongside the technology itself, one that ensures legal, ethical, and democratic principles are not merely maintained but actively embedded into the design and deployment of any AI tool used for national security purposes. In doing so, the review aims to safeguard public trust in these powerful institutions as they navigate an increasingly complex digital world.
A Wide Net: Who’s Under Scrutiny?
The scope of this landmark review is remarkably broad, extending far beyond the traditional core of Canada’s national security establishment. While the Canadian Security Intelligence Service (CSIS), the Royal Canadian Mounted Police (RCMP), and the Communications Security Establishment (CSE), the nation’s cyberspy agency, are central to the investigation, the notification letter was dispatched to a diverse and high-level group of government bodies and officials. This includes a wide array of cabinet ministers, most notably Prime Minister Mark Carney and those responsible for key portfolios such as Artificial Intelligence, Public Safety, Defence, Foreign Affairs, and Industry. This widespread notification underscores the cross-governmental significance of artificial intelligence, framing it not merely as a tool for intelligence agencies but as a technology with profound implications for the entire machinery of government and for Canadian society as a whole. The inclusion of top political leadership highlights the understanding that accountability for AI use rests at the highest levels.
Significantly, the probe’s reach encompasses agencies not typically associated with frontline intelligence or law enforcement, a decision that reflects a modern, holistic conception of national security. Organizations such as the Canadian Food Inspection Agency, the Canadian Nuclear Safety Commission, and the Public Health Agency of Canada are also included in the review. This expansion acknowledges that in the 21st century, threats to national security can emerge from a wide variety of sectors, including public health crises, supply chain vulnerabilities, and critical infrastructure protection. By examining the potential or current use of AI in these areas, NSIRA is addressing the interconnected nature of modern risks. It signals a sophisticated understanding that protecting a nation requires a coordinated, government-wide strategy where advanced technology is governed by a consistent set of ethical and legal principles, regardless of the specific department or agency deploying it. This inclusive approach aims to ensure no gaps exist in the oversight of powerful AI tools.
Aligning on Responsible AI
A Unified Front on Ethical Adoption
A clear and consistent theme emerging from the government and its security agencies is the universal acknowledgment of the need for a responsible and ethical approach to AI adoption. This consensus is formally anchored by the federal government’s overarching principles guiding the use of artificial intelligence. These foundational guidelines advocate for a high degree of transparency regarding how, why, and when AI systems are used in government operations, ensuring the public is appropriately informed. Moreover, they mandate the proactive assessment and meticulous management of any risks that AI tools might pose to legal rights, privacy, and democratic norms. A crucial third pillar of this framework is the requirement for comprehensive training for all public officials involved with the technology, equipping them with a deep understanding of the legal, ethical, and operational dimensions of its use. This principled foundation sets a high standard for accountability across all departments.
In response to the NSIRA review, the agencies themselves have publicly echoed these commitments, signaling a cooperative and aligned stance. The Royal Canadian Mounted Police, for example, issued a formal statement welcoming the independent review, describing such oversight as “critical to maintaining public confidence and trust” in law enforcement. The RCMP further emphasized that it has already implemented its own internal guidelines to ensure that any use of artificial intelligence is conducted legally, ethically, and responsibly. These internal protocols are specifically designed to address key challenges, focusing on mitigating bias in algorithms, rigorously respecting privacy rights, and establishing clear lines of accountability for any decisions supported by AI systems. This public declaration of support and internal preparedness demonstrates a clear commitment from the security establishment to embrace technological advancement within a framework of robust ethical governance.
Intelligence Agencies’ Current AI Playbook
Canada’s principal intelligence services are not waiting for directives but are already actively implementing AI technologies in a manner they assert is consistent with the established ethical boundaries. The Canadian Security Intelligence Service has reported that it is currently engaged in several AI pilot programs, all of which are being developed and tested in alignment with the government’s guiding principles for responsible use. Meanwhile, the Communications Security Establishment has taken a more formalized step by articulating a comprehensive artificial intelligence strategy. This strategy commits the cyberspy agency to championing responsible AI innovation while simultaneously leveraging the technology to enhance its core intelligence and cybersecurity capabilities. The document details how AI and machine learning will be used to improve the CSE’s ability to analyze vast and complex datasets with far greater speed and precision, ultimately leading to better-informed and more timely decision-making to protect Canadians.
The CSE’s strategy was further reinforced by a clear message from its chief, Caroline Xavier, who affirmed that the agency’s approach would be “thoughtful and rule-bound,” with a commitment to “experiment and scale incrementally.” A critical component of this measured approach is the foundational principle of keeping “highly trained and expert humans in the loop” for all significant operations. The initiation of the NSIRA probe represents a pivotal moment, formally addressing the increasing reliance on these technologies. The review follows a 2024 recommendation from the National Security Transparency Advisory Group, which called for greater transparency. While agencies acknowledged that need, they cited their security mandates as a limit on what they could disclose. NSIRA’s review, with its deep access to classified material, is uniquely positioned to bridge this gap, providing essential accountability without forcing the premature disclosure of sensitive operational details, and creating a framework that balances the immense potential of AI with the imperative to uphold Canadian values and the rule of law.
