With the next territory election not due until 2028, the Australian Capital Territory government is navigating the treacherous waters of artificial intelligence in political campaigning by adopting a cautious “watching brief” rather than rushing to legislate. This deliberative stance places the ACT at a critical crossroads: it must balance the widely acknowledged threat of AI-driven misinformation, such as deepfakes, against the formidable challenge of crafting laws that can keep pace with a rapidly evolving technology. As stakeholders across the political and academic spectrum weigh in, the central debate turns not on whether a threat exists, but on how to counter it effectively without stifling legitimate technological innovation or political expression. The government’s decision to observe developments elsewhere before committing to a legislative path underscores the complexity and high stakes of regulating the intersection of AI and democracy, a choice that could set a precedent for how other jurisdictions safeguard the digital integrity of future elections.
The Dilemma of Rapid Technological Advancement
A broad consensus has formed among key stakeholders that the proliferation of advanced artificial intelligence poses a significant and immediate risk to the integrity of the electoral process. The ACT Electoral Commission, along with expert witnesses contributing to a government inquiry, has voiced strong concerns about the potential for deceptive deepfakes to sway public opinion irrevocably. The primary fear is the deployment of highly realistic, fabricated content depicting candidates in a false light close to election day. Such misinformation could spread rapidly across social media platforms, inflicting political damage that cannot be adequately refuted or corrected before voters head to the polls. The Commission has previously noted its own limitations, highlighting a lack of explicit legal authority to intervene or act on such concerns, leaving a regulatory vacuum in which malicious actors could operate with impunity. This shared understanding of the danger has created a sense of urgency, yet it has not produced a clear path forward on how best to legislate against it.
In response to these concerns, the ACT government has articulated a position of calculated hesitation, acknowledging the gravity of the threat while recognizing the nuanced nature of the technology. Attorney-General Tara Cheyne detailed the government’s perspective, emphasizing that while AI-powered misinformation poses a serious risk, the technology also holds potential as a beneficial tool for political engagement and communication. This duality underpins the reluctance to enact swift, restrictive laws. The government’s wait-and-see approach is rooted in the rapid and unpredictable evolution of AI itself: officials are wary of writing legislation targeted at current forms of generative AI, only to see it rendered obsolete and ineffective as newer, more sophisticated technologies emerge. The ACT therefore plans to closely monitor how AI is managed in other elections, both domestically and internationally, learning from the successes and failures of other jurisdictions before drafting its own regulatory framework ahead of the 2028 election cycle.
Exploring Regulatory Pathways and Pitfalls
The conversation around regulation has been complicated by compelling arguments against a straightforward prohibition of generative AI in the political sphere. ANU student researcher Ethan Zhu, for instance, cautioned the inquiry against an outright ban, contending that such a move would be overly broad and potentially detrimental to democratic discourse. He argued that generative AI has legitimate and even beneficial applications, from creating engaging campaign materials to enhancing voter outreach. More importantly, a blanket prohibition would also sweep up established forms of political expression, such as parody and satire, which play a vital role in holding public figures accountable and fostering political critique. This perspective introduces a crucial layer of complexity: any effective legislation must be surgical in its approach, targeting malicious deception without stifling creativity or legitimate commentary. The challenge lies in drawing a clear line between harmful misinformation and protected political speech.
In the search for a viable solution, contrasting legislative models from other jurisdictions offer potential roadmaps. The South Australian government, for example, has taken a decidedly proactive stance, moving beyond observation to implementation. It recently passed a law that explicitly bans unauthorized deepfakes in political advertisements, creating clear legal boundaries for campaign conduct. The legislation also mandates that any political ad using AI be clearly and conspicuously labeled as such, ensuring transparency for voters, and it gives these rules teeth with significant financial penalties for violations. This stands in stark contrast to the ACT’s more cautious “watching brief”: South Australia has chosen to get ahead of the problem by establishing a clear regulatory framework well before its next election, prioritizing immediate safeguards over the flexibility that comes with waiting for the technological landscape to settle.
A Future-Proof Framework Based on Intent
An alternative, principles-based solution put to the inquiry aims to transcend the limitations of technology-specific rules. ANU researcher Mark Fletcher suggested that legislating against particular technologies like AI was akin to “playing whack-a-mole,” with new threats continually emerging as soon as old ones were addressed. Instead, he advocated a framework that targets the underlying malicious intent. By focusing on the “fundamental mischief” of deception in a political context, such a law would remain relevant and effective regardless of the specific tool used to create the misinformation, covering everything from basic image manipulation in Photoshop to today’s sophisticated deepfakes and the as-yet-unimagined AI advances of the future. This focus on intent rather than method offers a path toward durable, future-proof legislation. As the inquiry moves toward its conclusion, with a final reporting date yet to be set, Fletcher’s proposal remains a key consideration in crafting a lasting defense for electoral integrity.
