Ethical AI in PR Requires a Human-Centered Approach

The accelerating integration of artificial intelligence into the public relations profession has created a critical imbalance in which technological advancement has significantly outpaced the development of essential ethical governance. This widening gap presents a profound challenge to the very foundation of the profession: trust. As AI evolves from a supplementary tool into an indispensable component of communication strategies, practitioners face an urgent imperative: to move beyond the mere endorsement of ethical principles and do the deeper, more challenging work of internalizing those values in daily practice. To navigate this new landscape successfully, the industry must adopt a fundamentally human-centered approach, ensuring that technology serves as an extension of human integrity rather than a substitute for it, thereby preserving the ethical soul of public relations.

The Widening Gap Between Innovation and Governance

The Current State of AI in PR

The pervasiveness of artificial intelligence within the communications industry is no longer a future projection but a present-day reality that demands immediate attention. Current data reveals a startling trend: an overwhelming 91 percent of organizations worldwide now permit the use of AI in their communication and public relations activities. This near-universal adoption underscores the technology's perceived value in enhancing efficiency, creativity, and outreach. However, the rush toward innovation is shadowed by a concerning lack of oversight: the same data indicates that only 39.4 percent of these organizations have implemented any form of responsible AI framework to guide its use. This significant disparity between rapid technological integration and lagging ethical governance has ignited a critical and necessary conversation across the profession, forcing practitioners to confront a fundamental question: how can the immense power of AI be leveraged without eroding credibility, damaging vital stakeholder relationships, and compromising the ethical integrity that underpins the entire craft of public relations?

This chasm is not merely a statistical anomaly but a reflection of a deeper systemic challenge facing the industry. The integration of AI has moved beyond simple task automation to become an integral component of the profession’s DNA, influencing everything from content creation and media monitoring to sentiment analysis and crisis communication. This deep embedding of technology has introduced a host of complex ethical dilemmas that traditional codes of conduct were not designed to address. Issues such as algorithmic bias, the spread of sophisticated disinformation, data privacy, and intellectual property rights now require a more robust and nuanced governance structure. The industry’s response has, by necessity, been largely reactive, with ethical frameworks often developed in response to emerging problems rather than in anticipation of them. This reactive posture creates a continuous cycle of risk, where the potential for significant reputational damage remains high. The urgency, therefore, lies not just in catching up but in establishing proactive, forward-thinking governance that can guide the responsible evolution of AI in PR for years to come.

The Rise of Ethical Frameworks

In a direct response to this growing governance deficit, leading industry bodies have begun to construct the foundational pillars of ethical AI usage. A significant milestone was reached in October 2023 when the International Public Relations Association (IPRA) introduced its Five AI and PR Guidelines. These were not intended as rigid technical manuals but rather as essential “ethical anchors” designed to ground professional conduct in a rapidly changing technological sea. The guidelines championed a set of core values critical to maintaining public trust, emphasizing the importance of honesty about AI utilization, ensuring transparency through clear disclosure and attribution of AI-generated content, and mandating the stringent protection of both confidential and copyrighted information. Crucially, they also highlighted the absolute necessity of rigorous human verification to mitigate inherent biases and potential errors in algorithmic outputs, alongside a proactive commitment to preventing and correcting misinformation. This initial effort provided a vital baseline for responsible practice.
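To make this concrete, the sketch below shows one way a team might encode guidelines of this kind as an internal pre-publication record. It is a minimal, hypothetical illustration in Python, not an IPRA artifact: the class, field names, and checks (AIDisclosureRecord, ready_for_release, and so on) are assumptions introduced here purely to show how "ethical anchors" can become routine checks in a content workflow.

```python
# Hypothetical sketch: encoding guideline-style checks as a pre-publication
# record. Field names and logic are illustrative assumptions, not IPRA rules.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIDisclosureRecord:
    """Internal record attached to a piece of content before it is released."""
    content_id: str
    ai_tools_used: List[str] = field(default_factory=list)  # honesty about AI utilization
    ai_disclosure_text: str = ""               # transparency: disclosure and attribution
    confidential_data_excluded: bool = False   # protection of confidential/copyrighted material
    human_verified_by: str = ""                # rigorous human verification of outputs
    misinformation_check_done: bool = False    # proactive prevention of misinformation

    def ready_for_release(self) -> bool:
        """True only when every check has been explicitly satisfied."""
        ai_was_used = bool(self.ai_tools_used)
        disclosed = (not ai_was_used) or bool(self.ai_disclosure_text)
        return (
            disclosed
            and self.confidential_data_excluded
            and bool(self.human_verified_by)
            and self.misinformation_check_done
        )


# Example: a draft produced with an AI writing assistant.
record = AIDisclosureRecord(
    content_id="draft-001",
    ai_tools_used=["generative text assistant"],
    ai_disclosure_text="Initial draft produced with AI assistance; reviewed and edited by staff.",
    confidential_data_excluded=True,
    human_verified_by="senior editor",
    misinformation_check_done=True,
)
print(record.ready_for_release())  # prints True only when all checks pass
```

A record like this would travel with each piece of content, so that disclosure, verification, and misinformation checks are confirmed by a person before release rather than assumed.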

Building upon this foundational work, the global discourse on AI ethics was significantly elevated in May 2025 at the Global Alliance’s Venice Symposium. This pivotal event marked a clear shift in the industry’s focus, moving the conversation beyond the practical application of AI tools to the more complex and pressing issue of its responsible governance. The symposium culminated in the creation of the Seven Responsible AI Guiding Principles, a comprehensive framework that was formally ratified through the Venice Pledge. The co-signing of this pledge by 24 influential member organizations, including the Nigerian Institute of Public Relations (NIPR), signaled a powerful, unified commitment from the global PR community. This collective action represented a maturation of the industry’s approach, transitioning from disparate, organization-specific guidelines to a harmonized, global standard for ethical AI deployment. It established a shared vocabulary and a common set of expectations for practitioners everywhere, setting a new benchmark for professional responsibility in the age of artificial intelligence.

A Global Problem with Local Imperatives

An African Perspective on AI Ethics

While these emerging global principles provide a universal framework, they resonate with a particular and profound urgency when viewed from an African perspective. The “Ethics First” mandate, for instance, is not an abstract ideal but a critical safeguard in a context where poorly designed or improperly trained AI systems could inadvertently reinforce colonial-era biases, perpetuate harmful stereotypes, or distort authentic African narratives. Such outcomes would not only be ethically unacceptable but would also corrode trust from within the very communities the technology is meant to serve. Consequently, the imperative to ensure that integrity always supersedes the rush for innovation becomes paramount. Similarly, the principle of “Human-Led Governance” is vital for addressing the unique challenges of privacy, bias, and disinformation on the continent. The poignant comparison of algorithmic transparency to the accountability of a “village elder’s counsel” emphasizes a deep-seated cultural value for collective responsibility, wisdom, and respect for local nuances—qualities that automated systems cannot replicate but must be governed by.

This localized lens sharpens the focus on the immense responsibility borne by individual practitioners and their organizations. In Africa’s diverse and often high-stakes media ecosystems, the potential for misinformation to incite social and political instability is a constant and serious threat. This reality transforms the principle of “Personal and Organizational Responsibility” from a professional best practice into a critical civic duty. Diligent fact-checking, a commitment to continuous learning, and unwavering vigilance are not optional extras but essential components of ethical practice. The stakes are simply too high for complacency. This context demands that PR professionals operating in Africa become exceptionally adept at scrutinizing AI-generated outputs, understanding their potential for misuse, and acting decisively to uphold the integrity of the information landscape. The responsibility is not merely to avoid causing harm but to actively contribute to a more informed and stable public discourse, a task that requires both technological literacy and deep cultural understanding.

The Call for Ubuntu in a Digital Age

The principle of transparency and openness in AI communication finds a powerful parallel within Africa’s rich oral storytelling traditions, particularly in the concept of the “griot’s duty to truth.” For centuries, the griot served as a historian, storyteller, and trusted custodian of a community’s heritage, with their credibility resting entirely on their commitment to truthfulness. In an era where deepfakes and synthetic media threaten to erode the very notion of objective reality, this ancient duty takes on a new, modern relevance. The non-negotiable disclosure of AI involvement in creating or disseminating information is essential to maintaining public trust and honoring this legacy. Furthermore, professional associations have a foundational role to play in leading structured education and upskilling initiatives. These programs must do more than just impart technical skills; they must blend global technological advancements with local wisdom and a strong ethical grounding, especially for Africa’s large and dynamic youth population, preparing them to be responsible digital citizens.

This vision extends beyond internal capacity-building to the global stage, where it is imperative that African communication professionals actively participate in and shape international policy forums on AI governance. Their role must transform from that of passive adaptors of global rules, which may not account for local realities, to influential architects of equitable and culturally sensitive frameworks. Finally, the application of AI must be guided by the spirit of ubuntu—a profound philosophy emphasizing interconnectedness, shared progress, and collective well-being. This human-centered application directs AI toward addressing the continent’s most pressing societal challenges, such as creating employment opportunities, reducing health inequities, and building climate resilience. By embedding the principles of ubuntu into technological deployment, AI can become a powerful tool for the common good, ensuring that innovation is always tethered to the advancement of shared humanity and a more just and prosperous future for all.

Bridging the Divide with a Human-Centered Model

From Principles to Practice

Despite the comprehensive nature of these emerging principles and the widespread ratification of pledges, a fundamental truth remains: principles without practice are powerless. The act of endorsing ethical frameworks or signing international accords, while important as a declaration of intent, is insufficient on its own. These high-level agreements risk becoming an "echo without substance" if they are not deeply integrated into the daily, tangible applications and workflows of every public relations professional. To borrow the metaphor of the baobab tree: one cannot benefit from its life-sustaining shade by merely admiring its grandeur from a distance; one must engage with it directly. Similarly, responsible AI requires a definitive transition from boardroom endorsements to the front lines of professional practice. It is in this crucial gap between declaration and consistent, daily application that the greatest risks to the profession's integrity and credibility lie, and it is this gap that must be urgently and effectively bridged.

The urgency of closing this gap is starkly illustrated by recent industry research. A 2025 survey conducted by PRWeek and Boston University revealed a troubling paradox: while 71 percent of professionals reported using AI to drive innovation and efficiency, significant ethical lapses persisted in 55 percent of firms that lacked formal governance policies. This data provides empirical evidence that good intentions are not a substitute for structured, actionable frameworks. It is in response to this documented need that a new model for ethical AI application has been proposed, one grounded in the concept of “AI as augmented intention.” This concept recognizes that AI systems are not neutral; they are powerful amplifiers that absorb our communication patterns, magnify our inherent cognitive biases, and reflect our ethical blind spots back at us with unprecedented scale and speed. This understanding makes it clear that the solution is not to simply create more rules but to re-center the entire process of AI application in the human essence, ensuring that technology remains a tool guided by conscious human intent.

The 3H Model: Head and Heart

To effectively translate principle into practice, the 3H Model offers a practical framework designed to center AI application on human faculties. The first pillar, The Head (Mind before Machine), firmly asserts the primacy of human intelligence in the communication process. This principle goes beyond simple oversight to mandate that core strategic thinking, critical analysis, and the crucial assignment of meaning must always precede any algorithmic process. In this model, AI serves as a powerful tool for initial drafting, data analysis, and pattern recognition, but the ultimate strategic decisions—the why behind a campaign, the nuance of its messaging, and its potential societal impact—must remain firmly within the human domain. As the neuroscientist Antonio Damasio articulated, “We are not thinking machines that feel; we are feeling machines that think.” This insight underscores the irreplaceable role of human cognition, informed by experience, intuition, and contextual understanding, in navigating the complexities of public relations.

The second pillar of the framework is The Heart (Soul within the System), which represents the essential ethical and emotional core that technology inherently lacks. While an artificial intelligence system is capable of processing vast quantities of data with incredible speed, it is the human professional who processes dignity, understands cultural context, and operates with empathy. The Heart embodies the non-negotiable necessity of integrating cultural sensitivity, genuine empathy, and unwavering transparency into every AI-driven initiative. It acts as a moral compass, ensuring that the pursuit of efficiency and innovation does not come at the expense of human values. Innovation that is devoid of this deep cultural and ethical awareness is not genuine progress; it is a form of "arrogance" that can lead to significant brand damage and a breakdown of public trust. This pillar ensures that communication remains authentic, respectful, and resonant on a fundamentally human level.

The 3H Model: The Hand

The third and final pillar of the framework, The Hand (Human in the Loop), emphasizes the critical importance of execution with accountability. This principle mandates that a human professional must remain actively and meaningfully involved throughout the entire process, from initial concept to final implementation. This is not a passive role of mere supervision but an active engagement in oversight, co-creation, and responsible deployment of AI-powered tools. The Hand ensures that there is always a clear line of human accountability for the outcomes of any communication effort. The infamous Facebook–Cambridge Analytica scandal serves as a prime and cautionary example of this principle’s importance. That crisis was not merely a technological failure; at its core, it was a profound human failure of governance, ethics, and accountability. This historical precedent demonstrates that technology, no matter how advanced, cannot absolve professionals of their ethical responsibilities.

By operationalizing “The Hand,” practitioners transform from passive users of a technology into active, responsible agents of its application. This pillar ensures that AI-generated outputs are rigorously vetted, fact-checked, and aligned with both strategic goals and ethical standards before they are ever released to the public. It reinforces the idea that AI should be viewed as a sophisticated assistant that supports and augments human judgment but never becomes a substitute for it. The professional’s expertise, experience, and ethical discernment remain the final and most important arbiters of quality and appropriateness. In this way, the 3H Model ensures a balanced and responsible partnership between human and machine, where technology amplifies human capabilities while human wisdom and ethics guide its power. Ultimately, this pillar is about owning the final outcome, regardless of the tools that were used in its creation, and upholding the professional’s ultimate duty to the public good.
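As an illustration of what keeping a human in the loop can mean in day-to-day tooling, the sketch below gates publication of AI-assisted drafts behind a named human reviewer. It is a minimal, assumption-laden example: Draft, review_and_publish, and example_reviewer are invented for this sketch and do not refer to any real PR platform or API.

```python
# Minimal "human in the loop" sketch: no AI-assisted draft is published
# without a named human sign-off. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Draft:
    text: str
    ai_assisted: bool
    approved_by: Optional[str] = None  # set only by a named human reviewer


def publish(draft: Draft) -> None:
    # Stand-in for whatever distribution step a real workflow would use.
    accountable = draft.approved_by or "author"
    print(f"Published (accountable human: {accountable}): {draft.text[:60]}")


def review_and_publish(draft: Draft, human_review: Callable[[Draft], Optional[str]]) -> bool:
    """Route every AI-assisted draft through a human reviewer before release."""
    if draft.ai_assisted and draft.approved_by is None:
        # The reviewer returns their name to approve (possibly after edits),
        # or None to block publication outright.
        draft.approved_by = human_review(draft)
        if draft.approved_by is None:
            print("Blocked: no human sign-off recorded.")
            return False
    publish(draft)
    return True


def example_reviewer(draft: Draft) -> Optional[str]:
    # Placeholder for fact-checking and ethical review; blocks drafts that
    # are still flagged as unverified.
    return None if "unverified" in draft.text else "A. Practitioner"


review_and_publish(Draft(text="Quarterly results summary ...", ai_assisted=True), example_reviewer)
```

The point is not the particular code but the shape of the process: the approval field is filled only by a person, and the gate refuses to release anything that lacks that line of accountability.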

The Path Forward: Human Intentionality in an Automated World

The journey toward ethical AI in public relations began with a series of necessary but ultimately insufficient actions. The signing of pledges and the ratification of principles were crucial first steps that signaled a collective awareness of the challenges ahead. For these declarations to be more than hollow rhetoric, however, a practical and accessible framework is needed to transform high-level ideals into consistent, everyday action. The 3H Model, in which the Head plans with strategic foresight, the Heart guides with ethical empathy, and the Hand executes with unwavering accountability, provides that essential bridge from principle to practice. This human-centered approach ensures that AI serves as a powerful extension of human intention rather than becoming a detached and unaccountable substitute for human judgment. The future of public relations will not be written by algorithms, but by the thoughtful, intentional, and ethically grounded professionals who learn to wield them with wisdom and care.
