The International Communications Consultancy Organisation (ICCO) ratified the ICCO Warsaw Principles for the ethical use of AI in public relations during the 2023 ICCO Global Summit in Warsaw.
The ICCO Warsaw Principles aren’t a detailed ‘how to’ of what needs to be included in an AI policy, but codify the key principles that public relations and communications professionals need to be aware of when creating their own policies.
The ICCO Warsaw Principles declaration also says: “AI should complement, not replace, the invaluable expertise, judgment and creativity that PR professionals bring. AI’s implications will keep evolving; thus, adaptability and vigilance are paramount.” This reinforces my point that AI isn’t going to replace PR professionals, but it is going to replace PR professionals who don’t use AI.
The ICCO Warsaw Principles were launched in a panel session with ICCO ethics panel chair Christina Forsgård, former ICCO president Maxim Behar and Mary Beth West. We’ll be announcing some AI and PR ethics news from Christina, Maxim and me within the next few weeks.
ICCO Warsaw Principles
The ratified principles underscore the critical importance of:
- Transparency, disclosure, and authenticity: Mandating clear disclosure when generative AI is employed, especially when crafting reality-like content.
- Accuracy, fact-checking, and combatting disinformation: Highlighting the need for rigorous fact-checking, given AI’s potential for disseminating misinformation and producing disinformation.
- Privacy, data protection, and responsible sharing: Prioritising data protection, compliance, and responsible content dissemination.
- Bias detection, mitigation, and inclusivity: Advocating for the detection and correction of biases in AI-driven content and the promotion of inclusivity.
- Intellectual property, copyright compliance, and media literacy: Stressing respect for intellectual property and copyright laws.
- Human oversight, intervention, and collaboration: Reinforcing the necessity of human oversight in AI-powered processes.
- Contextual understanding, adaptation, and personalisation: Encouraging tailored content approaches for different audiences and channels.
- Responsible automation and efficiency: Championing AI for efficiency without compromising on ethical standards.
- Continuous monitoring, evaluation, and feedback: Advancing continuous assessment and stakeholder engagement for optimal AI use.
- Ethical professional development, education, and AI advocacy: Promoting continuous learning, ethical AI advocacy, and best practice sharing.
As with AMEC’s Barcelona Principles for the measurement and evaluation of communications, all of the ICCO Warsaw Principles for the responsible use of AI in public relations need to be analysed so that companies and organisations can develop their own policies that meet them.
All of the AI ethics policies, frameworks, guidelines and manifestos I’ve analysed include ‘transparency’, which is absolutely right. But we also need to unpick what transparency means. Does it mean disclosing an AI grammar checker or writing aid? Does it mean disclosing if generative AI was used to help generate ideas, concepts and questions? How much editing and how many changes need to happen before something is ‘created’ by a human rather than by generative AI?
All of the policies and frameworks also include warnings about the risk of bias. The data sets that large language models (LLMs) are trained on are by definition biased because humans are. But you, the human creating and using the prompt, are also biased, even if it’s unconscious bias. Is the AI worse than you or better? Can we use AI to actually help identify and eliminate unconscious bias?
It’s a question I posed at UK Music’s launch of its music manifesto, which identifies what the music industry trade body wants the government to do to support the music industry. It includes a call for transparency and clear labelling of music created by generative AI. But if a musician uses generative AI to quickly test an idea or concept and then goes away to create music based on the concept, does that need to be disclosed? If a singer-songwriter writes a song and plays the guitar, but uses AI to create the accompanying drums, does that need to be disclosed?
Copyright is an ethical, moral and legal issue. But the three don’t always agree. It’s never going to be as simple as saying that AI consumes copyrighted content and is therefore bad for copyright owners. You’re doing that right now. Every human learns from copyrighted content and it influences what they create. Where’s the dividing line between you doing it and a machine doing it?
And what about the content that AI generates? In the USA copyright law says it isn’t protected by copyright. In the UK and EU copyright law says it is protected. This returns us to the initial transparency question: how much does AI-generated content need to be changed before it becomes the creation of a human?
There aren’t easy or even right/wrong answers to any of these questions. That’s why the ICCO Warsaw Principles are an excellent building block, alongside other tools and frameworks we use, to help companies and organisations create their own policies that work for their unique circumstances.
Christina Forsgård, founder of Netprofile Finland and ICCO’s ethics chair, led the development of the ICCO Warsaw Principles.
“As we delve deeper into the age of AI, the urgency to uphold trust in our profession intensifies, particularly considering deep fakes, conspiracy theories and hybrid threats against democracies. Our unwavering commitment to the ethical foundation of public relations is paramount. The Warsaw Principles are more than just guidelines; they serve as a beacon for communications professionals everywhere. Equipped with AI, we have the responsibility to prioritize science-based facts in our communications, ensuring that every message is transparent and trustworthy.”
Christina Forsgård, ICCO ethics chair
Let us help you create an AI in PR policy and strategy
You can learn more about the ethics and use of AI in public relations and communications in the CIPR Humans Needed More Than Ever report and the Purposeful Relations Global CommTech Report.
The Purposeful Relations team is available to provide consultancy and advice on implementing AI for PR and communications to take advantage of the opportunities and reduce the risks. I also have some slots left this year to speak at conferences or run masterclasses and training for your team on AI in corporate communications and public relations.