
The Latest Developments in AI Ethics

While the lion’s share of ethics attention has been focused on the top-rated soap opera “Open AI,” there have been some other noteworthy developments over the past few weeks when it comes to AI and ethics.

At the end of November, the PRSA and its Board of Ethics and Professional Standards released their guidelines on AI ethics: Promise & Pitfalls: The Ethical Use of AI. It is definitely worth a read and takes a different approach from other AI ethics guidelines, such as the PR Council's Ethics Guidelines on Generative AI Tools (note: I co-led the development of that code) and the CIPR and CPRS Ethics Guide to Artificial Intelligence in PR. It provides a conversational and easily accessible guide that looks at the pros and cons of common uses of generative AI by PR pros.

While I disagree with some of the wording and examples, it hits all the key issues, and it challenges professionals to bring a critical lens to all their activities.

The guidelines state, "As AI technologies continue to proliferate, it is incumbent upon PR practitioners to know how and when to ethically use them." At C+C, we couldn't agree more.

The guidelines hit on many important themes – transparency, disclosure, accuracy and bias. Following are four key takeaways:

1. Trust but verify – The section stating, "As AI becomes more prevalent in financial analysis, professionals in investor relations or corporate communications will find it more challenging to misrepresent or fabricate results. However, it's essential to note that not utilizing AI's insights doesn't inherently imply deception," confuses me a bit. We should never have been misrepresenting information, and I don't know many IR or corporate comms folks who would ever accept fabricated results.

Let's make it more relevant. Anyone who knows me knows I love the Advanced Data Analysis feature of ChatGPT-4. It is as big a game changer as ChatGPT itself. But keep one thing in mind – AI can make errors with the data you feed it, just like a human – so always double-check the analysis and don't hesitate to ask any generative AI solution to defend its conclusions.

2. Five questions to ask for an effective AI Cost-Benefit Analysis – I was excited about the cost-benefit analysis section of the guidelines, but there wasn't enough actual cost-benefit analysis. (As the "PR pros need to stop saying 'I hate math'" person, I want to see numbers in any CBA.) What C+C counsels our team and clients to do is quantify the benefits AI can bring when applied to any activity or process. Always ask yourself five questions:

  1. How much time will it free up?
  2. How much time will you need to spend in review?
  3. What are the startup costs? Training costs? Ongoing costs?
  4. What are the risks and exposure?
  5. Can you quantify an expected return beyond time?

By modeling this, you can determine whether a specific use of AI is right for your organization at this time.
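The five questions above lend themselves to a simple spreadsheet-style calculation. Here is a minimal sketch of such a model; every figure and parameter name in it is a hypothetical illustration, not a number from the guidelines or from C+C:

```python
# Minimal sketch of an AI cost-benefit model built on the five questions.
# All figures below are hypothetical placeholders for illustration.

def ai_cost_benefit(hours_saved_per_month, review_hours_per_month,
                    hourly_rate, startup_cost, monthly_cost,
                    expected_monthly_risk_cost=0.0, other_monthly_return=0.0,
                    months=12):
    """Return the estimated net benefit over `months` for a proposed AI use."""
    net_hours = hours_saved_per_month - review_hours_per_month   # Q1 minus Q2
    time_value = net_hours * hourly_rate * months
    costs = startup_cost + monthly_cost * months                 # Q3
    risk = expected_monthly_risk_cost * months                   # Q4
    extra = other_monthly_return * months                        # Q5
    return time_value + extra - costs - risk

# Example: 20 hours saved and 5 hours of review per month at $100/hour,
# $2,000 startup, $50/month tooling, $100/month expected risk exposure.
net = ai_cost_benefit(20, 5, 100, 2000, 50, 100)
print(round(net, 2))  # → 14200.0
```

If the modeled net benefit is negative, or positive only under optimistic assumptions, that is your answer for now; revisit the model as the tools and your team's skills change.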

3. Act, don’t just watch – I love the reminder that “We must hold to our core value of honesty and be willing to identify and openly admonish those who pretend to be something they are not.” There are no communication spectators when it comes to AI ethics – we need to understand how our companies are using it, and question how others are using the technology.

4. Train, Train, Train – The guidelines call for organizations to “Educate employees to critically think about ethical challenges and consistently apply ethical guidelines when working with AI.”

Let me add to that. Everyone in your organization needs regular AI training. This is not a one-and-done situation. If you do AI ethics training once, it is the same as going to the gym once: you will feel sore and not have accomplished all that much.

Beyond the Guidelines – A Bonus ChatGPT Tip: While I encourage all communication professionals to know their code of ethics intimately, there is a prompt you can give ChatGPT that can help remind you when you are straying into ethical gray areas:

“From now on, whenever I provide a prompt, along with the answer, please provide feedback on how my request or actions could potentially violate PRSA’s Code of Ethics.”

This is never a substitute for knowing and applying the Code of Ethics yourself – but it might just uncover something you hadn’t considered.
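If you work with a chat model through an API rather than the chat interface, the same standing instruction can be attached as a system message so every request carries it. Here is a minimal sketch; the helper name is my own invention, and the message list would be passed to whatever chat-completion call your provider offers:

```python
# Sketch: carrying the ethics-check instruction as a standing system message.
# The helper name `with_ethics_check` is hypothetical.

ETHICS_CHECK = (
    "Whenever I provide a prompt, along with the answer, please provide "
    "feedback on how my request or actions could potentially violate "
    "PRSA's Code of Ethics."
)

def with_ethics_check(user_prompt):
    """Build a chat message list that includes the standing ethics reminder."""
    return [
        {"role": "system", "content": ETHICS_CHECK},
        {"role": "user", "content": user_prompt},
    ]

messages = with_ethics_check("Draft a press release about our product recall.")
print(messages[0]["role"])  # system
```

The design point is the same as the prompt above: the reminder rides along automatically instead of depending on you remembering to ask.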

Let’s see what is next.

Full disclosure: I was sent a draft of the "Promise & Pitfalls" document earlier this year and provided feedback and poked holes, but was not involved in its development beyond that.