Public Relations Review Podcast

PR Professional's Guide to Ethical AI Usage and Client Trust

Peter C Woolfolk, Producer & Host w/Michelle Egan & Cayce Myers Season 5 Episode 146

What do you think of this podcast? I would very much appreciate a review from you!! Thank you!

Embarking on an ethical odyssey, host Peter Woolfolk engages with PRSA board members Michelle Egan and Dr. Cayce Myers to examine the delicate interplay between AI advancements and moral directives in public relations. Prepare to navigate the terrain of AI tools, such as ChatGPT and AI avatars, with a compass set firmly on maintaining transparency and verifying AI-generated content. Their dialogue ventures into the realm of responsibility as we tackle the thorny issues of AI-driven misinformation and the frameworks necessary to empower PR professionals to wield these powerful tools without losing their ethical bearings.

Venturing further, we confront the hidden crevices of AI privacy and security concerns, discussing the nuances of navigating client confidentiality against the backdrop of fierce industry competition. Through our exploration, we illuminate the varying degrees of openness in AI platforms and the potential hazards they present, from the exposure of personal details to the safeguarding of proprietary data. We share resources from prsa.org and advocate for hands-on experience with AI to fortify one's understanding of its capabilities—all while thanking our insightful guests, Michelle Egan and Cayce Myers, for their enlightening contributions to this pivotal discussion.

We proudly announce this podcast is now available on Amazon ALEXA. Simply say: "ALEXA, play Public Relations Review Podcast" to hear the latest episode. To see a list of ALL our episodes, go to our podcast website: www.publicrelationsreviewpodcast.com, or go to Apple Podcasts and search "Public Relations Review Podcast." Thank you for listening. Please subscribe and leave a review.

Support the show

Announcer:

Welcome. This is the Public Relations Review Podcast, a program to discuss the many facets of public relations with seasoned professionals, educators, authors and others. Now here is your host, Peter Woolfolk.

Peter Woolfolk:

Welcome to the Public Relations Review Podcast and to our listeners all across America and around the world. This podcast is now ranked by Apple as being among the top 1% of podcasts worldwide, so thank you to all of our guests and listeners for making this happen. Now, a question: artificial intelligence has now become an integral component of public relations. There's text-to-speech, ChatGPT, Gemini, avatars and video, and those are just a few. Are there rules and regulations we should be aware of when using this technology? Well, my guests today say, under some circumstances, that answer is yes. First there's Michelle Egan, APR, Fellow PRSA, and she joins us from Anchorage, Alaska, and there's Dr. Cayce Myers, APR and JD, and he joins us from Virginia Tech in Blacksburg, Virginia. Welcome to the podcast.

Michelle Egan:

Thank you, so excited to be here. Yes, thank you, Peter.

Peter Woolfolk:

Well, let me just say this now: I certainly use artificial intelligence and certain products here in my podcast. It produces my transcripts for me, it writes blogs, it even gives me five titles that I can use on the podcast, and I can choose to use them or change them or whatever. I've also used some text-to-speech on several episodes. And, yes, I do use the AI avatar videos to promote each individual episode. So, basically, what are the concerns that PRSA has with AI in public relations?

Michelle Egan:

Well, I'm glad you had us on to talk about this issue, Peter. Cayce and I are both members of the board of PRSA, and last year I was chair and Cayce was the liaison to our Board of Ethics and Professional Standards, and that group, BEPS we call it, worked on some guidance for professionals who are using AI. We're excited about it, we love the creativity of it, and we want to make sure that people are paying attention to the ethical guidance that comes from our Code of Ethics. That really touches on things like disclosure about the use of AI, doing your own research, making sure that you avoid any conflicts of interest, that you're verifying what you're using. So all those things come into a guidance document that Cayce worked on and we released last year. That's available to everyone, member or not. So when you're using AI, experimenting with it, employing it in your business, you have some guidance about what to look out for and how to use it in the most effective way.

Michelle Egan:

One of the things that I noticed last year when I was speaking as chair of PRSA is that, early in the year, we would ask, you know, how many of you are using ChatGPT or a similar tool?

Michelle Egan:

And you'd just get two or three real reluctant hands go up in the audience, and then by the end of the year you'd get quite a few. And people would say to me, "It kind of feels like cheating, right?" That's what they would say about using a tool like ChatGPT or any of the others that we talk about, and giving people a framework helps them to use the tools and not feel so much like maybe it's inappropriate.

Michelle Egan:

The other thing that I think is really important, that people could use as a resource, is a recent publication that PRSA did on mis- and disinformation, and that is such a huge issue in our society. In fact, the 2024 World Economic Forum Global Risks Report puts mis- and disinformation at the very top of global risks, above climate change, polarization, all the other things that you typically think of, and that's because, when it's fueled by AI, the propensity for mis- and disinformation to proliferate at scale is really great. So we've got a couple of things out there from PRSA that I think are interesting to members and non-members and also help provide some guidance.

Michelle Egan:

But I'm going to let Cayce talk a little bit more about this, because he's got some real deep expertise in this area.

Peter Woolfolk:

Let me just say this because, in terms of disclosure, I guess maybe we might need some help there. What sort of disclosure are we talking about? So, for instance, I use ChatGPT. I might say something like, give me a brief outline identifying A, B, C and D, and whatever those elements are, and it will provide me that. I will then take a look at it and make any adjustments and modifications that I need to help me be more precise about what it is I want to say. So sometimes I use it as a guideline rather than me trying to think of everything, and sometimes it gives me some ideas I hadn't thought about. So I guess we might need some help on what disclosure is. I mean, disinformation I certainly understand. So that's one thing I would have some questions about. How does one look at the use of it in that fashion?

Cayce Myers:

Well, there's not a law that's going to mandate a certain level of disclosure. Now, that may come, and that may be something that you may see in a kind of work product, particularly around visuals, where there is maybe watermarking that comes on visual AI content.

Cayce Myers:

There are some disclosures that are mandated in the use of AI for communications, for political advertising. We've seen that kind of concern out of disinformation in the 2024 campaign, presidential year. Globally there's a concern around that. So the disclosure there is mandated. But in the day-to-day operations of public relations that is going to be an individualized decision.

Cayce Myers:

Now there are people that will say you should disclose because that's transparent.

Cayce Myers:

There's transparency in disclosure, and the insidious nature of disinformation is that people can't tell what's real and what's fake, and you have to be honest with your audiences. But there are others who will say AI is a tool. I use it as a tool. I don't use it as a substitute for my own work product. I use it as a tool to enhance my work product, to help me complete tasks. So, just like what you were mentioning, menial tasks like creating a check sheet, brainstorming, et cetera, would you necessarily disclose that that's been part of your process? I think that's a very individualized decision that has to be made by practitioners. But it is, increasingly, the number one question that I get: what do we do with AI and disclosure? And I think that ultimately, we as an industry, as a public relations industry, have a lot of power right now because we don't have a legal mandate on disclosure in most circumstances.

Cayce Myers:

We have to make that decision for ourselves as professionals, and I think that there's a lot of things to weigh in that decision.

Peter Woolfolk:

Well, it's interesting.

Michelle Egan:

Cayce, I think you've probably heard this as well. I've talked to people who work in public relations agencies, and one of the things they're doing is putting in their contracts a general disclosure that they may occasionally use AI in working on a product, and that's one way people are addressing it. But Cayce's right: as with any other ethics issue, there's a lot of personal latitude, and what you described, again, is making your work better, or giving you the space to use your brain for more powerful things than creating a checklist or doing a small amount of research, and so it's a little bit different than a misinformation-type campaign.

Peter Woolfolk:

Well, you know, one of the other things that Cayce just mentioned is the little mark that goes on the avatars.

Peter Woolfolk:

I use the free version and that comes with it, so you know, it's only something about a minute, a minute and a half, to say, here's what we're going to be covering in our next episode, that sort of thing. And the little trademark, or whatever it is, is down in the lower corner, and that does not come off. So that's really just advertising the fact that we're going to have this podcast episode and this is what we're going to be talking about. And I can certainly see, as a matter of fact it has been shown on television, where some very prominent people have been the subject of misinformation, because it did look like it was them, but they were saying words that they never uttered. So I can see where that can and will cause a huge, massive amount of problems, because that is misinformation at the highest levels that we do not need to have.

Cayce Myers:

Let me just jump in here real quick about that. The issue really is that we as PR practitioners want to be ethical. We want to do the right thing. The disinformation that's out there, those people will never disclose, if they can get away with it, because they're bad actors. They want to produce content that is meant to deceive. You take, for instance, voice cloning. There are thousands of scams that use voice cloning to get people to send other people money. It sounds like your daughter's calling you. She's been kidnapped, you need to pay a ransom or something. It sounds like her because they only need a few seconds of audio to voice clone. And so those folks aren't going to disclose because they're bad actors.

Cayce Myers:

Now, we in the PR industry aren't in the disinformation business. We're in the transparent communication business, and we want to uphold our professional ethics at the highest level. But it does raise the question of what level of disclosure is required. So, for instance, let's say a lot of folks are using AI to edit, using AI to do what Photoshop would normally do for a picture. Does that need to be disclosed to the public? You could give a general disclosure, but then again, we don't disclose a lot of the tools that we use. For instance, if you use Grammarly or spellcheck, that's not disclosed. If you use a template that's preexisting in Microsoft, you don't necessarily disclose that. So there's a counterpoint of, well, how small does the use of AI have to be before you don't disclose? And I think that's something that's going to be very individualized.

Peter Woolfolk:

I think that the industry doesn't have an answer for that quite yet. Would the response to that, or an answer to that, have to be, you know, how much does it impact someone else making a decision if they know that you did it or that AI had done it? If what they are producing for you is something you have to make a decision on, whether or not to accept it, I would think that letting them know whether they used or did not use AI would be almost imperative.

Cayce Myers:

I think that's a great point. I think that's 100% correct. And, you know, just to give you an example with these deepfakes: it used to be you'd hear the saying, "seeing is believing." Well, now you can't believe what you see, right? You've got to individually check it. So I think, at the point that you're creating different realities for other people and informing their perception of the world, that goes beyond the use of AI for just sort of functionary tasks; you're using AI to actually create dialogue within society, and that may have a huge resonance. So, absolutely.

Michelle Egan:

And our guidance suggests that you are responsible for the information that you disseminate. So you're responsible to validate that it's accurate, to make sure that the sources are checked, that those sources are disclosed wherever possible. So you know, it really is part of the guidance as well to say, at the end of the day, use these tools and you're still responsible for the information that you're sharing.

Peter Woolfolk:

I guess my question from that is that it's more imperative to be forthright about whether you did or did not use AI if it is being used to help someone make a decision, and particularly if they're paying you for it. If someone is making a decision based on what you produce using some form of artificial intelligence, it should be imperative that the use of AI in the development process be revealed. Is that close to what it is that we're trying to get done here?

Michelle Egan:

I think that's an interesting way to frame it. You are getting at the point of what the tool is being used for, what the information is being used for. So I think that would be a good guideline. Cayce?

Cayce Myers:

I think what you're getting at there is whether or not the tool is used in a way that is going to have massive impact on the person receiving the content. And if you're going to receive content and the tool is used in a way that's going to have impact and shape your opinion, then you should disclose that. I would go a step further and say that also, when you are using AI to process information, you have to be very careful that you're not overly reliant on that AI, because AI is a tool, right, just like spellcheck is a tool, just like the Internet is a tool. They have a lot of things that they get right. They have a lot of things they get wrong, and so that gets to the kind of larger question.

Cayce Myers:

A lot of PR practitioners will ask me out on the road, they'll say, well, you know, is this going to take our job? Well, if you're operating your job in a way where AI can just do it, then maybe you're very replaceable, right. But if you're operating your job where you bring a lot of talent, a lot of insight, a lot of knowledge, you're able to strategize, and AI is just part of your toolkit, then I don't think that AI can replace that person, because that person's got value added by what they know and what they can do, because an AI brain and a human brain work totally differently. And so we bring value as an industry when we bring ourselves into that conversation and ourselves into our work product, to ensure that it's going to be something that really is honest, transparent, forthright and also effective.

Peter Woolfolk:

You know, I was just at a financial services... go ahead.

Michelle Egan:

I was just at a financial services conference, and one of the speakers was, of course, speaking about AI. She was a former Google decision architect, and she used this great metaphor: imagine you have a thousand-page book, and you've read the book, so you understand the storyline, all the research that's in it, all the context of what's been written, and then you are provided with a one-page summary. So the AI is creating a one-page summary. That one-page summary cannot capture all of the context, all of the background, all of the information that's in there. Can it be helpful? Absolutely. But the PR practitioner, or whoever the responsible user of AI is, has consumed the book, right, and can convey the context and all of the information that goes into the one-page summary. So I think that's a powerful way to think about our role.

Peter Woolfolk:

You know, the other thing I think about as I listen to that is that we have experience in a lot of different things, and sometimes, as we're putting together a project or trying to resolve some issues, our experience kicks in to say, hey, you know, based on my experience, I think we should do A, B, C and D. Well, a lot of times, I would say, perhaps AI does not have the same experience and it can't make that sort of contribution, based on what information has been put into it to respond with. So I think what you just said earlier about having a thousand-page book and a one-page description helps answer that question.

Cayce Myers:

What I tell a lot of folks when we're talking about AI is that AI is based on an algorithm with data points that are entered. AI is only going to be limited in its response based on those data points and on its algorithm. That's why it's so important to have a good AI platform, because it can be very biased. But as human beings, we have experiences, we have thoughts, we have identity, we have engagement with other people. We may go get a cup of coffee, we may go get drinks after work, we may go chat with somebody in the hallway. We've worked for however many years in our business, and that provides our foundation for our decision-making. We also have a gut check. AI doesn't have that. It only has the algorithm and the data it's going to crawl. So, you know, our intuition, that is how a lot of decisions are made, and studies show that intuition and experience, and being able to make decisions like that very fast, is typically the way to make a right decision.

Cayce Myers:

So you can't discount the human being. That always gets me when folks say, oh, AI will take over. You can't discount the human, because the human being brings so much more to the table.

Peter Woolfolk:

Well, and that's the very point that I was trying to make, you know, based on, as I call it, experience: what has worked and what might not work, because I've actually been through it, so I know the answer to that particular question.

Michelle Egan:

Yeah, I was just going to ask Cayce to share some of the top issues that he hears about, because he's out speaking, and at a university, and very much engaged, you know, with lots of folks who have questions about AI.

Michelle Egan:

I know there are other issues besides the disclosure.

Michelle Egan:

One of them, I'll say from my perspective in my professional practice, is safeguarding confidences. I work for an oil and gas company, so I cannot put my company's information into a tool and ask it to summarize something for me or, you know, create new information, unless I am absolutely sure that that information is not going to be shared beyond the company. And I don't have anything that provides me that assurance now, so I'm not able to use AI in that way. If you think about ChatGPT or one of the other tools, like Microsoft's Copilot, I have to have the assurance that the information I'm sharing is not going to create a threat for the company, like a cybersecurity threat or an operational threat. So I know that's one big issue that I face, and it is addressed in our PRSA guidance: you have a responsibility, whether it's to your employer or your client, to protect their information, and so you have to beware of where you're putting the information and what's being done with it.

Cayce Myers:

So I tell people this: you can think about AI privacy like a door. The door can be shut, it can be in various stages of open, and it can be fully open. And so you have to understand the platform, particularly in generative AI when you're inputting data, and whether that data is going to be absorbed into the platform itself. So, for instance, let's say that I was going to create some sort of payroll structure using AI, and I put everybody's Social Security number in my AI query and had it organize them. The platform could absorb those numbers, absorb that identifying information.

Cayce Myers:

That's one of the reasons why a lot of hospitals and a lot of medical PR people are more reluctant, I think, to use AI, because of HIPAA concerns, and so a lot of folks have turned toward these proprietary platforms, because they want the security of their information not being taken and not having privacy violations. There's also a competitive aspect to it: if I put in a query and it gets me an output based on my query, that output could be available to a competitor if they put in a similar query, and so I kind of lose a competitive edge. So privacy is a big thing within AI. You have to be deliberative. We know it's important, but I think we haven't talked enough in the industry about open versus closed system AI to really understand what it means in terms of safeguarding our clients, safeguarding the people we're communicating with and their information, and certainly the proprietary information of a company if you're in-house. Very important.

Peter Woolfolk:

Well, this has been a very interesting conversation. Is there anything that we've actually missed in this discussion?

Michelle Egan:

It's been a good overview of the issues. I would definitely encourage people to go to prsa.org. We have some very thought-provoking information there about AI. There's a flash page with lots of content and access to webinars and to this guidance, and much of it is available to anyone. So, happy to have the organization be a resource on AI and this information.

Peter Woolfolk:

Well, I'm actually very happy that you said that. That's one of the things I did want to say: that you let us know about the information and materials that are available. I'm sorry, Cayce, go ahead.

Cayce Myers:

I was just going to say to your listeners that AI can be daunting and there's a lot that is going to change.

Cayce Myers:

You know, if we came back a year from now and had this conversation, it would be a different conversation. I mean, if we came back a month from now, it might be a different conversation, because it's a rapidly evolving technology. But for those in PR that are looking to get into this conversation and want to use AI, what they need to do is just try it out, low stakes. Get on a free platform, just see what its capabilities are, and it'll give you a better sense. It's like learning how to drive a car from reading a book versus getting in there, putting the keys in the ignition and getting it on the road. You'll learn a lot more. So I think there are a lot of positives for AI, a lot of positives for the PR industry with it, and I think that we ultimately can do a lot more meaningful and better work with it, and so I welcome it as an opportunity for us.

Peter Woolfolk:

Well, let me say that I welcome both of you for having been guests on our show today, because you've really given me a lot to think about. There are some things that I had not thought about when it comes to the use of AI, and perhaps the same might be true for our audience. So I want to say thank you to both of you, Michelle Egan up there in Anchorage, Alaska, and Cayce Myers in Blacksburg, Virginia, for being guests on our show today. Any closing remarks that maybe you forgot and would like to make now?

Michelle Egan:

Oh, I just want to say thank you, Peter. This has been fantastic, and I'm really glad to be able to engage with your listeners in this way.

Cayce Myers:

Yes, thank you, Peter. Happy to talk, and I really enjoyed our conversation.

Peter Woolfolk:

Well, let me say thank you, because I think that you have brought some information that perhaps a lot of our listeners might not have been aware of. It has enlightened them quite a bit, and I certainly learned a bit myself. So I want to say, as I said once before, thank you so much to Michelle and Cayce for being our guests today. And to my listeners: certainly, if you've enjoyed the show, we'd like to get a review from you. And, of course, let me say that we have a brand new spiffy newsletter. You can get directly to it at www.publicrelationsreviewpodcast.com. And, as always, let your friends know that you were listening, and please join us for the next edition of the Public Relations Review Podcast.

Announcer:

This podcast is produced by Communication Strategies, an award-winning public relations and public affairs firm headquartered in Nashville, Tennessee. Thank you for joining us.
