27 September 2023

What are the dangers of AI tools like ChatGPT?


We need to think about the human aspect of using AI in our everyday lives and how it will influence the ways in which we perceive and interact with one another, says communication scholar Jeff Hancock, writes Melissa De Witte*.


Since its public launch in November 2022, ChatGPT has captured the world’s attention, showing millions of users around the globe the extraordinary potential of artificial intelligence as it churns out human-sounding answers to requests ranging from the practical to the surreal.

It has drafted cover letters, composed lines of poetry, pretended to be William Shakespeare, crafted messages for dating app users to woo matches, and even written news articles, all with varying results.

Emerging out of these promising applications are ethical dilemmas as well.

In a world increasingly dominated by AI-powered tools that can mimic human natural language abilities, what does it mean to be truthful and authentic? Stanford communication scholar Jeff Hancock has been tackling this issue and the impact of AI on interpersonal relationships in his research.

Hancock argues that the Turing test era is over: Bots now sound so real that it has become impossible for people to distinguish between humans and machines in conversations, which poses huge risks of manipulation and deception at mass scale.

How these tools can be used for good rather than harm is a question that worries Hancock and others.

While he sees the potential for AI to help people do their work more effectively, Hancock also sees pitfalls.

Ultimately, he says, our challenge will be to develop AI that supports human goals and to educate people on how best to use these new technologies effectively and ethically.

For several years now, Hancock has been examining how AI-mediated communication is transforming – and potentially, undermining – interpersonal relationships.

This interview has been edited for length and clarity.

How do you see AI-mediated tools like ChatGPT fitting into people’s lives?

Much like the calculator didn’t replace the need to learn math, or for people to work the till or to be accountants, I think we will find ways of using AI-mediated communication as a tool.

I think the more we think of it as an assistant or a tool that is incredibly powerful, the more we can envision how it will be useful.

But it’s important to note that these systems are not ready to plug and play right off the shelf.

They’re not there yet, and neither are we humans.

ChatGPT is being used by millions of people, many of whom don’t have any training or education about when it is ethical to use these systems or how to ensure that they are not causing harm.

Technically, systems like ChatGPT are genuinely far better than things that we’ve had in the past, but there are still a lot of issues.

They do not currently provide accurate information at very high rates.

The best ones produce useful information that’s accurate 50 to 70 per cent of the time, though that will likely change with new versions like the imminent GPT-4.

They can also produce falsehoods or make stuff up – what we call “hallucinating”.

It can take a lot of work to actually get them to produce something good.

Prompting is difficult, and different prompts produce very different responses.

What have you learned about AI and communication that might surprise people?

My lab has been really interested in the human questions around AI.

A lot of people say, “I don’t know about these new bots – I’ve never used one.”

But most people have experienced some kind of AI communication, the most common being the simple smart replies in email messages that provide options such as “that sounds good,” “that’s great,” or “sorry, I can’t.”

What we found is that even if you don’t use those AI-generated responses, they influence how you think.

Those three options prime you.

When you write an email back, you tend to write a shorter email.

You tend to write a simpler email.

And it isn’t just linguistically less complex: You have more positive affect, which means you use more positive emotion terms and fewer negative ones.

That’s because that’s how smart replies are built: They’re very short, they’re very simple, and they’re overly positive.

You don’t even have to be using these systems to actually be affected by them.

Are there any areas in particular where you see an opportunity for ChatGPT to help people be better at their job or do it more effectively?

I think there’s a whole new world of communication that this will usher in.

It really feels like an inflection point.

For example, there is a tremendous amount of potential in the initial levels of therapy and coaching.

Say you are procrastinating at work, or you need help negotiating, or you are having some anxiety in your job.

Meeting with an actual counsellor or coach is difficult and can be expensive.

These systems offer ways of getting access to that kind of help, at least at the initial stages of working on a problem.

I think the best combination will be when ChatGPT can support an actual coach, who can then help more people more effectively.

ChatGPT could be useful at doing some of the prep work that coaches do as they try to understand their clients’ needs.

Sometimes these are standard questions that a system like ChatGPT could be trained to ask the client and then synthesize for the coach.

This could potentially free the coach up to engage more deeply with that client or to help more clients.

Of course, we still don’t know how people will react to having bots involved in these kinds of conversations.

Some work we did a few years ago showed that people’s emotional, relational, and psychological outcomes from conversations with bots could be similar to those with a human.

There is potential there, but care will certainly be needed in introducing these kinds of systems into communication like this.

One area you study is trust and deception. How do you see AI affecting the way people trust one another?

I think if the machine is helping you be you, then I think you can be authentic.

AI systems can be optimized for a whole host of things and some of those can be interpersonal.

You can say, “I want an online dating description and I want to come across as very funny, warm, and open.”

I just did an exercise in class, getting the students to use ChatGPT to create an online dating profile, and was shocked when all the students said that ChatGPT’s description was an accurate representation of themselves!

They also agreed that they would modify it a little, but that it was surprisingly good.

That could really help, especially people who experience communication anxiety or aren’t very good at expressing themselves.

But for the people who are trying to use those descriptions as signals of what a person is like, our usual process of impression formation breaks down because it wasn’t you who came across as very funny, warm, and open – it was a machine doing that for you.

We will have a lot of responsibility in how we go about using these tools.

I think it’s really about how we as humans choose to use it.

Can we trust these systems?

There are a lot of technical questions, like how the AI was developed.

A widespread concern is that these systems are relying on biased data for training.

But there are also deeper philosophical questions around consciousness and intention.

What does it mean for a machine to be deceptive?

It can lie about who it is and talk about a lived experience it did not have, which is deceptive.

But most definitions of deception include an intent to mislead someone.

If the system doesn’t have that intent, is it deceptive? Or does the responsibility come back to the person who was asking the questions or getting the system to be deceptive?

I don’t know.

There are more questions than answers at this point.

You also study disinformation. What worries you about how AI can be used for nefarious purposes?

I also want to recognize that there are real dangers for misuse here.

Renée DiResta, my colleague and collaborator at the Stanford Internet Observatory, has a report out that lays out some of the main ways these systems can be a threat around misinformation, along with possible solutions.

As Renée and the report note, once bad actors are able to use language models to run influence campaigns, they will be able to generate massive amounts of content quite cheaply, and potentially develop novel tactics for large-scale persuasion.

It has the potential to transform and exacerbate the problem of misinformation, and so we need to start working on solutions now.

What can be done to authenticate communication?

In the last year or so there have been a number of papers showing that these large language models and chatbots can no longer be differentiated from their human counterparts.

So how to authenticate communication is a big question.

To come across as authentic, you could say, “This was written or partly written by AI.”

If you do that, you’re being honest, but research shows that people may perceive you as less sincere – one paper found that if you apologize and indicate that you used AI to help write your apology, people view it as less sincere.

In one of our papers, we showed that if you apply for a job and say you used AI to help write your application, people will perceive you as less competent.

There are costs if you disclose you used AI, but if you don’t disclose it, are you coming across as inauthentic?

These questions around authenticity, deception, and trust are going to be incredibly important, and we need a lot more research to help us understand how AI will influence how we interact with other humans.

*Melissa De Witte, Deputy Director, Social Science Communications, Stanford University.

This article first appeared at news.stanford.edu
