27 September 2023

Learning from tech: AI trains counsellors to deal with teens in crisis

Abby Ohlheiser and Karen Hao* say the Trevor Project, America's crisis hotline for LGBTQ youth, is turning to a GPT-2-powered chatbot to help train the counsellors who support troubled teenagers, but it's setting strict limits on the technology.


Counsellors volunteering at the Trevor Project need to be prepared for their first conversation with an LGBTQ teen who may be thinking about suicide.

So first, they practise. One of the ways they do it is by talking to fictional personas like “Riley,” a 16-year-old from North Carolina who is feeling a bit down and depressed.

With a team member playing Riley’s part, trainees can drill into what’s happening: they can uncover that the teen is anxious about coming out to family, recently told friends and it didn’t go well, and has experienced suicidal thoughts before, though not at the moment.

Now, though, Riley isn’t being played by a Trevor Project employee but is instead being powered by AI.

Just like the original persona, this version of Riley—trained on thousands of past transcripts of role-plays between counsellors and the organisation’s staff—still needs to be coaxed a bit to open up, laying out a situation that can test what trainees have learned about the best ways to help LGBTQ teens.

Counsellors aren’t supposed to pressure Riley to come out. The goal, instead, is to validate Riley’s feelings and, if needed, help develop a plan for staying safe.

Crisis hotlines and chat services make a fundamental promise: reach out, and we’ll connect you with a real human who can help.

But the need can outpace the capacity of even the most successful services.

The Trevor Project believes that 1.8 million LGBTQ youth in America seriously consider suicide each year.

The existing 600 counsellors for its chat-based services can’t handle that need.

That’s why the group—like an increasing number of mental health organisations—turned to AI-powered tools to help meet demand.

It’s a development that makes a lot of sense, while simultaneously raising questions about how well current AI technology can perform in situations where the lives of vulnerable people are at stake.

Taking risks—and assessing them

The Trevor Project believes it understands this balance—and stresses what Riley doesn’t do.

“We didn’t set out to and are not setting out to design an AI system that will take the place of a counsellor, or that will directly interact with a person who might be in crisis,” says Dan Fichter, the organisation’s head of AI and engineering.

This human connection is important in all mental health services, but it might be especially important for the people the Trevor Project serves.

According to the organisation’s own research in 2019, LGBTQ youth with at least one accepting adult in their life were 40 per cent less likely to report a suicide attempt in the previous year.

The AI-powered training role-play, called the crisis contact simulator and supported by money and engineering help from Google, is the second project the organisation has developed this way: it also uses a machine-learning algorithm to help determine who’s at highest risk of danger.

(It trialled several other approaches, including many that didn’t use AI, but the algorithm simply gave the most accurate predictions for who was experiencing the most urgent need.)
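
For readers who want a concrete picture of what that kind of triage might look like, the sketch below scores incoming messages for urgency with an off-the-shelf text classifier so that the highest-risk conversations reach a counsellor first. It is a hypothetical illustration built with the open-source scikit-learn library; the example messages, labels, and model choice are assumptions, not details of the Trevor Project’s actual system.

```python
# Hypothetical illustration of ML-based triage (not the Trevor Project's model):
# score each incoming message for urgency so human counsellors see the
# highest-risk conversations first. A human still performs the full assessment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: past intake messages labelled for risk (1 = high risk).
messages = [
    "i don't think i can keep myself safe tonight",
    "i came out to my friends and it didn't go well",
    "i'm stressed about school and need someone to talk to",
    "i've been having thoughts of hurting myself",
]
labels = [1, 0, 0, 1]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# New conversations are queued by predicted risk, highest first.
incoming = [
    "i'm really scared and i don't know what to do tonight",
    "can we talk about coming out to my parents?",
]
scores = model.predict_proba(incoming)[:, 1]
for text, score in sorted(zip(incoming, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```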

AI-powered risk assessment isn’t new to suicide prevention services: the Department of Veterans Affairs also uses machine learning to identify at-risk veterans in its clinical practices, as the New York Times reported in late 2020.

Opinions vary on the usefulness, accuracy, and risk of using AI in this way.

In specific environments, AI can be more accurate than humans in assessing people’s suicide risk, argues Thomas Joiner, a psychology professor at Florida State University who studies suicidal behaviour.

In the real world, with more variables, AI seems to perform about as well as humans.

What it can do, however, is assess more people at a faster rate.

Thus, it’s best used to help human counsellors, not replace them.

The Trevor Project still relies on humans to perform full risk assessments on young people who use its services. And after counsellors finish their role-plays with Riley, those transcripts are reviewed by a human.

How the system works

The crisis contact simulator was developed because doing role-plays takes up a lot of staff time and is limited to normal working hours, even though a majority of counsellors plan on volunteering during night and weekend shifts.

But even if the aim was to train more counsellors faster, and better accommodate volunteer schedules, efficiency wasn’t the only ambition.

The developers still wanted the role-play to feel natural and for the chatbot to adapt nimbly to a volunteer’s mistakes.

Natural-language-processing algorithms, which had become remarkably good at mimicking human conversation in recent years, seemed like a good fit for the challenge.

After testing several options, the Trevor Project settled on OpenAI’s GPT-2 algorithm.

GPT-2 was among the largest language models of its day and can generate strikingly human-like text on demand, though it doesn’t bring us any closer to true intelligence.

The chatbot uses GPT-2 for its baseline conversational abilities.

That model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language.

The Trevor Project then trained it further on all the transcripts of previous Riley role-play conversations, which gave the bot the materials it needed to mimic the persona.
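
For a sense of what that kind of fine-tuning involves, here is a minimal sketch using the open-source Hugging Face Transformers library. It is not the Trevor Project’s code: the transcript file, the dialogue format, and the training settings are assumptions made for illustration.

```python
# Hypothetical sketch: fine-tune GPT-2 on role-play transcripts so it replies
# in a consistent persona. File name and dialogue format are assumptions.
import torch
from torch.utils.data import Dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

class TranscriptDataset(Dataset):
    """One role-play transcript per line, tokenised to a fixed maximum length."""
    def __init__(self, path, tokenizer, block_size=512):
        with open(path, encoding="utf-8") as f:
            texts = [line.strip() for line in f if line.strip()]
        self.examples = [
            tokenizer(t, truncation=True, max_length=block_size)["input_ids"]
            for t in texts
        ]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, i):
        return torch.tensor(self.examples[i], dtype=torch.long)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")      # pretrained on web text

# e.g. "COUNSELLOR: Hi Riley, how are you doing? RILEY: not great tbh ..."
train_data = TranscriptDataset("riley_transcripts.txt", tokenizer)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="riley-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=train_data,
)
trainer.train()

# After fine-tuning, the model continues a dialogue in character.
prompt = "COUNSELLOR: Hi Riley, thanks for reaching out. How are you feeling?\nRILEY:"
inputs = {k: v.to(model.device) for k, v in tokenizer(prompt, return_tensors="pt").items()}
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```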

Throughout the development process, the team was surprised by how well the chatbot performed.

There is no database storing details of Riley’s bio, yet the chatbot stays consistent because every transcript reflects the same storyline.

But there are also trade-offs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas.

More than one chatbot has been led disastrously astray this way, the most recent being a South Korean chatbot called Lee Luda that had the persona of a 20-year-old university student.

After quickly gaining popularity and interacting with more and more users, it began using slurs to describe the queer and disabled communities.

The Trevor Project is aware of this and designed ways to limit the potential for trouble.

While Lee Luda was meant to converse with users about anything, Riley is very narrowly focused.

Volunteers won’t deviate too far from the conversations it has been trained on, which minimises the chances of unpredictable behaviour.

This also makes it easier to comprehensively test the chatbot, which the Trevor Project says it is doing. “These use cases that are highly specialised and well-defined, and designed inclusively, don’t pose a very high risk,” says Nenad Tomasev, a researcher at DeepMind.

Human to human

This isn’t the first time the mental health field has tried to tap into AI’s potential to provide inclusive, ethical assistance without hurting the people it’s designed to help.

Researchers have developed promising ways of detecting depression from a combination of visual and auditory signals.

Therapy “bots,” while not equivalent to a human professional, are being pitched as alternatives for those who can’t access a therapist or are uncomfortable confiding in a person.

Each of these developments, and others like them, requires thinking about how much agency AI tools should have when it comes to treating vulnerable people.

And the consensus seems to be that at this point the technology isn’t really suited to replacing human help.

Still, Joiner, the psychology professor, says this could change over time.

While replacing human counsellors with AI copies is currently a bad idea, “that doesn’t mean that it’s a constraint that’s permanent,” he says. People “have artificial friendships and relationships” with AI services already.

As long as people aren’t being tricked into thinking they are having a discussion with a human when they are talking to an AI, he says, it could be a possibility down the line.

In the meantime, Riley will never face the youths who actually text in to the Trevor Project: it will only ever serve as a training tool for volunteers.

“The human-to-human connection between our counsellors and the people who reach out to us is essential to everything that we do,” says Kendra Gaunt, the group’s data and AI product lead.

“I think that makes us really unique, and something that I don’t think any of us want to replace or change.”

*Abby Ohlheiser is a senior editor at MIT Technology Review focused on internet culture. Karen Hao is the senior AI reporter at MIT Technology Review, covering the field’s cutting-edge research and its impacts on society.

This article first appeared at technologyreview.com.
