A social network with mental health in mind

with Robert Morris, Co-founder of Koko


More and more people are turning to social networks to share their problems and seek emotional support.

At best, they get a little encouragement from their peers or strangers on the internet. However, it’s not unusual for posts like these to receive backlash or be ignored, creating even more emotional distress for a person who’s already feeling down.

But what if it didn’t have to be this way? In this episode of HealthRedesigned, we chat with Robert Morris, Co-founder of Koko, an AI bot that crowdsources words of encouragement for anyone going through an emotionally difficult time. Through Koko, people can anonymously get advice and support to help them cope with their feelings and improve their mental wellbeing.

A safety net for social networks

How does Koko work?

We started as a downloadable app, but as we built it out, we realised that more and more people were going on social networks to publicly disclose emotional vulnerability: anxiety, stress, depression, self-harm and suicidal ideation.

They were hacking the platforms they use every day to get emotional support with mixed results. At worst, they were bullied and preyed upon, but more commonly they were ignored. So, we thought if a good user design principle is to meet people where they are, maybe we should build our system in a way that could be used directly on social networks. And that’s what we started doing.

We worked with Kik, a messaging platform primarily used by teens, and we built something on Twitter. We modified our platform so that users wouldn’t have to leave the platforms they were already using and converted everything to a text-based interface. We then became a referral partner for many of the large social networks, like Tumblr and Pinterest.

What we do now is a lot of crisis triage. We still send the majority of our users to the peer support service we started with, but a lot of our work now is trying to quickly and effectively identify users who are in acute distress and expressing suicidal thoughts. We route them to crisis lifelines in a way that maximises the likelihood that they’ll use these services.

What does the Koko interface look like?

It can be used on Twitter by direct messaging our account, and on Kik and Telegram just as you would message any other contact, while users on Tumblr get routed to a web-based version of our service. Once they get started, users are immediately greeted by a chatbot that welcomes them to the service, onboards them, orients them to what’s going on and asks them what’s bothering them. As quickly as possible, the chatbot gets out of the way and either sends the user to a lifeline or acts as an intermediary between peers on the network, connecting someone seeking help with others who are looking to help.

Increasing happiness by helping others

You’ve done a lot of research on how helping others can actually help yourself. Could you tell us more?

The project actually started as a tool that I used for myself. I would write down what I was struggling with, which at the time was the craziness of being a grad student at MIT. I’d then collect bite-sized pieces of feedback from anonymous peers who were trained to help me in a specific way; I was basically crowdsourcing cognitive therapy. I was using Amazon’s Mechanical Turk to prototype it.

I would pay random workers from Amazon a few cents to read about my problems and then craft a response. Some tried to look for thought distortions in what I was saying, others tried to reframe it in a more positive light and some just validated and empathised with me. I found that a lot of these workers gained insights about themselves as they worked through the tasks. Some even remarked that they would do it for free; I’d never seen anything like that. When we ran our clinical trial, we took a leap of faith that people would be willing to help each other for free, and that’s exactly what happened.

“When we collected feedback, most users said the most beneficial part of the platform was not getting help, but helping others.”

They would see post after post where they had to help someone think more hopefully or optimistically and take a balanced view of a stressful situation. Doing this over time turned it into a muscle or motor reflex for them in their own lives. So when they were stressing out, they would think back to their persona as a helper on Koko and direct that energy towards themselves in a way they hadn’t previously. We had these insights coming from the users themselves, and then we ran a study and found that, indeed, the people who helped others the most were the ones who saw the most significant gains in terms of reduced depression symptoms and increased wellbeing.

Managing interactions and responses

How do you ensure users are responding to messages in a suitable and empathetic way?

We made two mistakes when we first approached this problem. The first was that we were too rigid in our definition of what people should do: a lot of our users felt like they weren’t able to be as creative or free in how they might naturally respond. But if we gave them too much leniency, they would just provide knee-jerk responses that might not be helpful.

The second mistake was trying to use explicit instructions and giving users lengthy examples of good and bad responses. We still do a tiny version of that, but we found that the best way to sculpt and nudge user interactions is through other design approaches.

When you create a post on our network, we’ll ask you to describe a negative thought you’re having and give you some pre-filled options to select, like “I’m a loser,” “People are terrible,” or “This will never get better.” These are very common negative thoughts and are often at the root of why people feel anxious about something.

By having posters select these, respondents can latch onto them and naturally refute these negative statements. So if someone says, “I’m a loser,” the respondent might naturally want to provide evidence that the poster isn’t a loser.

At the moment, responders can also select pre-filled options to kickstart their response and help them get into an empathetic frame of mind, such as “I hear you,” “I’ve been there,” or “I understand.” But people respond in many different ways, and we want to stay open to techniques and strategies we hadn’t anticipated or that don’t necessarily appear in the clinical literature, and we’ve seen those as well.
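As a rough illustration of this scaffolding, here is a minimal sketch that models the two prompt lists as simple Python structures, assuming a poster-side list of negative thoughts and a responder-side list of empathetic openers; the names and functions are illustrative assumptions, not Koko’s actual data model.

```python
# Illustrative sketch of the prompt scaffolding described above.
# All names and structures are assumptions, not Koko's real code.

# Pre-filled negative thoughts a poster can select, giving responders
# a concrete statement they can naturally refute.
POSTER_PROMPTS = [
    "I'm a loser",
    "People are terrible",
    "This will never get better",
]

# Pre-filled openers that nudge responders into an empathetic frame
# of mind before they write freely.
RESPONDER_STARTERS = [
    "I hear you",
    "I've been there",
    "I understand",
]

def start_post(selected_thought: str, details: str) -> dict:
    """Pair a selected negative thought with the poster's own words."""
    return {"negative_thought": selected_thought, "details": details}

def start_reply(starter: str) -> str:
    """Seed a reply with an empathetic opener; the responder edits freely."""
    return starter + ", "
```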

How does Koko identify and manage crisis situations where someone is expressing a desire to commit suicide or harm themselves?

We get an incredible number of posts every day from people describing an urge to self-harm, feeling suicidal or experiencing severe eating disorders. We’ve built a set of machine learning classifiers to analyse all the text that comes into our network, and we use this to manage bullying, malicious responses and inappropriate behaviour.

Every interaction on our platform is highly scrutinised; that’s partly why the interactions are so constrained. Users do not have long back-and-forth private chats: at most, three short messages are exchanged, so we can supervise what’s going on.

There are a lot of back-channel chats, and it can be hard to detect some of these users in crisis. But we’ve trained a set of neural net classifiers, using hundreds of thousands of data points and labels, to detect whenever the text suggests a person might be in danger of harming themselves.

These classifiers are our first line of defence: if a classifier is 99.5% confident, we go ahead and let it decide whether or not someone needs resources. If it’s less confident, we have a team of human moderators working 24/7 to help decide. And if a user posts about self-harm and our algorithm picks it up, our responsibility is to get as much information as possible from them and route them as quickly as possible to the resources available in their country.
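As a minimal sketch of the triage logic described here, the snippet below routes a post automatically only when the classifier clears the 99.5% confidence bar and escalates everything else to human moderators. The keyword-based stand-in classifier and all names are illustrative assumptions; the real system uses trained neural net classifiers.

```python
# Minimal sketch of a confidence-threshold triage pipeline.
# The keyword "classifier" is a toy stand-in for the trained neural net
# classifiers described above; everything except the quoted 99.5% figure
# is an illustrative assumption.

AUTO_DECIDE_THRESHOLD = 0.995  # the "99.5% confident" bar

CRISIS_PHRASES = ("suicide", "kill myself", "self-harm")  # toy features

def classify_risk(text: str) -> tuple[str, float]:
    """Return a (label, confidence) pair for a piece of post text."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return "crisis", 0.999
    return "no_crisis", 0.7  # deliberately under-confident

def triage(post_text: str) -> str:
    """Decide where a post goes: automatic routing or human review."""
    label, confidence = classify_risk(post_text)
    if confidence >= AUTO_DECIDE_THRESHOLD:
        # High confidence: the classifier decides on its own.
        return "crisis_lifeline" if label == "crisis" else "peer_support"
    # Anything less confident goes to the 24/7 human moderation queue.
    return "human_moderator_queue"

print(triage("I keep thinking about suicide"))  # -> crisis_lifeline
print(triage("Rough week at school"))           # -> human_moderator_queue
```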

Offering a lifeline, online

Can you share an example of how Koko has benefited somebody?

I met a lot of wonderful people through user interviews, and one girl in particular stood out to me. She had started a Tumblr blog about plush toys and stuffed animals; there’s a whole scene around people making these things by hand.

She was really into it and had this huge online persona around her plush obsession. At some point, that stopped being cool to her friends and they started to shun her. Understandably, she began feeling depressed, and her Tumblr blog reflected that. She started reblogging posts from depression blogs, and the colour palette of her blog grew more grey and dark over time.

Eventually, she was on Tumblr searching for a suicide note and that yielded a link to our service. She went on Koko and described what was bothering her and got some responses that were really lovely. She took screenshots of them to return to over time and think about whenever she felt bad. But then she started responding to other people and became one of our more active users and started telling all her friends about it. She now has this new identity where she’s someone who helps people on our service.

That was a great user story about using this approach to catch people and reach them at the moment they’re opening up online. We wouldn’t have found her any other way, so it was really cool to get her from Tumblr into our service and for her to have those experiences.

What has been one of your biggest learnings from a design point of view?

One of the hardest things for us has been maintaining a balance between clinical principles and listening to users and following their instincts and inclinations. The real challenge for any kind of clinical mental health intervention is helping people choose coping strategies that offer longer-term benefits; this is what motivated my work six years ago. Over time, I’ve had to surrender a very dogmatic, literal approach where I’d take what we know works in the clinic and transplant it unchanged into an online, digital user experience.

“I think the biggest design lesson I've learned is you can't just stamp something that works well offline, online.”

You have to strike a balance between what users want in the moment and what might be good for them. From our perspective, users don’t want to immediately start learning cognitive therapy skills; our younger audience especially just wants to feel less alone. That’s their primary need, and we have to meet it in a way that also exposes them to the skills we think will be helpful. I’ll be the first to say that we’re not doing a perfect job.

This is something we have to continually work on. We often zigzag between being too clinically heavy and then too light, and I’m not sure where that middle ground is. But I think wandering around that path has been the most educational part of this for me as a designer.
