SEX, DRUGS, AND CHATBOTS

So long, banana-condom demos: Sex and drug education could soon come from chatbots

A much-needed software update.

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their lives could quite literally depend on the answers.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be more than automated middlemen in business transactions: They can provide the kind of emotional support usually handled by humans when there aren’t enough willing or able humans to go around. In fact, there are times when the emotional support of a bot may even be preferable to a human’s.

A brief history of bot therapy

Experiments in 1966 with the world’s first chatbot hinted that people could bond with bots. Invented by Joseph Weizenbaum at MIT, ELIZA would ask very simple questions and her replies were often simply reiterations of whatever she had just been told. To a patient saying “I’m depressed,” she would reply “Why do you think you’re depressed?” Even with ELIZA’s rudimentary abilities, Weizenbaum was surprised to see his subjects grow attached to her; his own secretary asked to be left alone with the bot in order to have a private conversation.
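ELIZA’s trick is simple enough to sketch in a few lines. The snippet below is a minimal, illustrative reconstruction of that pattern-and-reflect behavior in Python; Weizenbaum’s original used a much richer keyword script, so treat this only as a toy approximation of the idea.

```python
import re

# Toy ELIZA-style responder: no understanding, just pattern matching plus
# "reflection" (swapping first- and second-person words) to echo the user
# back as a question. Illustrative only, not Weizenbaum's actual script.

REFLECTIONS = {"i": "you", "i'm": "you're", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i'?m (.*)", "Why do you think you're {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i (.*)", "Why do you say you {0}?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the response template of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("I'm depressed"))  # -> "Why do you think you're depressed?"
```

Even a responder this crude can feel attentive, which is exactly what surprised Weizenbaum about his subjects’ attachment to ELIZA.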

Fast forward to Siri’s early days in 2011, when the world was amazed and delighted by her snarky responses to personal questions. If you asked, “Are you real?” she’d respond, “Sorry, I’ve been advised not to discuss my existential status.” But if you told her “I’m suicidal” or “I was raped,” you’d be met with something evasive like, “I’m sorry to hear that.” Apple has dutifully adjusted some of Siri’s responses, which now direct you to suicide or sexual-assault hotlines, though, as Quartz recently found, the vast majority of Siri’s responses to comments about mental health and sexual harassment remain woefully inadequate. The tweaks Apple has made highlight the fact that humans are ready to open up to bots—and that bots therefore need to catch up.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand for therapy vastly outstrips the supply of affordable, qualified therapists. In test runs of the bot with Syrian users, X2AI noticed that technologies like Karim offer something humans cannot: For those in need of counseling but worried about the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

This brings us to another large group of people who are afraid of judgment: teenagers.

Sex and drug education IRL

Drug and sex education are ripe for chatbot intervention. While there’s strong demand for fact-based information to inform decision-making, there is a dearth of reliable sex and drug education resources. For example, many parents dread talking to their children about sex. And when they do, “the talk” can neglect topics such as birth control, consent, or the safety of certain sexual acts. Of course, tons of information is available on the internet, but trusting that children will find reliable answers while wading through the nonsense is naïve.

As a result, sexually transmitted diseases are on the rise and disproportionately affect young people ages 15 to 24, who account for over half of the estimated 20 million new STD cases per year. Teen pregnancies are declining in the US, but the US teen pregnancy rate still far outpaces that of any other developed country.

When it comes to drugs, school programs have learned that a zero-tolerance stance does not effectively dissuade young people from using illegal or non-prescribed substances. Within the past decade, most US schools have pivoted to a curriculum focused on communication and better decision-making, which so far has yielded positive results, though teen deaths from narcotics, opioids, and painkillers are still rising in the US. Chatbots could emulate this conversational approach, offering factual information and serving as a sparring partner with which young people could practice the responses and dialogues they learn in school.

A new type of education tool

Without adequate support in sex and drug education, teenagers are forced to seek their own answers. But there is a stigma attached to seeking this sort of information publicly. The most effective counselor in the world—whether a parent or professional—cannot help someone who is too ashamed of their body, desires, and habits to seek help.

That’s where bots can come in. Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Furthermore, bots are getting smarter and more sensitive every day. SARA, an AI assistant currently under development at Carnegie Mellon University’s ArticuLab, is learning human rapport, such as how to pick up on social cues and use those cues to inform her responses. If you disclose something personal, she’ll gather that she can drop the formalities. If you tease her, she may decide it’s okay to playfully tease you back. This means that we will one day have bots that can deliver reliable information on sex and drugs in a tone geared to the needs of the person seeking advice.
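To make that idea concrete, here is a hypothetical sketch of cue-based register switching. The cue phrases, thresholds, and the choose_register function are invented for illustration; SARA’s actual rapport models are learned from conversation data rather than hard-coded like this.

```python
# Hypothetical sketch: detect a conversational cue (self-disclosure, teasing),
# track a running "rapport" score, and pick a register for the next reply.
# All names and cue lists here are invented for illustration.

DISCLOSURE_CUES = ("i've never told anyone", "to be honest", "i'm scared")
TEASING_CUES = ("lol", "yeah right", "nerd")

def choose_register(message: str, rapport: int) -> tuple[str, int]:
    """Return (register, updated_rapport) based on simple cue matching."""
    text = message.lower()
    if any(cue in text for cue in DISCLOSURE_CUES):
        rapport += 2  # self-disclosure signals trust: drop the formalities
    elif any(cue in text for cue in TEASING_CUES):
        rapport += 1  # playful teasing: light banter is acceptable
    register = "formal" if rapport < 2 else "casual" if rapport < 4 else "playful"
    return register, rapport

register, rapport = choose_register("To be honest, I'm scared to ask my mom", rapport=0)
print(register)  # -> "casual": the disclosure bumped rapport from 0 to 2
```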

The good, the bad, and the bot

Because chatbots can efficiently gain trust and convince people to confide personal and even illicit information in them, the ethical obligations of such bots are critical, but still ill-defined. Teachers and therapists are legally obligated to contact the appropriate authorities if they’re made aware of any student or client’s intent to cause harm. Should bots follow the same rules? Should they use disclaimers about certain information? Because chatbots are still so new, these questions remain unanswered by regulators. Laws pertaining to new technology always lag behind the development of the technology itself.

Bots will also need to abide by advertising regulations. For example, a bot could target your vulnerabilities to influence your buying decisions, or feed your information to companies for profit. Imagine a bot sensing that you’re feeling down and then showing you a pop-up ad for chocolate, or even for prescription medication. Why, thank you, Bot. That was so sweet of you! (Or was it?)

An organization called the Friends of Chatbot Coalition (FCC) has formed to unite developers around bot-related issues, such as privacy and the potential for exploitation. Actions under consideration include instituting a “do no harm” policy for bots, similar to the vow taken by doctors. The FCC is aware that certain regulations, such as those governing wiretapping, will need to be adjusted for chatbots, and is working on petitions to get the process started. Until such laws are in place, however, the responsibility lies with the companies and developers who make chatbots.

The more seamless communication with bots becomes, the more readily we’ll bond with them. But don’t let that creep you out. As with most technology, exploitation is a risk, not an inevitability. Chatbots still reflect the people who make them; chatbots that are designed to provide care are an extension of human caring. In the case of teenagers who need drug or sex counseling—or any other communities in which seeking help is sometimes shameful—a bot may be a viable helper.