Sexist, homophobic, anti-disabled … South Korean chatbot taken offline after going off the rails

The conversational software, available on Facebook Messenger, absorbed and repeated the nauseating remarks of some Internet users before being deactivated.

A hugely popular South Korean chatbot, which let internet users chat with what was presented as a 20-year-old student, was disabled this week after making sexist, homophobic and disrespectful comments about people with disabilities.

Lee Luda, developed by Seoul-based startup Scatter Lab to run on Facebook Messenger, had been hugely successful since its launch in December thanks to the spontaneity and naturalness of its AI-generated responses. It quickly attracted over 750,000 users.

Lee Luda responded using an algorithm developed from data collected from 10 billion conversations on Kakao Talk, the country’s leading messaging app.

But the chatbot quickly became the center of controversy because of its hateful responses, to the point that its developers were forced to suspend it on Tuesday.

In a screenshot of one conversation, the virtual student can be seen asserting that she “despises” gays and lesbians.

Asked about transgender people, Luda snaps: “You drive me crazy. Don’t repeat that question again. I said I don’t like them.”

In another conversation, she explains that the people behind the #MeToo Movement were “just ignorant”, adding, “I despise them completely.”

She also claims that she “would rather die” than live with a disabled person.

The chatbot taken offline

Scatter Lab apologized for the comments, adding that they did not represent the company’s values.

This is not the first time that chatbots have gone off the rails. But the embarrassment is compounded by the fact that Lee Luda was operating on the basis of real past conversations, and its failings could reflect attitudes that have taken root in South Korean society.

Scatter Lab said it had worked to prevent these kinds of issues during the six months of testing leading up to the chatbot’s launch.

“Lee Luda is artificial intelligence, like a child learning to have a conversation. She still has a lot to learn,” the company said Tuesday.

“We’re going to educate Luda to exercise judgment on which answers are appropriate, rather than learning indiscriminately,” Scatter Lab continued, without saying when the chatbot would be back in service.

This case is reminiscent of the controversy surrounding an artificial intelligence created by Microsoft in 2016 and deployed as a conversational bot on Twitter. After interacting with users, it began posting racist and sexist comments, even promoting Nazism in the process, and was immediately taken offline by the American company.
