Language-Policing: What if AI controls language?

The internet can be a toxic place. It offers anonymity and a platform on which people can express themselves freely. With much of society now communicating online, online abuse and hate speech have increased significantly.

In response, language-filtering artificial intelligence programs are popping up. Brazilian telecom giant TIM launched Teclado Consciente (or 'Conscious Keyboard'). Users are nudged when they type language that is offensive to LGBTQIA+ people or racial groups: the keyboard explains why a certain word is offensive and offers an alternative. Like a socially conscious thesaurus.
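The nudge-and-suggest mechanism described above can be pictured as a lookup from flagged terms to an explanation and an alternative. The sketch below is purely illustrative and is not TIM's implementation: the example terms, explanations and suggestions are assumptions standing in for a curated, expert-reviewed lexicon (real systems typically combine such lists with trained classifiers).

```python
# Minimal sketch of a "conscious keyboard" style nudge.
# The entries below are illustrative placeholders, not a real term list.
FLAGGED_TERMS = {
    # term: (why it can be hurtful, suggested alternative)
    "lame": ("can be read as ableist", "boring"),
    "crazy": ("can stigmatise mental illness", "wild"),
}

def nudge(text: str):
    """Return a (term, explanation, suggestion) tuple for each flagged word typed."""
    nudges = []
    for word in text.lower().split():
        cleaned = word.strip(".,!?")
        if cleaned in FLAGGED_TERMS:
            why, alternative = FLAGGED_TERMS[cleaned]
            nudges.append((cleaned, why, alternative))
    return nudges
```

Crucially, a nudge leaves the choice with the writer: the keyboard surfaces the explanation and the alternative, but does not block or rewrite the message.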

Language-filtering has also been introduced to the gaming world by Intel. The new program, aptly named 'Bleep', allows users to bleep out abusive language in their in-game voice chats. Users control how much toxic language is filtered out and in which categories, including misogyny, white nationalism and the N-word.
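The user-controlled, per-category design can be sketched as a settings table mapped against severity-tagged phrases. Again, this is a hypothetical illustration of the general idea, not Intel's Bleep: the category names, severity levels and example words are all assumptions (Bleep itself works on speech, using on-device AI rather than a word list).

```python
# Minimal sketch of user-controlled, per-category filtering.
# LEXICON entries and severity scores (1 = mild, 3 = severe) are illustrative.
LEXICON = {"jerk": ("insults", 1), "idiot": ("insults", 2)}

# Each user setting maps to the minimum severity that gets bleeped.
THRESHOLD = {"off": float("inf"), "some": 3, "most": 2, "all": 1}

def filter_chat(text: str, settings: dict) -> str:
    """Bleep words whose severity meets the user's threshold for their category."""
    out = []
    for word in text.split():
        entry = LEXICON.get(word.lower().strip(".,!?"))
        if entry:
            category, severity = entry
            if severity >= THRESHOLD[settings.get(category, "off")]:
                out.append("*" * len(word))
                continue
        out.append(word)
    return " ".join(out)
```

For example, with `{"insults": "all"}` every listed insult is starred out, while `{"insults": "some"}` lets milder ones through. The design choice matters for the debate below: the listener, not a platform, decides where each line is drawn.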

Though these examples are relatively small in scale, the digital world could see this technology introduced on larger platforms, or integrated into programs without the option to opt out. The challenge for these programs is to accurately navigate the tug-of-war of online communication: encouraging free speech while suppressing hate speech. Some believe language-policing programs are just another tool for companies to misuse; others encourage companies to take a stand on issues like hate speech and set an example. The question is, should companies and brands be involved in this conversation at all? Is it the responsibility of brands to police the language of their customers, or should they take a leading role in protecting them from hate speech online?