AI programmers are developing bots that can identify digital bullying and sexual harassment.
Artificial intelligence programmers at Chicago-based NexLP are developing #MeToo bots that can detect inappropriate communication between colleagues and flag it for human review, with the aim of deterring digital bullying and sexual harassment.
The company’s AI platform is already being used by more than 50 corporate clients, including law firms in London. However, according to a recent article in The Guardian, the adjustments required to implement #MeToo bots will take some time.
Meanwhile, another AI startup, Spot, has created a chatbot that lets employees report sexual harassment allegations anonymously; it offers advice and asks sensitive follow-up questions to help investigate alleged physical or digital harassment.
Spot aims to fill gaps in HR teams’ ability to deal with such issues sensitively, while also preserving anonymity.
Which behaviours NexLP’s #MeToo bots will treat as red flags is not yet clear, because it is difficult to define a set of rules that applies across different organisational cultures. The bots will also need to learn nuances in language, and that will not be a quick process.
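The difficulty of fixed rules can be seen in a toy example. The sketch below is purely illustrative and hypothetical (the terms, threshold, and function are invented for this article, and are not NexLP's actual method): a naive keyword flagger produces false positives in one workplace culture while missing harassment phrased in unlisted words.

```python
# Hypothetical sketch of a naive rule-based message flagger.
# The rule set and threshold are invented for illustration only.

RED_FLAG_TERMS = {"racy", "dirty"}   # toy keyword list
REPEAT_THRESHOLD = 15                # e.g. a long run of messages to one colleague

def flag_message(text: str, recent_count: int) -> bool:
    """Flag a message if it contains a listed term, or if it is part of a
    long run of messages to the same recipient."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & RED_FLAG_TERMS:
        return True
    return recent_count >= REPEAT_THRESHOLD

# Context is lost: "dirty data" in an engineering chat is flagged,
# while harassing messages using unlisted words would pass unnoticed.
print(flag_message("Please clean the dirty data", 1))  # True (false positive)
print(flag_message("See you at the meeting", 1))       # False
```

A rule set like this would need retuning for every organisation, which is one reason the adjustments are expected to take time.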
“I wasn’t aware of all the forms of harassment,” says NexLP chief executive Jay Leib in the article. “I thought it was just talking dirty. It comes in so many different ways. It might be 15 messages... it could be racy photos.”
He added that flagging employees’ correspondence can create a climate of distrust, and that offenders may learn how to trick the software or simply switch to communication channels the bots do not monitor.