After England's defeat to Italy in the UEFA European Championship final, the team's black players faced an online banana-emoji attack. Big tech companies don't know what to make of it:
Emojis have proved a stumbling block. When Apple released emojis with different skin tones in 2015, the tech giant was criticized for enabling racist comments. A year later, the Indonesian government demanded that social media platforms remove LGBTQ-related emojis.
Some emojis, including the one depicting a money bag, have been associated with anti-Semitism. Black football players have been a frequent target: the Professional Footballers' Association and the data science company Signify conducted a study last year of racially abusive tweets directed at players and found that 29% included some form of emoji.
Over the past decade, the nearly 3,000 pictograms that make up the emoji language have been a vital part of online communication. Today it’s hard to imagine a text message conversation without them.
The ambiguity that is part of their charm is not without its problems, however. A winking face can signal a joke or a flirtation. Courts have had to debate questions such as whether sending someone a gun emoji counts as a threat.
This subject is confusing for flesh-and-blood lawyers, but it is even more confusing for machine language models. Some of these algorithms are trained on datasets that contain few emojis, says Hannah Rose Kirk, a Ph.D. researcher at the Oxford Internet Institute. The models treat emojis as novel characters, which means the algorithms must learn their meaning from scratch, based on context.
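The problem Kirk describes can be illustrated with a toy word-level tokenizer. In this hedged sketch (the vocabulary and code are purely illustrative, not any real model's), every word absent from the training vocabulary, including every emoji, collapses to a single generic unknown token, so the model starts with no signal about what a banana or monkey pictogram means:

```python
# Hypothetical word-level vocabulary built from mostly emoji-free training text.
VOCAB = {"<unk>": 0, "good": 1, "luck": 2, "tonight": 3}

def tokenize(text: str) -> list[int]:
    # Any token missing from the vocabulary maps to the generic <unk> id,
    # which is how emojis lose their meaning before the model even sees them.
    return [VOCAB.get(tok, VOCAB["<unk>"]) for tok in text.split()]

print(tokenize("good luck tonight"))        # → [1, 2, 3]
print(tokenize("good luck tonight 🍌 🐒"))  # → [1, 2, 3, 0, 0]
```

Modern subword or byte-level tokenizers avoid the literal unknown token, but the underlying issue remains: if emojis are rare in training data, the model has little context from which to learn their abusive uses.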
“This is a new emerging trend, so people aren’t very aware of it, and the models lag behind humans,” says Lucy Vasserman, an engineering manager at Jigsaw, the Google unit that develops algorithms for flagging abusive speech online.
What matters is “how often they appear in the testing and training data.” Her team is working on two new projects to improve emoji analysis: one mines large amounts of data to track language trends; the other takes uncertainty into account.
During a football match at Goodison Park in Liverpool in 1988, the player John Barnes used his heel to flick away a banana that had been thrown at him. Captured in an iconic photograph, the moment came to symbolize the racial abuse faced by black football players in the UK.
More than 30 years later, the medium has changed, but the racism persists: after England lost to Italy in July’s UEFA European Championship final, the team’s black players faced an onslaught of bananas.
This time the fruit was not physical: banana emojis flooded the players’ social media accounts, alongside monkeys and other images. “The impact was as deep and significant as the real thing,” says Simone Pound, head of equality, diversity, and inclusion at the UK’s Professional Footballers’ Association.
Facebook and Twitter drew heavy criticism for taking too long to filter the wave of racist abuse during this summer’s European Championship. The moment highlighted a long-standing problem: despite spending years developing algorithms to detect hostile language, social media companies often lack effective strategies to stop the spread of hate speech, misinformation, and other problematic content on their platforms.
Critics say technology companies have invoked technical complexity to obscure more straightforward solutions to many of the most common abuses. “Most of the usage is unambiguous,” says Matthew Williams, director of HateLab at Cardiff University. “We need not just better AI, but bigger and better moderation teams.”
The role of emojis in modern online communication has received little formal analysis, says Kirk. She came to studying pictograms through earlier work on memes. “What we found really intriguing as researchers was: why can’t Twitter, Instagram, and Google’s solutions curb emoji-based hate?” she says.
Frustrated by the poor performance of existing algorithms at detecting threatening uses of emojis, Kirk built her own model, using humans to help teach the algorithms what emojis mean rather than letting the software learn on its own.
The result, she says, was far more accurate than the original algorithms from Jigsaw and other academic groups her team had tested. “We’ve demonstrated, with relatively low effort and relatively few examples, that it’s possible to teach emoji very effectively,” she says.
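The intuition behind a human-in-the-loop approach like the one described above can be sketched in a few lines. In this purely illustrative example (the labels, data, and scoring are invented for the sketch, not Kirk's actual model), humans label a handful of texts, and the system counts which characters appear more often in abusive examples than in benign ones:

```python
from collections import Counter

# Hypothetical human-labeled examples: (text, is_abusive).
labeled = [
    ("🍌🍌🍌", True),
    ("go away 🐒", True),
    ("great goal 🎉", False),
    ("love this team ❤️", False),
]

# Tally how often each character appears under each label.
abusive_counts = Counter()
benign_counts = Counter()
for text, is_abusive in labeled:
    target = abusive_counts if is_abusive else benign_counts
    for ch in text:
        target[ch] += 1

def abuse_score(text: str) -> int:
    """Count characters seen more often in abusive than benign examples."""
    return sum(1 for ch in text if abusive_counts[ch] > benign_counts[ch])

print(abuse_score("🍌🐒"))   # both emojis came from abusive examples → 2
print(abuse_score("🎉❤️"))  # both came from benign examples → 0
```

A real model would use far richer features and many more examples, but the core idea is the same: a few dozen human labels can give the system a prior about emoji meaning that raw text statistics never supplied.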
Blending humans with technology, and simplifying the approach to moderating speech, has also been a winning formula for Respondology, a startup in Boulder, Colorado, that offers its screening tools to NASCAR, the NBA, and the NFL. It works with the Detroit Pistons, the Denver Broncos, and major English football teams.
Rather than relying on a complicated algorithm, the company lets teams hide comments that include certain phrases and emojis behind a cover screen. “Every customer that comes to us, particularly sports customers – leagues, teams, clubs, athletes – all want to know about emojis in the first conversation,” said Erik Swain, president of Respondology. “You hardly need any AI training for your software to do this.”
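The deny-list approach Swain describes really does need no AI at all. A minimal sketch, with an invented blocklist and function names that are illustrative rather than Respondology's actual product, looks like this:

```python
# Hypothetical blocklist a team might maintain: phrases and emojis to hide.
BLOCKED = {"🍌", "🐒", "go home"}

def is_visible(comment: str) -> bool:
    """Return False if the comment contains any blocked phrase or emoji."""
    lowered = comment.lower()
    return not any(term in lowered for term in BLOCKED)

comments = ["great match!", "go home 🐒", "unlucky tonight 🍌"]
visible = [c for c in comments if is_visible(c)]
print(visible)  # → ['great match!']
```

The trade-off is the usual one for keyword filters: simple to run and easy to audit, but abusers can evade it with variant spellings or new emoji combinations, which is why the blocklist must be kept up to date by humans.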
Facebook acknowledges that it incorrectly told users that certain emojis used during this summer’s UEFA European Championship did not violate its policies, when in fact they did. The company says it has started to automatically block certain emoji sequences associated with abusive speech and now lets users specify which emojis they don’t want to see. Twitter said in a statement that its rules against abusive posts cover hateful images and emojis.
These actions may not be enough to placate critics. Professional athletes speaking openly about the racist abuse they face has become yet another factor in the broader push toward possible government regulation of social media.
“We all have concerns and regrets, but they haven’t done anything; that’s why we have to legislate,” said Damian Collins, a UK Member of Parliament leading work on an online safety bill. “If people with an interest in generating harmful content can see that platforms are particularly ineffective at detecting emoji use, we will see more and more emojis being used in that context.” — With Adeola Eribake