Facebook builds AI to understand memes

Donna Miller
September 14, 2018

Memes have become a popular way to entertain, convey a message, or pursue other goals on social media platforms.

After extracting the text from an image, Rosetta uses a recognition model trained to understand the text and the image together, in order to judge whether a particular video or meme is offensive. Many pages, however, misuse memes in offensive and inappropriate ways.
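The combined text-and-image check described above can be sketched as a simple fused classifier. This is a hypothetical illustration, not Facebook's actual model: the feature vectors, weights, and threshold below are invented for the example.

```python
from typing import List

def classify_meme(text_features: List[float],
                  image_features: List[float],
                  weights: List[float],
                  bias: float) -> bool:
    """Flag a meme when a linear score over fused features is positive."""
    # Fuse the two modalities by concatenating their feature vectors.
    fused = text_features + image_features
    # A linear classifier stands in for the real (far larger) model.
    score = sum(w * f for w, f in zip(weights, fused)) + bias
    return score > 0.0

# Toy example: 2 text features + 2 image features, hand-picked weights.
flagged = classify_meme([0.9, 0.1], [0.8, 0.2],
                        [1.0, -0.5, 1.0, -0.5], -1.0)
print(flagged)  # True: score is 0.9 - 0.05 + 0.8 - 0.1 - 1.0 = 0.55
```

In a production system the features would come from learned text and image encoders, but the fusion-then-score structure is the same idea in miniature.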

As Facebook writes in a blog post about Rosetta, the new system does a better job than was previously possible at understanding harmful or false text used in memes that spread across Facebook and Instagram. Rosetta works in two stages: first, a detection model locates the regions of an image that contain text; second, a text recognition system transcribes the text in those detected regions into something that can be read by machines. The company says it is also working on extending the text recognition system to languages that don't use the Latin alphabet.
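The two-stage pipeline (detect text regions, then transcribe them) can be sketched in outline. Everything here is a stub for illustration; the function names, the `TextRegion` type, and the returned values are assumptions, not Rosetta's real interfaces.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class TextRegion:
    # Bounding box of a detected text region, in pixel coordinates.
    x: int
    y: int
    width: int
    height: int

def detect_text_regions(image: Any) -> List[TextRegion]:
    # Stage 1: a detection model proposes bounding boxes that are
    # likely to contain text. Stubbed with a fixed region here.
    return [TextRegion(x=10, y=20, width=200, height=40)]

def recognize_text(image: Any, region: TextRegion) -> str:
    # Stage 2: a recognition model transcribes the cropped region
    # into a machine-readable string. Stubbed here.
    return "example meme caption"

def extract_text(image: Any) -> List[str]:
    # Full pipeline: detect regions, then transcribe each one.
    return [recognize_text(image, r) for r in detect_text_regions(image)]

print(extract_text(None))  # ['example meme caption']
```

The extracted strings would then be fed, together with the image, into the downstream classifiers described elsewhere in the article.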

By its own admission, Facebook has been struggling to suppress the spread of inappropriate content across its huge platform, from hate speech and threats of violence to disinformation and "fake news". The system is capable of processing over a billion images a day.


Over the last few years, hate speech has emerged as one of the biggest problems faced by the social media platforms.

To handle this monumental task, the company has built a sophisticated artificial intelligence system known as Rosetta. For now, Facebook is employing Rosetta to make its photo searches more relevant, as well as to automatically detect content that violates its hate-speech policy. The AI tool is designed to understand multiple languages. It is expected to flag posts that offend people and to reduce the workload of Facebook's moderators, who review posts reported by users. There are simply too many users and too many posts to lay content moderation exclusively on the shoulders of human workers.

For over a year now, Facebook has been under the scanner for the role it has played in providing a platform for fake news and hate speech.
