
How Social Media AI Can Recognize and Remove Toxic Speech


Nearly everyone experiences negative comments or unpleasant interactions on social media from time to time. Unfortunately, extreme forms of toxic speech, such as blatant racism or sexism, are becoming more prevalent across social media platforms. Social media companies are increasingly using artificial intelligence (AI) to combat this growing problem. AI still struggles to recognize toxic speech on social media for several reasons, but new technologies may enable social media AI to recognize and remove toxic speech before users can even interact with it.

 

How Prevalent Is Toxic Speech on Social Media?

Axios reports that hate speech among teens rose in several areas from 2018 to 2020. The largest increase was in racist comments, with 23 percent of 14- to 17-year-olds stating in 2020 that they often saw racist comments on social media. Teens aren't the only ones struggling with toxic speech and the problems it can cause on the various social media platforms.

Hundreds of millions of people post comments, articles, and memes on social media every day. Even with advancing technology, recognizing and removing toxic speech is an overwhelming task. According to USA Today, 53 percent of Americans say they experience some type of harassment or hate while on social media.

Facebook and other social platforms are working to combat the problem. According to Nextgov, social media AI was key to removing 27 million pieces of hate speech during the last three months of 2020. Statistics like these make clear how widespread toxic speech is across social media platforms and how much more needs to be done to combat the problem.


How Does Social Media AI Define Toxic Speech?

Hate speech is ultimately subjective. What one individual considers offensive often doesn't upset another, and this is one of the main reasons recognizing toxic speech is so difficult. Ultimately, executives at the various social media platforms will need to decide what counts as toxic speech and what doesn't. There is a delicate balancing act between First Amendment free speech rights and providing a safe platform for all users.

Enforcing objective policies is the path that tech companies must attempt to take. Implementing the most advanced AI techniques can help limit toxic speech effectively and efficiently. The first step is to define what the AI will treat as hateful, racist, sexist, or offensive on a social media platform. Using AI, Big Tech needs to apply its standards consistently to all people and groups, regardless of political affiliation (a toy sketch of what such an explicit, uniformly applied policy might look like follows below).

Even if a platform can keep training its social media AI and deep learning algorithms on real-time information, there are other challenges to consider. Analytics Insight states that hate speech varies across the world because what is toxic often differs between cultures. Even within the same country, cultures are continually changing. Social media platforms will need to remain vigilant when determining what people view as acceptable and unacceptable language. Only then can they design social media AI systems that accurately and consistently recognize and remove toxic speech.
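To make "define the standard first, then apply it uniformly" concrete, here is a minimal Python sketch. Everything in it is hypothetical, not any platform's actual policy or code: the point is simply that each removal must cite a written category, and author identity is recorded only for bias audits, never used in the decision.

```python
# Hypothetical moderation policy: every category is written down
# explicitly and applied to all users identically.
POLICY = {
    "hate_speech": "attacks a person or group based on protected traits",
    "harassment": "targeted, repeated abuse of an individual",
    "sexism": "demeaning content based on gender",
    "racism": "demeaning content based on race or ethnicity",
}

def removal_record(comment_text, flagged_category, author_affiliation):
    """Every removal must map to a defined policy category; the author's
    political affiliation plays no role in the decision itself."""
    assert flagged_category in POLICY, "removal must cite a written rule"
    return {
        "text": comment_text,
        "category": flagged_category,
        "reason": POLICY[flagged_category],
        # Logged so auditors can check enforcement is even-handed,
        # but deliberately not an input to the moderation decision.
        "author_affiliation": author_affiliation,
    }
```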

 

How Does Social Media AI Recognize Toxic Speech?

The algorithms in use on the internet sometimes conflate hateful comments with non-hateful ones because they contain similar words. Any words relating to gender, religion, disability, or sexual orientation may elicit a red flag even when they are not written or spoken in a derogatory manner. When implementing social media AI, it's important to understand that automated systems score very high in technical tests: the artificial intelligence is accurate when recognizing and evaluating language in isolation. It is, however, not always accurate in real-life situations.

Researchers are developing algorithms that filter out these inconsistencies and misunderstandings, leaving what most people agree is toxic speech. The researchers focus on the kinds of language the majority labels as hate speech, calling the tags that most individuals agree on when marking potentially toxic or hateful content "primary labels." These datasets are used to train artificial intelligence models so that social media AI can more easily target misinformation and toxic language.

It is much more difficult, however, to evaluate misinformation, harassment, or hate speech across different contexts. Current algorithms may even introduce racial bias. Vox reports that training social media AI to identify hate speech may in fact sometimes amplify racial bias: AI systems are sometimes more likely to flag tweets written in African American English as toxic.
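The article doesn't publish the researchers' pipeline, but the "primary label" idea can be sketched in a few lines of Python. This is a hypothetical illustration, assuming each comment is tagged by several human annotators; the function name and agreement threshold are ours, not from any cited study.

```python
from collections import Counter

def primary_labels(annotations, min_agreement=0.75):
    """Keep only comments whose annotators largely agree on a label.

    `annotations` maps a comment ID to the list of labels ("toxic" or
    "ok") assigned by individual annotators. A comment earns a primary
    label only when at least `min_agreement` of its annotators chose
    the same label; ambiguous comments are dropped rather than fed to
    the model as noisy training data.
    """
    labeled = {}
    for comment_id, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            labeled[comment_id] = label
    return labeled

# Example: three annotators per comment.
raw = {
    "c1": ["toxic", "toxic", "toxic"],  # unanimous -> kept
    "c2": ["toxic", "ok", "ok"],        # only 2/3 agree -> dropped
    "c3": ["ok", "ok", "ok"],           # unanimous -> kept
}
print(primary_labels(raw))  # {'c1': 'toxic', 'c3': 'ok'}
```

The resulting high-agreement set is what the models train on, which is exactly why context-dependent cases, the ones annotators disagree about, remain the hard part.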


What New Systems Are Now in Place?

Artificial intelligence technology uses advanced algorithms that can recognize and analyze natural language, and it does so much faster than humans can. Scientific American describes some of the systems and processes that can recognize hate speech online. Google's Jigsaw is a collaborative project that works to detect hateful speech on the internet.

Google, working in conjunction with Jigsaw, now offers an AI-powered content moderation system called Perspective, which processes approximately 500 million requests each day. Perspective works by scoring content against previous content that was judged toxic, and users can decide what kind of content they want to see along a toxicity continuum (a minimal sketch of querying Perspective appears at the end of this section).

The goal of social media platforms is to remove toxic speech immediately, before someone even has a chance to report it. This matters because hate speech online can contribute to self-destructive behaviors such as eating disorders or even suicide. There is also a connection between toxic speech and violent behavior: the Council on Foreign Relations reports several incidents of violence linked to toxic speech on various internet platforms.

Research Outreach states that there were over 4 billion people on the internet in 2018, more than half of all people living on the planet at that time. The challenge of removing toxic speech from social media will therefore continue, and social media AI will keep developing and advancing in the effort to keep internet platforms safe while reducing the amount of toxic speech.
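For readers who want to see the toxicity continuum in practice, below is a minimal Python sketch of querying the Perspective API, based on its public documentation. The API key, threshold value, and sample comment are placeholders; error handling, quotas, and the other attributes Perspective can score are omitted.

```python
import json
import urllib.request

# Placeholder key; real keys come from a Google Cloud project with
# the Perspective (Comment Analyzer) API enabled.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text):
    """Ask Perspective to score a comment's TOXICITY attribute (0.0-1.0)."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A reader (or platform) picks a point on the continuum: hide anything
# scoring above the chosen threshold, show everything else.
THRESHOLD = 0.8
score = toxicity_score("You are an idiot.")
print("hidden" if score > THRESHOLD else "shown", f"(score {score:.2f})")
```

Lowering `THRESHOLD` filters more aggressively; raising it tolerates rougher language, which is exactly the per-user choice the continuum is meant to enable.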
