Algorithms have the potential to amplify and spread extremism; a multipronged approach that combines technological innovation with regulatory checks is therefore essential.
Algorithms are quickly becoming a keystone of content distribution and user engagement on social media. While these systems are designed to enhance the user’s experience and engagement, they often unintentionally amplify extremist propaganda and polarising narratives. This amplification can exacerbate societal divisions, promote disinformation, and bolster the influence of extremist groups. The phenomenon is known as “algorithmic radicalisation”: social media platforms coax users into ideological rabbit holes and shape their opinions through selective content curation.
This article explores the mechanisms behind algorithmic amplification, its role in spreading extremist narratives, and the challenges of countering extremism online.
Understanding algorithmic amplification
Social media algorithms are automated rules that analyse user behaviour and rank content based on engagement metrics such as likes, comments, shares, and time spent on a post. They also use machine learning models to make personalised recommendations. This process works as an amplifier: posts with higher engagement or more shares tend to gain popularity quickly, and viral trends sometimes emerge as a result. Algorithms may also create echo chambers when they repeatedly surface similar viewpoints to keep users engaged.
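To make the mechanism concrete, the sketch below shows a minimal engagement-weighted ranking. The weights, field names, and recency penalty are illustrative assumptions and do not reflect any platform’s actual system.

```python
# Illustrative sketch of engagement-weighted ranking (not any platform's real code).
# The weights, field names, and recency penalty are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    shares: int
    hours_old: float

def engagement_score(post: Post) -> float:
    # Shares are weighted more heavily than likes here because they push
    # content to new audiences; the exact weights are invented.
    raw = 1.0 * post.likes + 2.0 * post.comments + 3.0 * post.shares
    # Newer posts are favoured, so fast-rising content quickly dominates feeds.
    return raw / (1.0 + post.hours_old)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feedback loop: high-scoring posts are shown more, earn more engagement,
    # and score even higher on the next ranking pass.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because the highest-scoring posts are shown to more users and therefore accumulate more engagement, re-running such a ranking reinforces whatever already performs well, which is the feedback loop described above.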
Hashtags are another vital factor. They serve as keywords to classify content, making it discoverable to a broader audience. When a hashtag is used, it assists the algorithm in recognising the topic of a post and linking it to users searching for or following that hashtag. Posts with trending or niche-specific hashtags are prioritised, with high engagement boosting their visibility further. Algorithms and hashtags amplify content by targeting specific audiences and promoting posts aligned with the user’s interests and behaviours.
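The discovery role of hashtags can be pictured as a simple inverted index that maps tags to posts; the structure below is a deliberate simplification, not a real platform API.

```python
# Illustrative sketch of hashtag-based discovery (a simplification, not a real platform API).
from collections import defaultdict

# Inverted index: hashtag -> posts that carry it.
index: dict[str, list[str]] = defaultdict(list)

def publish(post_id: str, hashtags: list[str]) -> None:
    # Each hashtag files the post under a topic the algorithm can match against.
    for tag in hashtags:
        index[tag.lower()].append(post_id)

def recommend_for(followed_hashtags: list[str]) -> list[str]:
    # Users searching for or following a hashtag are shown posts filed under it,
    # which is how trending or niche-specific tags widen a post's reach.
    results: list[str] = []
    for tag in followed_hashtags:
        results.extend(index.get(tag.lower(), []))
    return results
```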
The algorithmic echo of propaganda and extremism
Algorithms are at the core of social media platforms such as YouTube, TikTok, Facebook, X (formerly Twitter), and Instagram, and they tailor what users see according to their digital interactions, behaviours, preferences, and engagement. Algorithms usually promote emotionally provocative or controversial material by optimising for metrics such as likes and shares, creating feedback loops that amplify polarising narratives. In one of his studies, academic Joe Burton noted that such algorithmic biases can heighten engagement through fear, anger, or outrage, inadvertently giving rise to extremist ideologies and making users vulnerable to radical content.
Two extremist groups that have effectively exploited these platforms to spread propaganda and recruit members are the Islamic State (IS) and al-Qaeda. IS, for example, uses X and Telegram to foster a sense of belonging among its followers, often publishing emotionally provocative content aimed at radicalising people. Al-Qaeda, meanwhile, uses YouTube to deliver speeches and training, embedding encrypted links in its videos. At the other end of the spectrum, far-right elements have exploited TikTok: its “For You” page frequently recommends far-right material to users, drawing them into algorithmic rabbit holes that amplify extremist ideologies.
Algorithmic exploitation is not confined to terrorism. It also fuels disinformation during elections, at times resulting in violence and polarisation. Such examples underline how algorithms, by prioritising engagement over accuracy, facilitate the spread of disinformation, polarisation, and extremist narratives, making them pivotal tools in modern cyber and ideological warfare. Extremist strategies often align with how algorithms optimise engagement by pushing emotionally charged content. By creating “filter bubbles”, algorithms expose users to ideologies matching their existing biases, reinforcing extremist beliefs.
Mitigating risks: Tech solutions and policy steps
The opacity of social media algorithms poses a particular challenge in addressing extremist content. They work as “black boxes” in which even developers do not fully understand why certain content is recommended. TikTok’s “For You” page, for instance, has been flagged for surfacing sensational and extremist material, but its opaque operational mechanics limit efforts to mitigate algorithmic bias. Extremist groups exploit this opacity, recasting their content in euphemisms or symbols to evade detection systems. Moreover, algorithms deployed globally without adaptation to local sociocultural contexts worsen the problem.
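A toy example shows why euphemisms and character substitution defeat simple keyword filters; the blocked terms and sample phrases below are invented placeholders, not a real moderation list.

```python
# Toy example of why simple keyword filters are easy to evade.
# The blocked terms and sample phrases are placeholders, not a real moderation list.
BLOCKED_TERMS = {"attack", "recruit"}

def flags_content(text: str) -> bool:
    # Split the text into words and check for any exact match with a blocked term.
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)

print(flags_content("Join us and recruit for the attack"))    # True: exact terms are caught
print(flags_content("Join us and r3cruit for the 'picnic'"))  # False: leetspeak and a
# coded euphemism slip past the filter, which is why platforms turn to machine-learning
# models and human review, and why coded language keeps reappearing.
```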
Balancing free speech and effective content moderation is a complex task. Laws such as Germany’s Network Enforcement Act (NetzDG), which aims to curtail online hate speech, force platforms to remove harmful content within tight deadlines. Extremist groups exploit the weaknesses in these balancing acts by crafting content that stays just within legal boundaries, allowing them to continue disseminating divisive ideologies.
The algorithmic amplification of extremism has been curbed in part through Artificial Intelligence (AI)-driven moderation, such as YouTube’s 2023 machine-learning model, which reduced flagged extremist videos by 30 percent. Nonetheless, coded language and satire have been used to avoid detection, notably by IS and al-Qaeda. Counter-narrative strategies, such as Instagram redirecting searches to tolerance-promoting content, offer constructive alternatives.
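A counter-narrative redirect can be sketched as a lookup that intercepts watch-listed queries; the trigger terms and resource URL below are invented placeholders rather than any platform’s actual implementation.

```python
# Sketch of a counter-narrative search redirect, loosely modelled on redirect approaches.
# The trigger terms and resource URL are invented placeholders.
REDIRECT_TOPICS = {"extremist_slogan", "banned_group_name"}          # placeholder watch list
COUNTER_NARRATIVE_URL = "https://example.org/tolerance-resources"    # placeholder resource

def handle_search(query: str) -> str:
    # If a query matches a watch-listed topic, surface counter-content
    # instead of the organic results.
    if query.lower().replace(" ", "_") in REDIRECT_TOPICS:
        return COUNTER_NARRATIVE_URL
    return f"results for: {query}"

print(handle_search("banned group name"))  # redirected to counter-narrative resources
```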
Initiatives by India’s Ministry of Electronics and Information Technology (MeitY) have flagged over 9,845 URLs hosting harmful content. Under the IT Rules, 2021, the Government of India (GoI) regulates social media, digital news, and over-the-top (OTT) platforms. These rules enable tracing the first originator of a piece of content and require the removal of flagged content within 36 hours. Even so, continued innovation and cooperation remain essential to combat extremism effectively.
Tackling algorithmic radicalisation
In addition to the steps mentioned above, more can be done to facilitate algorithmic transparency and check the spread of extremist content. Governments, for instance, should conduct public awareness drives to help users identify propaganda and avoid engaging with extremist content.
Conclusion
The algorithmic amplification of propaganda and extremist narratives has emerged as one of the crucial challenges of the digital age, with profound implications for social cohesion, political stability, and public safety. A multipronged approach that combines technological innovation with regulatory checks is essential. While YouTube, TikTok, and Facebook have been refining their algorithms, these efforts remain mired in challenges related to transparency, cultural sensitivity, and striking the right balance between moderation and free speech. Achieving this balance will require the combined efforts of governments, tech companies, civil society, and users. Public awareness drives can help users identify propaganda and avoid engaging with extremist content; the UK’s Online Safety Bill, for example, contains provisions for public education initiatives to improve online media literacy.
Therefore, it is plausible that the risks and challenges posed by the algorithmic spread of extremism can be mitigated through sustained public-private partnership.
Soumya Awasthi is a Fellow with the Centre for Security, Strategy and Technology at the Observer Research Foundation.