Algorithmic Moderation And Foreign Interference: The Case Of Pro-Sikh Activism On Social Media

Keywords: Algorithmic Bias, Social Media Regulation, Foreign Interference, Pro-Sikh Activism, Content Moderation

Sunny Pabla

4/4/2025

SHORT BIO

Sunny Pabla, a Windsor, Ontario native, holds a Bachelor of Commerce and a Master of Business Administration. He is also a restaurant owner. A dedicated advocate, Sunny is a second-year law student with aspirations in litigation, combining a strong foundation in business and leadership with a history of community involvement and experience across diverse organizations.

Introduction

Social media platforms amplify marginalized voices. However, these platforms are not neutral; their algorithmic systems, shaped by external and state-driven influences, are often manipulated to suppress specific voices under the guise of engagement or neutrality. Algorithmic moderation can amplify biases, marginalize communities, and silence dissent, particularly when state actors manipulate and exploit these systems to stifle activism. Furthermore, the weaponization of artificial intelligence in geopolitical conflicts has emerged as a critical concern, demonstrating how these tools can be leveraged to suppress political discourse and advocacy globally.

In this short piece, I explore the suppression of pro-Sikh activism on social media, influenced by alleged pro-India interference, as a case study of how algorithmic governance can undermine democratic discourse. I examine the intersection of data, artificial intelligence, and the law, highlighting the role of algorithmic bias, foreign interference, and the strategic use of AI in stifling political expression. Arguing that current regulatory frameworks fail to address these challenges, I propose legal solutions to promote transparency, accountability, and equitable governance in digital spaces. These solutions are essential to safeguarding freedom of speech and ensuring inclusive technological progress.

Historical Context and Systemic Challenges – Synopsis

The Sikh community has long experienced systemic challenges, both historically and in the digital age. Allegations of pro-India interference reveal a troubling trend in which Sikh activists advocating for human rights and Khalistan, a proposed separatist state, are labeled as extremists. This manipulation of digital platforms underscores broader concerns about state actors exploiting algorithmic systems to suppress dissent and assert control.

Historical events like Operation Blue Star in 1984, when the Indian Army stormed Harmandir Sahib, a Sikh Gurdwara, continue to resonate with the Sikh diaspora. Viewed as an attack on religious identity, this event catalyzed global activism and a renewed focus on Sikh sovereignty. These historical grievances are amplified in the digital age, where platforms suppress dissent while amplifying state-sponsored narratives. Further exacerbating this issue, intelligence reports from the 2021 Canadian federal election indicate that India clandestinely provided financial support to preferred candidates, allegedly to sway narratives in its favor. Such activities exemplify how foreign interference intersects with algorithmic biases, where disinformation campaigns targeting Sikh advocacy are amplified while genuine activism is suppressed.

Digital Censorship, Suppression, and Online Targeting

Since 2020, hundreds of Sikhs have been detained for online activities, such as sharing posts about Khalistan, often based on algorithmically flagged content lacking substantive evidence. Many detainees have faced severe mistreatment, including custodial sexual violence, as exemplified by the case of Nodeep Kaur. Kaur, a young Dalit labor rights activist, was arrested during the Farmers’ Protests and reported being beaten and sexually assaulted in custody. Her case drew international attention to the use of excessive force and systemic targeting of activists, both online and offline.

During the Farmers’ Protests, hashtags like #Sikh and #Sikhism were systematically suppressed, while pro-government narratives were amplified. This pattern of digital suppression aligns with allegations of India’s broader interference tactics, including efforts to influence Canadian electoral outcomes through proxies and disinformation campaigns. Such activities are not confined to Indian borders but extend to platforms and narratives globally, disproportionately affecting the Sikh diaspora. An article from Baaz News by Jasmeen Bassi highlights that "Sikh ground reporters and kisan mazdoor ekta movement affiliated accounts such as @iamparmjit, @sikhsiyasat, @PunYaab, @panth_punjab, @Kisanektamorcha, @Tractor2twitr, and @kisaanivichaar were suspended by Twitter en par with what seems to be a growing pattern of Sikh censorship on social media." These patterns highlight the role of algorithmic bias in marginalizing vulnerable communities.

Transnational Censorship and Suppression

Reports from AP News and Le Monde reveal how India pressured platforms to silence Sikh voices globally. This interference aligns with a broader pattern of censorship and suppression targeting Sikh activists and organizations. Rupi Kaur, a globally recognized poet and activist, criticized Twitter’s role in this suppression during the Farmers’ Protests in India. On January 26, 2021, Kaur highlighted the suspension of accounts documenting the protests, including @SikhSiyasat, @Kisanektamorcha, and @IamParmjit, and questioned Twitter’s complicity in amplifying state-backed disinformation while silencing dissent. These suspensions occurred amid an internet ban in India, further restricting activists' ability to document and engage with ongoing protests.

Recent reports also shed light on how India’s censorship tactics extend beyond its borders. For example, a Canadian Sikh advocacy group, the World Sikh Organization (“WSO”), accused Twitter of censoring its posts at the request of the Indian government. According to the National Post, these takedowns reveal how India's influence over global tech platforms threatens the ability of diaspora communities to freely advocate for human rights and political causes. Similarly, CBC News reported that Twitter blocked content from Canadian Sikh organizations and public figures, including tweets from Canadian poet and author Rupi Kaur and New Democratic Party leader Jagmeet Singh, in compliance with Indian government requests. The targeted removal of posts, such as those amplifying dissent or criticizing state policies, highlights the extraterritorial reach of state-sponsored censorship, which extends even to nations like Canada that have stronger protections for free expression. Such actions raise significant concerns about the complicity of platforms in suppressing freedom of expression, even in jurisdictions where these rights are constitutionally enshrined. The 2023 assassination of Canadian Sikh activist Hardeep Singh Nijjar further underscores the extent of state interference and transnational repression.

Algorithmic Bias, Hashtag Censorship, and State Interference

Since 2020, numerous Sikh activists and media accounts have faced similar treatment, with many suspended or censored under the guise of legal compliance. Jas UK Singh, a prominent advocate, received an official notice from Twitter about the withholding of his account in accordance with India’s Information Technology Act, 2000. This exemplifies how platforms enforce local government requests to suppress dissenting voices. Furthermore, hashtags like #FreeJaggiNow and #NeverForget84, which amplify calls for justice and remembrance of significant Sikh historical events, were also censored, illustrating the systematic targeting of Sikh advocacy online.

Kaur’s critique stresses the disproportionate impact of algorithmic moderation and content takedowns on Sikh activists, raising significant concerns about platform accountability and state influence over global tech companies. These actions not only stifle legitimate activism but also amplify broader questions about sovereignty and the urgent need for international governance to address foreign interference in digital spaces.

The Suppression of Hardeep Singh Nijjar’s Advocacy

Nijjar experienced targeted suppression online, where pro-India agents reportedly exploited algorithms to brand his advocacy as extremist. A WSO report states that this tactic reflects a broader trend of criminalizing dissent through algorithmic bias. Journalist Rana Ayyub has also highlighted how state mechanisms in India systematically label dissenting voices, including activists and journalists, as extremist or anti-national to justify their suppression. Her analysis underscores the broader convergence of algorithmic bias and state-led efforts to marginalize advocacy and stifle political discourse. This aligns with global digital authoritarianism, where state actors manipulate algorithms on various platforms to suppress political expression.

Jaskaran Sandhu, in his article "India Is Now The World’s Largest Electoral Autocracy," delves into how legal frameworks like the Unlawful Activities (Prevention) Act (UAPA) have been systematically weaponized to silence dissent. He highlights that the UAPA’s broad definitions and provisions enable the Indian government to arbitrarily designate individuals as terrorists, bypassing judicial processes and criminalizing legitimate activism. This legal framework complements digital strategies like algorithmic bias in targeting activists like Nijjar, showcasing the intersection of state law and technology in suppressing political dissent.

In the recently released Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions report (January 28, 2025), The Honourable Marie-Josée Hogue stated:

"India also uses disinformation as a key form of foreign interference against Canada, a tactic likely to be used more often in the future. Until recently, Canada was trying to improve its bilateral relationship with India. However, the assassination of Hardeep Singh Nijjar, coupled with credible allegations of a potential link between agents of the Government of India and Mr. Nijjar’s death, derailed those efforts. India has repeatedly denied these allegations. In October 2024, Canada expelled six Indian diplomats and consular officials in reaction to a targeted campaign against Canadian citizens by agents linked to the Government of India."

This statement emphasizes the growing role of disinformation as a strategic tool of foreign interference, particularly in the context of deteriorating diplomatic relations. The same report further reveals India’s influence extending beyond its borders, including allegations of clandestine financial support for preferred candidates during Canada’s 2021 federal election. These actions point to a coordinated effort to manipulate both digital and political landscapes in tandem. By shaping electoral narratives while leveraging algorithmic tools to suppress dissent—such as Nijjar’s advocacy—India exemplifies the intersection of foreign interference tactics and digital suppression, which collectively undermine democratic principles on a global scale.

This convergence highlights the pressing need for robust policies that safeguard activism, promote algorithmic transparency, and resist undue influence from state actors seeking to exploit digital platforms for geopolitical advantage.

The Role of Bot Networks

Studies have revealed bot networks linked to pro-India actors that flood social media with anti-Sikh propaganda, using hashtags like #RealSikhsAgainstKhalistanis to delegitimize activism during the Farmers’ Protests. These campaigns amplify false narratives, drowning out legitimate discourse and demonstrating how AI tools can be weaponized in geopolitical conflicts. This emphasizes the urgent need for algorithmic audits and public accountability.
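
One signal researchers commonly use to surface such bot networks is near-simultaneous posting of identical text by many distinct accounts. The sketch below illustrates that idea under simplifying assumptions; the account names, posts, time window, and threshold are all invented for illustration and do not reflect any platform's actual detection system.

```python
# Hypothetical sketch of a coordination signal: many distinct accounts
# posting the same text within a short time window. All data is invented.
from collections import defaultdict

def coordinated_groups(posts, window_secs=60, min_accounts=3):
    """Bucket posts by (text, time window); return buckets where at least
    min_accounts distinct accounts posted identical text near-simultaneously."""
    buckets = defaultdict(set)
    for account, text, timestamp in posts:
        buckets[(text, timestamp // window_secs)].add(account)
    return [key for key, accounts in buckets.items()
            if len(accounts) >= min_accounts]

posts = [
    ("bot_a", "#RealSikhsAgainstKhalistanis stop the protests", 100),
    ("bot_b", "#RealSikhsAgainstKhalistanis stop the protests", 110),
    ("bot_c", "#RealSikhsAgainstKhalistanis stop the protests", 115),
    ("user_x", "Covering the farmers' protest live from Delhi", 112),
]

suspicious = coordinated_groups(posts)
print(suspicious)  # the three-account burst is flagged; the lone reporter is not
```

Real attribution requires far richer evidence (account creation patterns, network analysis, infrastructure overlap), which is why independent audits with platform data access matter.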

India’s activities primarily target Canada’s Sikh diaspora of approximately 800,000 people, promoting a pro-India and anti-Khalistan narrative. According to the Public Inquiry Into Foreign Interference report, these actions are consistent with classified evidence linking violent criminal activity, including homicides and extortion, to agents of the Indian government. Furthermore, the report identifies India as an emerging cyber threat actor, underscoring the sophisticated digital tactics employed to suppress dissent. Bot networks serve as a key component of this strategy, flooding platforms with disinformation to delegitimize Sikh activism while amplifying propaganda. This digital repression silences legitimate voices within the diaspora, illustrating the intersection of algorithmic exploitation and state-sponsored transnational repression and underscoring the critical need for enhanced accountability measures to mitigate the misuse of AI-driven tools in geopolitical conflicts.


Platform Inconsistencies and Global Patterns

Social media platforms often inconsistently enforce moderation policies, disproportionately flagging pro-Sikh content while permitting harmful propaganda. During the Farmers’ Protests, platforms like Twitter and Instagram blocked Sikh-related hashtags but allowed inflammatory ones like #Shoot to proliferate. Such disparities erode trust and disproportionately harm marginalized communities.

These challenges mirror global examples. In Myanmar, Facebook’s algorithms amplified content inciting violence against the Rohingya minority, contributing to atrocities. Similarly, in Russia, digital platforms suppressed LGBTQ+ activism under state pressure. These cases highlight algorithmic failures and reinforce the need for transparency and accountability in digital governance.

Ethical and Legal Concerns

Algorithmic moderation and foreign interference pose significant challenges, forcing social media platforms to balance free speech with content regulation. The sheer volume of daily content makes comprehensive moderation difficult, and automated systems frequently misclassify content, especially in culturally nuanced contexts. For instance, terms like “Khalistan” are often flagged as extremist in some regions while representing legitimate political discourse in others. Additionally, state-sponsored disinformation campaigns, as noted in the Public Inquiry Into Foreign Interference report, exploit platform algorithms to amplify divisive narratives while suppressing dissent. These automated moderation systems struggle to differentiate between harmful content and political advocacy, exacerbating the suppression of marginalized voices.
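
The misclassification problem described above can be shown with a deliberately naive sketch. The blocklist, sample posts, and function below are invented for illustration; real platform classifiers are far more sophisticated, but context-blind term matching produces exactly this failure mode.

```python
# Illustrative toy example (not any platform's actual system): a naive
# keyword filter flags every post containing a listed term, regardless
# of context, sweeping legitimate political discourse up alongside
# genuinely harmful content.

FLAGGED_TERMS = {"khalistan"}  # hypothetical blocklist for illustration

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, context-blind."""
    words = post.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

advocacy = "Peaceful rally today discussing the Khalistan referendum"
scholarship = "Historians debate the Khalistan movement of the 1980s"

# Both posts are flagged identically: the filter cannot distinguish
# advocacy or scholarship from incitement.
print(naive_flag(advocacy), naive_flag(scholarship))
```

Neither sample post resembles incitement, yet both are flagged; without contextual signals, the presence of the term alone drives the decision.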

Platforms also face conflicting legal obligations across jurisdictions, particularly as governments pressure them to restrict content in ways that may contradict international human rights standards. India’s use of anti-terror laws to flag pro-Sikh content exemplifies this tension, creating ambiguity in developing consistent global moderation policies. Further complicating platform accountability, concerns have arisen over personal relationships between tech moguls and political figures. Elon Musk’s close ties with Donald Trump, for example, have drawn scrutiny over content prioritization, particularly after X (formerly Twitter) admitted to forcing Musk’s tweets onto users’ timelines. Musk’s frequent amplification of misinformation further illustrates the risks associated with unchecked platform control. While unregulated free speech can spread misinformation and incite violence, over-censorship disproportionately affects marginalized communities.

In particular, platforms like Facebook and YouTube have faced criticism for both failing to regulate misinformation and over-censoring legitimate discussions, as seen during the COVID-19 pandemic. Meanwhile, government pressures continue to influence moderation policies, as evidenced by Jack Dorsey’s claim that India threatened to shut down Twitter during the Farmers’ Protests. These tensions illustrate the urgent need for independent oversight mechanisms to ensure platforms do not compromise free speech for state compliance.

Additionally, the complexity of categorizing activism further complicates moderation efforts. As Jasjit Singh notes, Sikh activism spans humanitarian efforts, diaspora nationalism, and political advocacy, yet simplistic categorizations risk mislabeling legitimate advocacy as extremism. Nuanced, intersectional approaches are essential to avoid silencing legitimate voices. While these challenges are significant, they do not absolve platforms of responsibility. Algorithmic audits, culturally informed policies, and independent oversight mechanisms are necessary to balance moderation with free speech while ensuring international collaboration with human rights regulators and civil society.

Algorithmic Bias and Inequality

Algorithms are not neutral; they reflect biases embedded in their training data. Terms like “Khalistan” are often flagged as extremist, silencing legitimate voices while leaving harmful content unaddressed. Shoshana Zuboff’s The Age of Surveillance Capitalism highlights how platforms amplify state narratives while neglecting minority protections. A parallel can be drawn to the Ethiopian conflict, where Facebook’s algorithms failed to curb hate speech and incitement to violence against ethnic Tigrayans. Reports revealed that inflammatory posts, many of which were left unchecked, contributed to an environment of hostility and violence, underlining the detrimental role of algorithmic bias in exacerbating real-world harm.

Algorithms frequently lack contextual understanding, leading to over-censorship. Tumblr’s 2018 content ban, for example, mistakenly flagged large amounts of benign material, illustrating the limitations of automated moderation without human oversight. Such failures are especially damaging in politically sensitive contexts like Sikh activism.

Freedom of Speech

The suppression of pro-Sikh activism undermines fundamental freedoms of speech and expression. The U.S. State Department’s 2023 Human Rights Report highlights India’s use of platform censorship to stifle dissent. Diaspora nationalism, a vital form of cultural preservation, is often mischaracterized as extremism, further marginalizing these voices. The Sikh Coalition’s 2024 report further emphasizes how Indian authorities have weaponized digital platforms to silence dissent, illustrating the broader implications of algorithmic moderation that fails to distinguish legitimate advocacy from extremism.

Proposed Legal Solutions

Transparency in Algorithmic Moderation

Social media platforms must prioritize transparency in their moderation practices. The UN Guiding Principles on Business and Human Rights (“UNGPs”) advocate for mechanisms like independent audits and public disclosure of moderation policies to ensure fairness and equity. While tools like Facebook’s Ad Library and Twitter’s transparency reports are steps in the right direction, they often lack meaningful stakeholder input and fail to address cultural and geopolitical biases. Collaborating with affected Sikh communities can help platforms develop culturally informed policies and prevent misrepresentation.
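
As a rough illustration of what an independent audit could measure, the sketch below compares flag rates between two content samples and computes a disparity ratio. The data and the threshold question are simplifying assumptions for illustration, not any regulator's actual methodology.

```python
# Hedged sketch of a disparity check an independent audit might run:
# compare the rate at which two groups of content are flagged.
# The sample data below is invented for illustration.

def flag_rate(decisions):
    """Fraction of moderation decisions that were 'flagged'."""
    return sum(1 for d in decisions if d == "flagged") / len(decisions)

def disparity_ratio(group_a, group_b):
    """Ratio of flag rates between two content groups; a ratio far
    above 1.0 suggests one group is disproportionately moderated."""
    return flag_rate(group_a) / flag_rate(group_b)

# Invented audit sample: decisions on posts using a minority-community
# hashtag versus a general-politics hashtag.
sikh_advocacy = ["flagged"] * 40 + ["allowed"] * 60      # 40% flagged
general_politics = ["flagged"] * 10 + ["allowed"] * 90   # 10% flagged

ratio = disparity_ratio(sikh_advocacy, general_politics)
print(f"disparity ratio: {ratio:.1f}")
```

A real audit would also need to control for whether the underlying content actually violates policy at different rates; a raw disparity is a starting point for scrutiny, not proof of bias on its own.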

Independent Oversight Bodies

Governments should establish independent oversight bodies to monitor content moderation and address algorithmic bias. These bodies should include diverse representatives from civil society, academia, and marginalized communities to ensure equitable governance. The EU’s Digital Services Act provides templates for structuring such oversight. Additionally, public databases of moderation precedents, as suggested by Common, could improve transparency and procedural fairness while fostering user trust. Oversight mechanisms must operate within international frameworks like the International Covenant on Civil and Political Rights (“ICCPR”) to align moderation practices with global human rights standards.

Strengthening Data Protection Laws

Data protection laws are critical to reducing the misuse of social media platforms. By restricting access to sensitive user data, robust frameworks can limit foreign interference and enhance privacy for activists and marginalized communities. Strengthened legislation can prevent state actors from exploiting platforms to target dissenting voices.

Conclusion

Social media platforms must transition from tools of digital repression to facilitators of democratic discourse. The suppression of pro-Sikh activism highlights the broader challenges of algorithmic governance, emphasizing the need for transparency, accountability, and equitable solutions. Addressing these challenges is essential to ensuring technology fosters inclusion and equity rather than exacerbating systemic inequities.

As AI technologies evolve, immediate action is critical to prevent these systems from silencing dissent and undermining democratic principles. Although not explored in exhaustive detail, the measures proposed in this piece, namely algorithmic audits, independent oversight bodies, and binding international treaties, offer a practical and scalable framework for creating fairer digital ecosystems.

This is not just a policy concern but a pressing human rights issue. Protecting freedom of expression, particularly for marginalized communities, is vital to upholding democracy in a digital age. Achieving this requires global collaboration, ethical governance, and steadfast accountability from both states and platforms.

The future of digital spaces depends on balancing technological innovation with justice and equity. By implementing the measures proposed in this piece, the international community can ensure that technology amplifies the voices of the marginalized and becomes a tool for progress rather than repression.