Statement on Meta Cases in Kenya
May 22nd, 2023
Time to end structural racism by Big Tech in Africa: supporting court cases against Meta in Nairobi.
The rising cases against Meta’s Facebook in the Kenyan courts point to a growing realization of the
magnitude of social media harms perpetrated by an ungoverned, unregulated and underinvested
social media space in Kenya and Africa at large characterized by discriminative algorithms, market
dominance abuse and exploitation of workers.
Big Tech companies have generally operated under two principles: first, that platforms are self-regulating and may decide what content to keep and what to take down, free from government and other external oversight; and second, that they are benign actors who cannot be held liable for what their users post online, which shields them from any legal liability over content appearing on their platforms.
This has given rise to the proliferation of harmful online content, including hate speech, extremist material and genocidal incitement, all of which have contributed to the viral spread of hate and violence, especially in fragile democracies. Viral content increases user engagement and thus the platforms' bottom line, and is therefore seen as good for business.
In recognition of the dangers and harms that emanate from this unregulated space, there are
various efforts globally to hold the platforms to account and ensure that their products do not
violate human rights. These efforts include legislation, litigation, regulation, advocacy, education
and awareness raising. At every stage, however, there is pushback from Big Tech, especially against measures like legislation and litigation that threaten their ability to operate in an ungoverned environment and, by extension, their profits.
A big part of their countermeasures is to insist on self-regulation. It is, however, increasingly evident that self-regulation does not work, more so in Africa, for several reasons, including gross underinvestment as part of structural racism.
In Kenya, three landmark cases have been filed against Meta in the last twelve months. The cases
revolve around the question of whether Meta can be held liable under the Kenyan legal system when it causes harm in the country. Two of the cases hinge on the poor working conditions of content moderators, accusing Meta of forced labor, human trafficking, and union busting in the country. Meta is accused of subjecting an understaffed content moderation workforce to inhumane working conditions and menial pay, resulting in adverse mental health conditions. The third case, filed by two Ethiopians and
Katiba Institute, accuses Meta of failing to invest adequately in its safety systems and failing to act
when threats against people's lives were made on its platform. As a result, its social media algorithms amplified hateful and inciting content, leading to attacks and deaths during Ethiopia's war in the Tigray region.
At the heart of the three cases in Nairobi is the question of Big Tech accountability in Africa. Global
tech companies accord people and users unequal rights and protection depending on their status,
power, nationality and language through a system of structural racism that dictates how platforms
allocate resources in their global operations. Within this ranking and reminiscent of the colonial
project, the African continent is at the end of the line and often viewed as a fringe market for
extraction. The Nairobi Meta hub, the centre of content moderation in Eastern and Southern Africa, is inadvertently the home of this neo-colonial exploitation, enabling the disproportionate exportation of harm to Africans.
To enforce Big Tech accountability in Kenya and Africa at large and guarantee safety and protection
of social media users across the Continent, the Council for Responsible Social Media
calls for the following:
- Support by government for fair wages and working conditions consistent with national and global standards through, inter alia, backing the unionization of content moderators in Kenya
- An end to self-regulation through more transparency, inclusion and public oversight of the
algorithms used by Big Tech
- Social media platforms must pay more attention to Africa, put adequate content moderation policies in place, and properly invest in platform safety
- For civil society, regulators and other stakeholders to ensure that Big Tech companies abide
by established laws in Kenya and global normative frameworks
- Greater public conversation and awareness of the role and impact of Big Tech in the lives of users and Kenyans at large
- The Ministry of Information, Communication and the Digital Economy in collaboration with
the Communications Authority of Kenya and other relevant bodies to require the social
media platforms to publicly commit through a Code of Practice to take down illegal,
malicious and hateful content and actively mitigate the risks of disinformation
- Social media companies must make data available to independent researchers to verify that safety requirements are being enforced effectively
- Continued strengthening of the regulatory environment both in Kenya and at the regional level, through development of an Africa-wide Code of Practice for Big Tech
The current cases against Meta in Nairobi offer an opportune moment for Kenyans and Africans at
large to engage and start reshaping the digital ecosystem in a way that demands and enhances
human rights and dignity and rebuffs the global marginalization of Africa.
The Council for Responsible Social Media is a nonpartisan group of concerned citizens, eminent
Kenyans and civil society organizations working together to ensure the protection, safety and dignity
of social media users in Kenya through among other pathways, holding Big Tech accountable.