Council for Responsible Social Media

The Council for Responsible Social Media is a nonpartisan group of concerned citizens, eminent Kenyans and civil society organizations coming together to defend our country from attack and to hold social media platforms to higher standards.

We are deeply concerned about the harm social media platforms pose to the health and safety of Kenyans and are stepping up to expose the challenges and guide a public conversation on sensible solutions.

As a priority, we are profoundly alarmed by the recent findings from the Institute for Strategic Dialogue (ISD) that Meta failed to stop the spread of hateful terrorist content in East Africa. It was revealed that known terrorist organizations have created a highly coordinated network to promote online propaganda and target Kenyans with extremist content, attempting to radicalise the youth and spread harmful narratives that undermine our elections, with specific calls to take up arms and reject human rights and our democracy.

This disturbing content violates Facebook’s terms of service and paints a vivid and grim picture of Facebook’s massive failure to regulate harmful content, contrary to its own safeguarding and content moderation policies, as was also highlighted in TIME’s “Inside Facebook’s African Sweatshop” story. This content moderation failure is one of the reasons why Facebook is also under scrutiny for its role in Ethiopia’s conflict, where the platform was used by militias to seed calls for violence against ethnic minorities. Social media platforms are operating unchecked in Africa. These companies prioritize content moderation in English, but African languages are woefully underserved when it comes to vetting mis- and disinformation. This must change.

This lack of effective content moderation poses substantial harm to social media users in Kenya and Africa at large. There is therefore an urgent need to demand that social media platforms pay more attention to Africa, put adequate content moderation policies in place and invest properly in platform safety. Kenyan authorities and regulators must prevent companies from profiting from harm and must hold them to greater accountability and transparency.

We call on the ICT Ministry and Communications Authority of Kenya to actively encourage companies to develop and publicly sign a self-regulatory Code of Practice on Disinformation. The Code should contain explicit public commitments to take down illegal, malicious and hateful content and actively mitigate the risks of disinformation and, perhaps most importantly, make data available to independent researchers to verify that the Code of Practice is being enforced by the companies.

A new force is needed to convene conversations with Kenyan policymakers, the peace and security sector, health practitioners, faith leaders, civil society, tech experts and the media, to elevate mainstream voices of concern and focus on achievable fixes to online extremism and the other harms brought by digital platforms. The Council for Responsible Social Media is a voluntary effort of concerned Kenyans who are committed to protecting our digital democracy, decency, and dignity.

We are standing up to Big Tech to demand better for Kenya.

Stand with us.

Council for Responsible Social Media Calls for a Dialogue on Big Tech Harms

August 8, 2022

On the eve of the Kenyan election – in the spirit of honouring our ability to come together to make a decision about who will represent us in our 13th Parliament – the Council for Responsible Social Media would like to express its deep concern about the harm social media platforms pose to the health and safety of Kenyans. As the current debate rages on the impact of social media on elections in Kenya, a watchful eye is needed throughout this critical election period to determine how foreign-operated tech companies have impacted our elections and influenced the future discourse in society.


It is essential that discussions about social media platform accountability are not based on a binary choice – to ban or not to ban Facebook, or any platform operating in Kenya. This calls for a measured response to the current debate. Kenya must lead an evidence-based discussion to shine a spotlight on the contributions of tech companies in the spreading of harmful and violative content on their platforms, while also giving equal weight to the consideration of reasonable solutions, informed by the input of all sectors of society.


The Council stands ready to guide a public conversation on sensible remedies that protect freedom of expression, while also recognizing that social media platforms operating in Kenya do not currently protect freedom of expression when they fail to properly curtail hate speech and terrorist content found on their platforms.


The Council would like to propose concrete steps that can be taken to strengthen information integrity and build resilience against the harmful spread of fake news, mis/disinformation, and terrorist information operations. This is a much-needed and long-overdue conversation in Kenya.

Critical points for future discourse must move beyond platitudes about free expression and should look for ways to meaningfully address:

● A Broken System. The lack of effective content moderation throughout the Kenyan election cycle has caused substantial harm to social media users in Kenya, to citizens seeking credible information about the election, and to Africa at large. As the recent Global Witness investigation into hate speech ads demonstrates, social media platforms must pay more attention to Africa, put adequate content moderation policies in place, and invest properly in platform safety. Kenyan authorities and regulators must prevent companies from profiting from harm and incentivize transparency and accountability in the future.

● Global inequities – different rules for the Global North vs. the Global South. The evidence throughout the election cycle (here, here, here and here) demonstrates that social media platforms are operating unchecked in Africa. These companies prioritize content moderation in English, but African languages are woefully underserved when it comes to vetting mis- and disinformation. The publication of the Facebook Papers in 2021 confirmed what activists, journalists, and civil society organizations in the Global South had been alleging for years: that Meta/Facebook chronically underinvests in non-Western countries, leaving millions of users even more exposed to disinformation, hate speech, and violent content. Outright bans, however, will not improve the commitment to accountability and transparency in the Global South.
● Terrorist content oversight. We remain profoundly alarmed by the recent findings from the Institute for Strategic Dialogue (ISD) that Meta failed to stop the spread of hateful terrorist content in East Africa. The research revealed that known terrorist organizations have created a highly coordinated network to promote online propaganda and target Kenyans with extremist content, attempting to radicalise the youth and spread harmful narratives that undermine our elections, with specific calls to take up arms and reject human rights and our democracy. It is difficult for the Council to believe Meta’s verbal commitments to uphold the integrity of information around the election and to enforce its own terms of service against hate speech when something as obvious as terrorist content went undetected by the company.
● Inadequate training and support for content moderators. Terrorist content violates Meta’s own terms of service and paints a vivid and grim picture of Facebook’s massive failure to regulate harmful content, contrary to its own safeguarding and content moderation policies, as was also highlighted in TIME’s “Inside Facebook’s African Sweatshop” story. This content moderation failure is one of the reasons why Facebook is also under scrutiny for its role in Ethiopia’s conflict, where the platform was used by militias to seed calls for violence against ethnic minorities.
● Longer-term solutions: consideration of a Code of Practice for Africa. With more input from the public and practitioners, the ICT Ministry and Communications Authority of Kenya could require companies to develop and publicly sign a self-regulatory Code of Practice on Disinformation. The Code could contain explicit public commitments to take down illegal, malicious and hateful content and actively mitigate the risks of disinformation and, importantly, make data available to independent researchers to verify that these requirements are enforced by the companies.
● Public inputs are essential. Any major changes in social media oversight enacted by the government or Parliament should be informed by public input gathered through a process of engagement in advance. A new force is needed to convene conversations with Kenyan policymakers, the peace and security sector, health practitioners, faith leaders, civil society, tech experts and the media, to elevate mainstream voices of concern and focus on achievable fixes to online extremism and the other harms brought by digital platforms.