Daniel Motaung says he developed PTSD while working as a content moderator for Facebook. He sued Meta, the parent company of Facebook, in Kenya because the regional office where he worked was based in Nairobi.
Meta objected, arguing that Kenyan courts have no jurisdiction to determine the case.
However, earlier this month, the Employment and Labour Relations Court (ELRC) saw it differently, ruling that Facebook’s parent firm Meta can be sued in Kenya.
“Since the petition has raised certain actual issues that are yet to be determined, it would be inopportune for the country to strike out the two respondents from the matter,” Judge Jacob Gakeri said in his ruling.
The lawsuit by Motaung was also filed against Meta’s local outsourcing company, Sama. It seeks financial compensation, orders that outsourced moderators receive the same healthcare and pay scale as Meta employees and that their unionization rights be protected, and an independent human rights audit of the office.
This decision from Kenya’s Employment and Labour Relations Court could affect how Meta works with content moderators globally. The U.S. company works with thousands of moderators around the world who are tasked with reviewing graphic content posted on its platform.
Meta had argued that the Kenyan court had no jurisdiction because the company is not based in the African country, and that it should therefore be struck from the case.
Not the first time
Meta has faced lawsuits over content moderation before.
In 2021, a California judge approved a settlement worth Kes. 10.7 billion between Facebook and more than 10,000 content moderators. The moderators had accused the company of failing to protect them from psychological injuries resulting from their exposure to graphic and violent imagery.
Meta is also facing another lawsuit in Kenya. In December, two Ethiopian researchers and a Kenyan rights group filed a suit accusing Meta of letting violent and hateful posts from Ethiopia flourish on Facebook. These posts, they allege, inflamed tensions during the Ethiopian civil war.
Meta has said hate speech and incitement to violence are against the rules of Facebook and Instagram, another of its platforms. It also said it was investing heavily to remove this type of content.
Meta’s local outsourcing company Sama has said it would no longer provide content moderation services for Meta.
Regulating the algorithm
Experts say that with algorithms now responsible for deciding what gets seen by whom, the question is no longer whether platforms are responsible for the content they host, but what responsibility they bear for promoting or inhibiting its circulation.
Governments worldwide are also expected to kickstart debates on how to regulate social media algorithms. This is mainly because social media algorithms don’t operate in isolation: reducing them to mathematical abstractions obscures the human networks in which they are embedded, including the content moderators platforms employ to take down harmful content.
Meta will also have a strong financial incentive to invest in systems that better prevent the spread of hate, violence and discrimination in markets where it has been accused of overlooking the protection of users.
Data privacy experts also see this court decision opening the floodgates for other technology companies to be sued in Kenya and other countries. This could include OpenAI which, according to a recent exposé by TIME magazine, paid Kenyan workers less than USD 2 an hour to help train its artificial intelligence, work that often required them to review explicit and traumatic material.
[Top photo collage: Daniel Motaung (Foxglove) and Meta (Meta.com)]