Facebook has released a statement on the deletion of groups, revealing details that had previously been hidden from the public.
The social media platform revealed that, in addition to 150 moderators proficient in 30 languages, it uses Artificial Intelligence (AI) to keep harmful content off the site, hence the deletion of suspicious groups.
The 150 moderators are drawn from various professions, including academic experts on counterterrorism, former prosecutors, former law enforcement agents, analysts and engineers.
They are charged with reviewing content in its proper context before determining whether it is harmful or a threat to peace.
The AI scrutinises content against Facebook's terms of use, identifying material that clearly violates them, such as photos and videos of beheadings or other gruesome images, and stops users from uploading it to the site.
The details shed light on events that may have led to the deletion of one of Kenya's most popular groups, Group Kenya.
The group, which was deleted about two weeks ago, shocked its more than 2.2 million members, who took to social media to express their disappointment.
The detailed statement outlined that part of the platform's mission is to keep its community safe, and that, for that reason, terrorists ought to be denied a voice.
"Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them.
"When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny," the statement read in part.
[Image: Part of the statement released after the deletion of Group Kenya.]
[Image: Continuation of the statement released after Group Kenya was deleted.]