{"id":17157,"date":"2020-12-26T22:30:25","date_gmt":"2020-12-26T22:30:25","guid":{"rendered":"http:\/\/radiofree.asia\/?guid=915642939e93d138cc744338698069d9"},"modified":"2020-12-26T22:30:25","modified_gmt":"2020-12-26T22:30:25","slug":"youtubes-violation-of-palestinian-digital-rights-what-needs-to-be-done-2","status":"publish","type":"post","link":"https:\/\/radiofree.asia\/2020\/12\/26\/youtubes-violation-of-palestinian-digital-rights-what-needs-to-be-done-2\/","title":{"rendered":"YouTube\u2019s Violation of Palestinian Digital Rights: What Needs to be Done"},"content":{"rendered":"

Palestinians have been reporting increasing digital rights violations by social media platforms. In 2019, Sada Social, a Palestinian digital rights organization, documented as many as 1,000 violations, including the removal of public pages, accounts, posts, and publications, and the restriction of access. This policy brief examines YouTube’s problematic community guidelines, its social media content moderation, and its violation of Palestinian digital rights through hyper-surveillance.

It draws on research conducted by 7amleh - The Arab Center for the Advancement of Social Media, and on interviews with Palestinian journalists, human rights defenders, and international human rights organizations to explore YouTube’s controversial censorship policies. Specifically, it examines the vague and problematic definitions of certain terms in YouTube’s community guidelines which are employed to remove Palestinian content, as well as the platform’s discriminatory practices, such as language and locative discrimination. It offers recommendations for remedying this situation.

In the Middle East, YouTube is considered one of the most important platforms for digital content distribution. Indeed, YouTube’s user base in the region grew by 160% between 2017 and 2019, with over one million subscribers. Yet little is known about how YouTube implements its community guidelines, including the Artificial Intelligence (AI) technology it uses to target certain content.

YouTube has four main guidelines and policies for content monitoring: spam and deceptive practices, sensitive topics, violent or dangerous content, and regulated goods. However, many users have indicated that their content has been removed without falling under any of these categories. This suggests that YouTube is not held accountable for the clarity and equity of its guidelines, and that it can apply them interchangeably to justify content removal.

The international human rights organization Article 19 confirms that YouTube’s community guidelines fall below international legal standards on freedom of expression. In its 2018 statement, Article 19 urged YouTube to be transparent about how it applies its guidelines by providing examples and thorough explanations of what it considers to be “violent,” “offensive,” and “abusive” content, including “hate speech” and “malicious” attacks.

A member of another human rights organization, WITNESS, explained how YouTube’s AI technology erroneously flags and removes content that would be essential to human rights investigations because it classifies the content as “violent.” As a case in point, the Syrian journalist and photographer Hadi Al-Khatib collected 1.5 million videos over the years of the Syrian uprising, documenting hundreds of chemical attacks by the Syrian regime. However, al-Khatib reported that in 2018 over 200,000 of these videos were taken down and disappeared from YouTube, videos that could have been used to prosecute war criminals.

This discrimination is particularly evident in the case of Palestinian users’ content. What is more, research clearly indicates that YouTube’s AI technology is designed with a bias in favor of Israeli content, regardless of its promotion of violence. For example, YouTube has allowed Orin Julie, an Israeli gun model, to upload content that promotes firearms despite its clear violation of YouTube’s “Firearms Content Policy.”

Palestinian human rights defenders have described YouTube’s discrimination against their content under the pretext that it is “violent.” According to the Palestinian journalist Bilal Tamimi, YouTube violated his right to post a video showing Israeli soldiers abusing a twelve-year-old boy in the village of Nabi Saleh because it was “violent.” In the end, Tamimi embedded the deleted video into a longer video that passed YouTube’s AI screening, a tactic used to circumvent the platform’s content removals.

More specifically, Palestinian human rights defenders reported experiencing language and locative discrimination against their content on YouTube. That is, YouTube trains its AI algorithms to target Arabic-language videos disproportionately in comparison to other languages. In addition, YouTube’s AI surveillance systems are designed to flag content emerging from the West Bank and Gaza. And the more views Palestinian content receives, the more likely it is to be surveilled, blocked, demonetized, and ultimately removed.

To counter these discriminatory practices and protect Palestinian activists, journalists, and human rights defenders on YouTube, the following recommendations should be implemented: