Polls before and after 2019: Facebook memos flag wave of anti-minority posts

SEVERAL Facebook internal reports over the past two years have pointed to an increase in “anti-minority” and “anti-Muslim” rhetoric as “a substantial component” of the 2019 Lok Sabha election campaign. A July 2020 report specifically noted that there had been a marked increase in these posts over the previous 18 months, and that the sentiment was “likely to appear” in upcoming Assembly elections, including in West Bengal.
The increase in hate speech and inflammatory content was centered mainly on the “themes” of threats of violence, Covid-related disinformation targeting minority groups, and “false” reports of Muslims engaging in communal violence.
These reports are part of documents disclosed to the United States Securities and Exchange Commission (SEC) and provided to the United States Congress in redacted form by the legal counsel of former Facebook employee and whistleblower Frances Haugen.
The redacted versions received by the US Congress have been reviewed by a consortium of global news organizations, including The Indian Express.
An internal report from 2021, ahead of the Assembly elections in Assam, noted that Himanta Biswa Sarma, now Chief Minister of Assam, was involved in peddling inflammatory rumors that “Muslims were pursuing biological attacks against the Assamese people by using chemical fertilizers to produce liver, kidney and heart disease in the Assamese”.
Asked by The Indian Express about this, and whether he knew his “fans and supporters” were engaging in hate speech, Sarma said he was “not aware of this development”. Asked if Facebook had contacted him about the content posted on his page, Sarma replied, “I had not received any communication.”
Another internal Facebook report, titled “Communal Conflict in India,” notes that inflammatory content in English, Bengali and Hindi spiked on several occasions, particularly in December 2019 and March 2020, coinciding with protests against the Citizenship Amendment Act and the onset of lockdowns to prevent the spread of Covid-19.
Despite the presence of such content on the platform, the documents reveal, there was a palpable clash between two internal Facebook teams: those flagging problematic content, and those designing the algorithms that push content into users’ News Feed.
To combat such problematic content, a group of internal staff had, in the July 2020 report, suggested various measures: developing “inflammatory classifiers” to detect and enforce against such content in India, improving the platform’s image-text modeling so that this content can be identified more effectively, and creating country-specific “banks” of inflammatory content and harmful misinformation for at-risk countries (ARC).
Almost all of these reports place India in the at-risk country (ARC) category, where the risk of societal violence triggered by social media posts is higher than in other countries.
According to another internal Facebook report from 2021, titled “India Harmful Networks”, groups claiming affiliation with the Trinamool Congress engaged in the coordinated publication of instructions via large messaging groups, and then posted these messages across several similar groups with the aim of increasing the audience for content that is “often inflammatory” but “generally non-violent”.
The posts of groups affiliated with RSS and BJP, on the other hand, contained a high volume of “love jihad” content with hashtags “related to publicly visible Islamophobic content,” the internal report noted.
Requests sent to BJP, RSS and TMC went unanswered.
Despite all of these red flags, another group of employees at the social media company suggested only a “stronger, time-limited demotion” of such content.
When asked if the social media platform had taken any action to implement these recommendations, a spokesperson for Meta Platforms Inc – Facebook was renamed Meta on October 28 – told The Indian Express: “Our teams were closely monitoring the many possible risks associated with the elections in Assam this year, and we proactively implemented a number of emergency measures to reduce the virality of inflammatory content, especially videos. Videos with inflammatory content were identified as high risk during the election, and we put a measure in place to help prevent these videos from being automatically shown in someone’s video feed.”
“In addition to our standard practice of removing accounts that repeatedly violate our community standards, we have also temporarily reduced the distribution of content from accounts that have repeatedly violated our policies,” the spokesperson said.
Regarding the increase in hate content, the spokesperson for Meta said that hate speech against marginalized groups, including Muslims, was on the increase around the world.
“We have invested heavily in technology to detect hate speech in various languages, including Hindi and Bengali. As a result, we have halved the amount of hate speech people see this year. Today, it has fallen to 0.03%,” the spokesperson added.
Not only was Facebook made aware of the nature of the content posted on its platform, but it also discovered, through another study, the impact of posts shared by politicians.
In an internal document titled “Effects of Disinformation Shared by Politicians”, examples from India featured as “high-risk misinformation” shared by politicians, with the resulting “societal impact” described as “out-of-context video stirring up anti-Pakistan and anti-Muslim sentiment”.
The study noted that users believed it was “Facebook’s responsibility to let them know when their leaders share false information.” There was also a debate within the company, according to the documents, about what to do when politicians shared previously debunked content.
(With contributions from LIZ MATHEW)