Facebook hesitated to restrict controversial user-generated content in India
NEW DELHI, India (AP) – Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.
From research as recent as March of this year to company memos dating back to 2019, the internal documents on India highlight Facebook’s constant struggle to keep abusive content off its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and fueling violence.
The files show that Facebook has been aware of these problems for years, raising questions over whether it has done enough to address them. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, or BJP, are involved.
Across the world, Facebook has become increasingly important in politics, and India is no different.
Modi has been credited with leveraging the platform to his party’s advantage during elections, and reporting in The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies against hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.
The leaked documents include a trove of internal company reports on hate speech and misinformation in India. In some cases, much of it was intensified by the platform’s own “recommended” features and algorithms. The documents also capture company employees’ concerns over the mishandling of these issues and their discontent over the viral hateful content on the platform.
According to the documents, Facebook ranked India as one of the most “at risk countries” in the world and identified Hindi and Bengali as priorities for automated detection of hate speech. Yet Facebook didn’t have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” which has resulted in a “reduction in the amount of hate speech that people see by half” in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of former Facebook employee-turned-whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organizations, including the AP.
Back in February 2019, ahead of a general election and amid growing fears of misinformation, a Facebook employee wanted to understand what a new user in the country would see in their news feed if all they did was follow pages and groups recommended solely by the platform itself.
The employee created a test user account and kept it live for three weeks, a period during which India was rocked by an extraordinary event – a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.
In a note titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalist Messages,” the employee, whose name is redacted, said they were “shocked” by the content flooding the news feed, which had become “a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”
Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.
The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.
One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag partially covering it. Facebook’s recommendation feature also surfaced a slew of unverified content related to India’s retaliatory strikes into Pakistan after the bombing, including an image of a napalm bomb taken from a video game that had been debunked by one of Facebook’s fact-checking partners.
“Following this test user’s news feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life,” the researcher wrote.
It sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting attacks on Kashmiris.
“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.
The memo, circulated among other employees, did not answer that question. But it did show how the platform’s own algorithms, or default settings, played a part in spurring such harmful content. The employee noted that there were obvious “blind spots,” especially in “local language content.” They said they hoped the findings would start conversations on how to avoid such “integrity harms,” especially for those who “differ significantly” from the typical U.S. user.
Even though the research was conducted over three weeks that weren’t an average representation, they acknowledged that it did show how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”
The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”
“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers to include four Indian languages,” the spokesperson said.
Associated Press writer Sam McNeil in Beijing contributed to this report.
See full coverage of the Facebook Papers here: https://apnews.com/hub/the-facebook-papers