New Delhi:
Meta-owned WhatsApp on Saturday said that it banned 1,759,000 accounts in India in November in compliance with the IT Rules 2021.
WhatsApp also received 602 grievance reports in the same month from the country, and took action on 36 of those.
“In accordance with the IT Rules 2021, we’ve published our sixth monthly report for the month of November. This user-safety report contains details of the user complaints received and the corresponding action taken by WhatsApp, as well as WhatsApp’s own preventive actions to combat abuse on our platform,” a WhatsApp spokesperson said in a statement.
“As captured in the latest Monthly Report, WhatsApp banned over 1.75 million accounts in the month of November,” the spokesperson added. In October, the platform had banned over 2 million accounts in India.
WhatsApp has more than 400 million users in India.
“Over the years, we have consistently invested in Artificial Intelligence and other state-of-the-art technology, data scientists and experts, and in processes, in order to keep our users safe on our platform,” the company said.
Separately, Meta said that over 16.2 million content pieces were proactively “actioned” on Facebook across 13 violation categories in India during November.
Photo-sharing platform Instagram proactively took action against over 3.2 million pieces across 12 categories during the same period.
The new IT rules — which came into effect in May last year — require large digital platforms (with over 5 million users) to publish compliance reports every month, mentioning the details of complaints received and action taken.
“We will continue to bring more transparency to our work and include more information about our efforts in future reports,” said WhatsApp.
Meanwhile, Google, in its latest report, said it had received 26,087 complaints in the month of November (November 1-30, 2021) from individual users located in India via designated mechanisms, and the number of removal actions as a result of user complaints stood at 61,114.
These complaints relate to third-party content that is believed to violate local laws or personal rights on Google’s significant social media intermediaries (SSMI) platforms, the report said.
“Some requests may allege infringement of intellectual property rights, while others claim violation of local laws prohibiting types of content on grounds such as defamation. When we receive complaints regarding content on our platforms, we assess them carefully,” it added.
The content removal was done under several categories, including copyright (60,387), trademark (535), circumvention (131), court order (56) and graphic sexual content (5).
Google explained that a single complaint may specify multiple items that potentially relate to the same or different pieces of content, and each unique URL in a specific complaint is considered an individual “item” that is removed.
For user complaints, the “removal actions” figure represents the number of items where a piece of content was removed or restricted during the one-month reporting period as a result of a specific complaint. For automated detection, it represents the number of instances where Google removed content or prevented the bad actor from accessing the Google service as a result of its automated detection processes.
Google said that, in addition to reports from users, it invests heavily in fighting harmful content online and uses technology to detect and remove it from its platforms.
“This includes using automated detection processes for some of our products to prevent the dissemination of harmful content such as child sexual abuse material and violent extremist content.
“We balance privacy and user protection to: quickly remove content that violates our Community Guidelines and content policies; restrict content (e.g., age-restrict content that may not be appropriate for all audiences); or leave the content live when it doesn’t violate our guidelines or policies,” it added.
Google said automated detection enables it to act more quickly and accurately to enforce its guidelines and policies. These removal actions may result in removing the content or terminating a bad actor’s access to the Google service, it added.
Further details on the Facebook and Instagram figures cited above came from data shared in Meta’s compliance report.
Between November 1-30, Instagram received 424 reports through the Indian grievance mechanism. Facebook’s parent company recently changed its name to Meta.
Apps under Meta include Facebook, WhatsApp, Instagram, Messenger and Oculus. As per the latest report, the over 16.2 million content pieces actioned by Facebook during November included content related to spam (11 million), violent and graphic content (2 million), adult nudity and sexual activity (1.5 million), and hate speech (100,100).
Other categories under which content was actioned include bullying and harassment (102,700), suicide and self-injury (370,500), dangerous organisations and individuals: terrorist propaganda (71,700) and dangerous organisations and individuals: organised hate (12,400).
The Child Endangerment – Nudity and Physical Abuse category saw 163,200 content pieces being actioned, Child Endangerment – Sexual Exploitation saw 700,300, and Violence and Incitement saw 190,500.