WhatsApp on Wednesday said reports sent to it by users flagging spam and abuse do not undermine the end-to-end encryption of the messaging platform.
WhatsApp's clarification came in response to a report by non-profit newsroom ProPublica that said even though WhatsApp says it does not see user content, the Facebook-owned company has an extensive monitoring operation and regularly shares personal information with prosecutors.
"WhatsApp provides a way for people to report spam or abuse, which includes sharing the most recent messages in a chat. This feature is important for preventing the worst abuse on the internet. We strongly disagree with the notion that accepting reports a user chooses to send us is incompatible with end-to-end encryption," a WhatsApp spokesperson said.
The spokesperson added that in India, in accordance with the government's IT rules, it also publishes monthly reports that contain details of how WhatsApp keeps users safe and prevents abuse on the platform on the basis of these user reports.
"WhatsApp remains deeply committed to privacy and user-safety," the spokesperson said.
Concerns about the privacy of conversations on WhatsApp have been raised on previous occasions as well.
The company, for its part, has consistently maintained that all messages and calls on the platform are end-to-end encrypted and that it has no visibility into their content.
In its latest compliance report, WhatsApp said it banned over three million Indian accounts and received 594 grievance reports during the June 16-July 31, 2021 period.
The 594 user reports spanned account support (137), ban appeals (316), other support (45), product support (64) and safety (32) during June 16-July 31. During this period, 74 accounts were "actioned", as per the report.
"Accounts Actioned" denotes reports on which the platform took remedial action. Such action means either banning an account or restoring a previously banned account as a result of the complaint.
The company has previously said that, apart from "behavioural signals" from accounts, it relies on available unencrypted information, including user reports, profile photos, group photos and descriptions, as well as advanced AI tools and resources, to detect and prevent abuse on its platform.