WhatsApp's End-to-End Encryption Claim for Messages Not Entirely True: Report

WhatsApp's promise that the messages users send are fully protected by end-to-end encryption does not tell the whole story, says ProPublica.

A new report questions how watertight WhatsApp's end-to-end encryption really is, alleging that Facebook can view the content of certain messages sent on WhatsApp.

A ProPublica report notes that Facebook has been marketing WhatsApp's end-to-end encryption since 2016, the year the feature came into effect. The claims made in the report are based on the work of 1,000 contract workers at WhatsApp who reportedly examine millions of pieces of user content.

The report says that these workers have access to special Facebook software to check private WhatsApp messages, videos, and images. However, it also notes that they only see content that a user has reported. “These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute,” the report adds.

This makes one thing clear, though: the ProPublica report is essentially talking about conversations that have been flagged in one-to-one WhatsApp chats. What the report also does not mention is that WhatsApp only forwards the last five messages to the content moderators; they do not get access to the entire chat history.
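To make the mechanism easier to picture, here is a minimal, hypothetical sketch of how a client-side “report” flow of this kind could work: the reporting user's app already holds the decrypted messages locally, so it can bundle only the most recent few and upload them for human review without any server-side decryption. The endpoint, payload shape, and function names below are assumptions for illustration only and do not come from WhatsApp.

```python
# Hypothetical sketch of a client-side "report" flow. The endpoint, payload
# shape, and function names are illustrative assumptions, not WhatsApp's
# actual code or API.
import json
import urllib.request

REPORT_ENDPOINT = "https://moderation.example.com/report"  # hypothetical URL
LAST_N_MESSAGES = 5  # mirrors the "last 5 messages" wording in the app pop-up


def report_contact(chat_history: list[dict], reported_contact: str) -> None:
    """Bundle the most recent messages already decrypted on this device and
    upload them for human review. End-to-end encryption is not broken on the
    server; the reporting device simply re-sends content it can already read."""
    recent = chat_history[-LAST_N_MESSAGES:]  # only the last few messages
    payload = json.dumps({
        "reported_contact": reported_contact,
        "messages": recent,  # plaintext as stored locally on the device
        "reason": "user_report",
    }).encode("utf-8")

    req = urllib.request.Request(
        REPORT_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Sent over an ordinary TLS connection, not end-to-end encrypted,
    # because the recipient here is the moderation service itself.
    with urllib.request.urlopen(req) as resp:
        resp.read()
```

The key design point this sketch illustrates is that the report is generated on the reporting user's own device, which already has the plaintext, so no decryption ever needs to happen on the server side.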

The report also mentions that these content moderators are based in Austin, Texas; Dublin; and Singapore, and are tasked with examining reported chats.

A tweet shared by WABetaInfo includes a screenshot that shows exactly what happens when someone reports a conversation. The screenshot shows an older pop-up, from app version 2.21.18.9, that states: “The most recent messages from this contact will be forwarded to WhatsApp. This contact will not be notified.” There is also a second pop-up, from a newer version of the app, that says: “The last 5 messages from this contact will be forwarded to WhatsApp. If you block this contact and delete the chat, it will be deleted from this device only. The contact will not be notified.”

ProPublica also mentions in the report that an internal WhatsApp marketing presentation from last year emphasized fierce promotion of WhatsApp's privacy narrative and compared the company's brand character to the Immigrant Mother.

The report adds that Carl Woog, WhatsApp’s communications director, has acknowledged that a team of contractors in Austin and a few other places reviews WhatsApp messages to “identify” and remove “the worst” abusers on the platform. According to ProPublica, being a content moderator for WhatsApp is almost the same as being a moderator for Facebook and Instagram.

“Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images, and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the ‘report’ button on the app, identifying a message as allegedly violating the platform’s terms of service,” the report states.

“We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication,” Facebook said in a response to ProPublica’s report.

“Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp, we receive the content they send us,” the company added.

WhatsApp faced a significant amount of backlash earlier this year when it announced a new privacy policy that made it easier to share data between WhatsApp and Facebook. However, this data sharing is limited to chats with business accounts and does not extend to personal chats.
