- ProPublica reports that Facebook hired more than 1,000 workers around the world to sift through messages on WhatsApp despite previous claims by both companies that they would not have access to users’ data
- WhatsApp, the global messaging app with more than 2 billion users, was built upon the promise of data encryption that would keep messages hidden
- The app’s leadership says there is no problem, maintaining that the company reviews only messages reported as inappropriate and shares that information with law enforcement
- WhatsApp helped prosecutors gather messages between BuzzFeed and U.S. Treasury employee Natalie Edwards, who allegedly leaked documents detailing how dirty money cycles through U.S. banks; she is currently serving a six-month prison sentence
WhatsApp’s promise of private messages with end-to-end encryption appears to have been false, an investigation revealed.
When Facebook purchased the popular messaging app for $19 billion in 2014, both companies assured users that their data could not be accessed by either company.
But Facebook not only hired 1,000 workers to sift through millions of messages on WhatsApp, which has two billion users around the world, but it also shared some of those messages with law enforcement and the U.S. Department of Justice to help put people in prison, ProPublica claims.
‘These hourly workers use special Facebook software to sift through streams of private messages, images, and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems,’ the report detailed.
‘These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.’
Will Cathcart, Head of WhatsApp, said the news was a non-issue.
‘I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,’ Cathcart said.
WhatsApp helped prosecutors build a high-profile case against Natalie Edwards, a U.S. Treasury Department employee who allegedly leaked confidential documents to BuzzFeed on how dirty money flows through U.S. banks, according to ProPublica.
Edwards was sentenced to six months in prison after pleading guilty to a conspiracy charge. She began serving her sentence in June.
The report also found more than a dozen instances where data from WhatsApp was used to put others in jail since 2017.
WhatsApp Director of Communications Carl Woog told ProPublica that Facebook had hired the employees to identify and remove ‘the worst’ abusers from the platform, but said he agreed with Cathcart and did not consider the work to be content moderation.
‘The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse,’ WhatsApp said in a statement.
WhatsApp users appeared unfazed by the news, tweeting that it was no surprise that a large tech company owned by Facebook would monitor user messages.
One user wrote, ‘I thought we all knew what Facebook was doing?’
‘None of these services are truly private. Don’t believe that. In the end they all abuse their powers,’ another Twitter user wrote.
Facebook claims that messages are examined only when they are flagged as containing inappropriate content, and that personal calls and other messages remain out of the company’s reach.
An unnamed whistleblower filed a complaint last year with the U.S. Securities and Exchange Commission, alleging that WhatsApp’s boasts of protecting users’ privacy and data were false.
The SEC has said it has not seen the complaint and has taken no action on the issue.
While Facebook has refrained from detailing how it monitors WhatsApp messages, it openly publishes the moderation actions it takes on Facebook itself and on Instagram.
The company says it employs some 15,000 moderators to filter the millions of posts on the two platforms.
From April to June alone, the company took down more than 32 million posts depicting adult nudity and sexual activity on Facebook. During the same period, it removed 28 million posts depicting the abuse and exploitation of children.
It also took action against more than 1.8 million of these types of posts on Instagram.
Facebook has a 95% rate of handing over at least some data from its users when requested by law enforcement.
Cathcart has said that WhatsApp reported about 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020.
During an interview with the Australian Strategic Policy Institute, Cathcart attributed the reports to the platform’s AI and to users who flag content, but made no mention of the private contractors hired by Facebook who examined the posts.