WhatsApp says it bans 2 million accounts monthly for automated behaviour and bulk messaging

HIGHLIGHTS

WhatsApp’s new white-paper goes into the specifics of how the company fights spam, bulk and automated messages. The company claims that approximately 90% of the messages sent on WhatsApp are from one person to another, and that the majority of groups have fewer than ten people.

  • WhatsApp releases new white-paper on how it deals with bulk messaging and automated behaviour.
  • WhatsApp says it removes 2 million accounts monthly.
  • Roughly 20% of these bans happen at the time of registration.

Close on the heels of Safer Internet Day, February 5, WhatsApp has released what it calls a white-paper on how it deals with bulk messaging and automated behaviour. The white-paper from the Facebook-owned company details a lot of what we already know. For instance, we already know messages on the platform are claimed to be end-to-end encrypted, even though the platform has been questioned on the legitimacy of that claim several times (read here and here). The paper goes on to stress that WhatsApp is not “a broadcast platform”. “Approximately 90% of the messages sent on WhatsApp are from one person to another, and the majority of groups have fewer than ten people,” WhatsApp claims. The paper also talks about existing features such as Forwarded labels, two-step verification, and suspicious link detection.

Beyond what we and most WhatsApp users already know, the platform shared some of the practices and techniques it employs to keep bulk and automated messaging at bay. WhatsApp says it bans 2 million accounts every month for automated behaviour and bulk messaging. The company says 75% of these accounts are removed using machine learning, without a user reporting them. WhatsApp says it carries out checks for automated behaviour and bulk messaging in three stages – at registration, during messaging, and in response to negative feedback.

“We have advanced machine learning systems that take action to ban accounts, 24 hours a day, 7 days a week,” WhatsApp wrote in its paper. At the registration stage, WhatsApp says that its machine learning algorithms are able to detect if a similar phone number has been recently abused or if the computer network used for registration has been associated with suspicious behavior. “Roughly 20% of account bans happened at registration time,” claims WhatsApp.
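
For a sense of what such a registration-time check might look like, here is a minimal sketch in Python. The reputation sets, names and rules below are hypothetical stand-ins for illustration only; WhatsApp’s actual checks, as the paper notes, are driven by machine learning and are far more involved.

```python
# Hypothetical sketch of a registration-time check like the one the white-paper
# describes: comparing an incoming registration against reputation data on phone
# numbers and networks. All names and data here are invented for illustration.
from dataclasses import dataclass

@dataclass
class RegistrationAttempt:
    phone_number: str   # number the user is trying to register
    network_id: str     # identifier of the network the request came from

# Hypothetical reputation data; in practice this would be built from past abuse.
RECENTLY_ABUSED_NUMBERS = {"+10000000001", "+10000000002"}
SUSPICIOUS_NETWORKS = {"asn-64512"}

def block_at_registration(attempt: RegistrationAttempt) -> bool:
    """Return True if the registration matches a known abuse signal."""
    return (attempt.phone_number in RECENTLY_ABUSED_NUMBERS
            or attempt.network_id in SUSPICIOUS_NETWORKS)

print(block_at_registration(RegistrationAttempt("+10000000001", "asn-13335")))  # True
```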

At the messaging stage, WhatsApp says the platform has noticed that “normal users operate relatively slowly on WhatsApp, tapping messages one at a time or occasionally forwarding content. The intensity of user activity can provide a signal that accounts are abusing WhatsApp”. Since WhatsApp is advertised as an end-to-end encrypted service, the company says it does not go probing into the contents of users’ messages. Instead, it uses behavioural indicators and users’ reports to detect suspicious activity.

“For example, an account that registered five minutes before attempting to send 100 messages in 15 seconds is almost certain to be engaged in abuse, as is an account that attempts to quickly create dozens of groups or add thousands of users to a series of existing groups. We ban these accounts immediately and automatically. In less-obvious situations, a new account might message dozens of recipients who do not have the sender’s account in their contacts. This could be the beginning of a spam attack, or it could be an innocent user simply telling their contacts about a new phone number. In these cases, we consider historical information (for instance, how suspicious their registration was) in order to separate abnormal — but innocuous — user behavior. In sum, our detection systems evaluate hundreds of factors to shut down abuse.”
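
The thresholds WhatsApp quotes (a minutes-old account firing off 100 messages in 15 seconds) map naturally onto a sliding-window rate check. The toy sketch below illustrates that single signal; the class name, window size and five-minute “new account” cutoff are assumptions for illustration, not WhatsApp’s production logic, which the company says weighs hundreds of factors.

```python
# Toy sliding-window rate check echoing the example quoted in the white-paper:
# a very new account sending ~100 messages within 15 seconds gets flagged.
# All names and thresholds are illustrative assumptions.
from collections import deque
import time

class SendRateMonitor:
    def __init__(self, registered_at, max_msgs=100, window_s=15.0, new_account_s=300):
        self.registered_at = registered_at    # account creation time (epoch seconds)
        self.max_msgs = max_msgs              # messages tolerated inside the window
        self.window_s = window_s              # sliding window length in seconds
        self.new_account_s = new_account_s    # how long an account counts as "new"
        self.timestamps = deque()

    def record_message(self, now=None):
        """Record one outgoing message; return True if the account looks abusive."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # Drop send events that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        burst = len(self.timestamps) >= self.max_msgs
        brand_new = now - self.registered_at < self.new_account_s
        return burst and brand_new

# Usage: an account registered just now blasting 100 messages in a burst is flagged.
monitor = SendRateMonitor(registered_at=time.time())
flagged = any(monitor.record_message() for _ in range(100))
print(flagged)  # True
```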

WhatsApp’s systems also look for signs of automation. For instance, the ‘typing’ indicator shown when someone is writing a message is often missing from automated traffic: WhatsApp says spammers attempting to automate messaging may not have the technical expertise to duplicate the ‘typing’ indicator.

On the feedback front, WhatsApp says, “When a user sends a report, our machine learning systems review and categorise the reports so we can better understand the motivation of the account, such as whether they are trying to sell a product or seed misinformation.”
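
As a purely illustrative stand-in for that categorisation step, the sketch below sorts reported text into invented categories with simple keyword matching. WhatsApp says its real systems use machine learning; the category names and keywords here are assumptions made only to show the shape of the step.

```python
# Crude, illustrative stand-in for report categorisation. The white-paper describes
# ML systems; this keyword lookup only shows the idea of grouping reports by likely
# motivation (selling a product vs. seeding misinformation).
CATEGORY_KEYWORDS = {
    "selling_a_product": ["buy now", "discount", "limited offer"],
    "seeding_misinformation": ["forward this to everyone", "the media won't tell you"],
}

def categorise_report(report_text):
    """Return a hypothetical category label for a reported message."""
    text = report_text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorised"

print(categorise_report("Huge discount! Buy now before stock runs out."))  # selling_a_product
```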

It seems WhatsApp is trying its best to assure users that the platform is safe and non-intrusive. Last year was not a good one for WhatsApp, with the platform under constant scrutiny for being a cesspool of fake news. Since then, WhatsApp has introduced multiple new features and carried out various informative programmes in countries like India to educate users about how to spot fake news.

Digit NewsDesk
