"Bad Bot Landscape Report": Insight into the rise of advanced persistent bots

This is a contributed piece from Rami Essaid, CEO at Distil Networks


The bad bot landscape continues to evolve rapidly. The Distil Networks 2016 Bad Bot Landscape Report revealed a dramatic increase in Advanced Persistent Bots, which have sophisticated capabilities.

Bots enable high-speed abuse, misuse, and attacks on websites and APIs. They enable attackers, unsavoury competitors, and fraudsters to perform a wide array of malicious activities. This includes web scraping, competitive data mining, personal and financial data harvesting, brute force login and man-in-the-middle attacks, digital ad fraud, spam, transaction fraud, and more.

Advanced Persistent Bots (APBs) now make up 88% of bad bot traffic, up from 77% in 2014. Meanwhile, simple bots decreased by more than half, from 23% of bad bot traffic in 2014 to 12% in 2015.


Advanced Persistent Bots exhibit deceptive behaviour including mimicking humans, loading JavaScript and external resources, cookie support, browser automation, and spoofing IP addresses and user agents. APBs are much harder to identify and block than simple bots; they fly under the radar of many existing security solutions.

The persistency aspect comes from their ability to evade detection using tactics such as dynamic IP rotation (from huge IP address pools), using Tor networks and peer-to-peer proxies to obfuscate their origin, and distributing attacks over hundreds of thousands of IP addresses.


Many bots mimic human behaviour

In 2015, our dataset revealed that roughly 40% of bad bots were able to mimic human behaviour. As a result, tools such as WAFs, web log analysis, or NGFWs, which perform less detailed analysis of clients and their behaviour, are likely to produce large numbers of false negatives.


Bad bots’ ability to load external assets such as JavaScript

Many analytic tools, such as Google Analytics, function via a JavaScript code snippet. If bots can load these resources, they’ll end up skewing analytic tools and throwing off key business and operational metrics. Based on this year’s data, 53% of bad bots will end up falsely attributed as humans in Google Analytics and similar tools. Of course, IT security tools that simply attempt to stop bots based on a simple JavaScript check will also be fooled.
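The weakness of a simple JavaScript check can be sketched as follows. This is a hypothetical illustration, not Distil's method: the session fields and the "beacon" mechanism are invented for the example.

```python
# Hypothetical sketch: a naive "did the client load our JavaScript?"
# classifier of the kind the article says APBs defeat. The session
# structure and beacon field are illustrative assumptions.

def classify_session(session: dict) -> str:
    """Label a session 'human' if it fetched the JS beacon resource."""
    return "human" if session.get("loaded_js_beacon") else "bot"

sessions = [
    {"id": "s1", "loaded_js_beacon": True,  "actually": "human"},
    {"id": "s2", "loaded_js_beacon": False, "actually": "bot"},   # simple bot: caught
    {"id": "s3", "loaded_js_beacon": True,  "actually": "bot"},   # APB: slips through
]

labels = {s["id"]: classify_session(s) for s in sessions}
```

Because an APB executes JavaScript like a real browser, it passes this check and ends up in the "human" bucket, which is exactly how bad bots pollute analytics and evade JS-only defences.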



Bots rotate user agents en masse

Not only are the bad guys lying about who they say they are, they’re also repeatedly changing their identities. According to this year’s data, around 36% of bad bots disguised themselves using two or more user agents. The worst APBs changed their identities more than 100 times.
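One simple way to surface this behaviour in server logs is to count distinct User-Agent strings per client IP and flag rotators. This is an illustrative sketch with invented log records, not the report's methodology.

```python
from collections import defaultdict

# Illustrative sketch (assumption, not Distil's method): flag client
# IPs that present two or more distinct User-Agent strings, the
# rotation behaviour the report attributes to ~36% of bad bots.

records = [
    ("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0)"),
    ("203.0.113.7", "Mozilla/5.0 (Macintosh; Intel Mac OS X)"),
    ("198.51.100.2", "Mozilla/5.0 (X11; Linux x86_64)"),
]

agents_by_ip = defaultdict(set)
for ip, user_agent in records:
    agents_by_ip[ip].add(user_agent)

# IPs rotating user agents become candidates for closer inspection.
suspects = {ip for ip, agents in agents_by_ip.items() if len(agents) >= 2}
```

On its own this heuristic is noisy (shared NATs and proxies also show multiple agents per IP), which is part of why UA rotation is effective against simple defences.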



Bots rotating IP addresses is now a commonplace tactic

Almost 73% of bad bots rotate or distribute their attacks over multiple IP addresses. Of those, 17% used between two and five IP addresses, 9% between six and ten, 19% between 11 and 50, 8% between 51 and 100, and a whopping 20% used over 100 IP addresses in their operations.
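The bucketing behind those figures can be sketched as a simple classification of each bot by the number of distinct IP addresses it was observed using. The bot IDs and counts below are invented for illustration, not the report's raw data.

```python
# Sketch of the report's IP-count buckets. Observed counts are
# illustrative assumptions, not actual data from the report.

def ip_bucket(distinct_ips: int) -> str:
    if distinct_ips <= 1:
        return "single IP"
    if distinct_ips <= 5:
        return "2-5"
    if distinct_ips <= 10:
        return "6-10"
    if distinct_ips <= 50:
        return "11-50"
    if distinct_ips <= 100:
        return "51-100"
    return "100+"

observed = {"bot_a": 1, "bot_b": 4, "bot_c": 37, "bot_d": 250}
buckets = {bot: ip_bucket(n) for bot, n in observed.items()}
```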


What should you worry about the most? It depends on your vulnerability profile, but here are two examples from ecommerce.


Brute force account takeovers

Say a bad guy has acquired a database of stolen usernames and passwords. They construct a bot that runs those credentials against a targeted ecommerce site. Most attempts will fail, but given users’ proclivity for reusing the same credentials on multiple sites, a few will receive a welcome message, confirming a valid account on that site.

That username/password combination now has resale value on the black market. Anyone buying it can access as much personal data as the account owner has posted: name, mailing address, phone number, email address, and more. In turn, that information may be used to obtain bank and credit card account details, dates of birth, Social Security numbers, and so on.

Account details such as purchase delivery addresses can also be altered, further impacting existing customers. The operator of the targeted site has not been breached in the traditional sense, but has unwittingly become part of a chain of criminal activity.


Bots are used for brute force login attacks


It costs almost nothing to test credentials for validity. All it takes is a few Amazon or T-1 instances to cycle through a large number of logins, learn which ones work, and then sell them on the black market for pennies on the dollar.
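One common defence against the attack described above is to count failed logins per source in a time window and block sources that exceed a threshold. This is a hedged sketch, not the article's recommendation; the threshold and events are invented.

```python
from collections import Counter

# Illustrative sketch of per-source failed-login counting. The limit
# and the event data are assumptions made for this example.

FAILED_LOGIN_LIMIT = 3  # per window; real systems tune this carefully

# (source IP, attempted account, login succeeded?)
events = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob",   False),
    ("203.0.113.7", "carol", False),
    ("203.0.113.7", "dave",  False),  # fourth failure: over the limit
    ("198.51.100.2", "erin", True),   # ordinary successful login
]

failures = Counter(ip for ip, _user, ok in events if not ok)
blocked = {ip for ip, n in failures.items() if n > FAILED_LOGIN_LIMIT}
```

Note the design choice: a credential-stuffing bot tries many accounts with one password each, so per-account lockouts alone miss it; counting failures per source across accounts catches the pattern, though IP rotation (covered earlier) is precisely how APBs dilute this signal.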



Carding is when a criminal charges a very small amount to a credit card. The charge is not large enough to trigger a fraud investigation, but it validates that the card is still active and hasn’t been reported stolen. The card information is then ready for resale, or for direct use to purchase a high-value item, defrauding both the website owner and the cardholder.


Bots and fraudulent credit card purchases
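The carding pattern described above has a recognisable signature: many tiny authorisations against distinct cards from a single source. The sketch below illustrates one way to flag it; the thresholds and charge data are assumptions, not the article's method.

```python
from collections import defaultdict

# Illustrative sketch: flag a payment source that makes many micro
# charges across distinct cards. Limits and data are invented.

MICRO_CHARGE = 1.00   # dollars; "a very small amount" per the article
CARD_LIMIT = 2        # distinct micro-charged cards before flagging

# (payment source, card token, charge amount)
charges = [
    ("src_1", "card_a", 0.50),
    ("src_1", "card_b", 0.75),
    ("src_1", "card_c", 0.25),
    ("src_2", "card_d", 59.99),  # ordinary purchase, ignored
]

micro_cards = defaultdict(set)
for source, card, amount in charges:
    if amount <= MICRO_CHARGE:
        micro_cards[source].add(card)

flagged = {src for src, cards in micro_cards.items() if len(cards) > CARD_LIMIT}
```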


Mitigating bad bots in 2016 and beyond

The bad bot landscape is continuing to evolve rapidly with the dramatic growth in sophisticated obfuscation techniques, and an expanding range of geographic and ISP points of origin. This is a clear challenge to IT security and web infrastructure teams under increasing pressure to forecast infrastructure demands and protect their online data. Without insight and control over bad bot traffic, the challenge is exacerbated.


IDG Connect
