TL;DR - Last week I was dealing with an e-commerce customer whose site was hit HARD by a DDoS attack. When the customer talked with his suppliers, he learned that they had been hit too, along with some other competitors in the same niche.
Now the long story:
On Saturday around lunchtime I got a call from the customer, plus SMS and Viber notifications about something "URGENT!". Within 15 minutes I was at my computer and saw all CPU cores at 100%, combined with huge network traffic. The DDoS pattern was easy to spot because all requests were POSTs to the home page. Since I was busy with other things that required my physical attention, I suggested to the BO (business owner) that we activate Cloudflare's "Under Attack" mode to minimize the damage. This setting can be found in the CF control panel at the top right. One click and the protection is applied; from then on every visitor has to pass a CF captcha before reaching the site. The BO agreed, we applied it, and he made a test purchase to verify that the site was still accessible and that visitors could buy from it. We agreed to check the server and CF every hour or two just to make sure things stayed under control, and to talk on Monday morning about disabling the protection.
Over the weekend things stayed under control, and on Monday morning I removed the protection. Within 2 hours I saw the first signs of a new attack. Within 5 minutes I applied a custom WAF rule to block requests with a single user agent (an outdated version of Chrome). This effectively blocked all of the attackers. Within an hour they changed tactics and started using multiple user agents, so while my rule still worked, most of the requests were no longer blocked. So I made another WAF rule to stop HTTP POST requests to a single URL. Then they started making GET requests to the homepage, and new protection was applied there: this time we cached all HTML on the CF network for requests that arrive without cookies (a sketch of that rule is below). The attackers kept at it for a few hours, but the site kept working without a single issue; Cloudflare was able to serve 99.96% of requests from cache. Meanwhile I started to see a pattern: they launched new waves of the attack every hour, so if I was at the computer at :01 or :02 I could spot them and react.
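For anyone wanting to reproduce that caching step, here is a minimal sketch of what such a rule can look like in Cloudflare's expression language. It's an approximation, not the exact rule from the redacted screenshots; the original setup may have used a Page Rule instead of a Cache Rule, the cookie name "session" is a hypothetical placeholder for whatever cookie your shop sets, and the # lines are annotations only (the expression editor takes just the expression itself):

```
# Cache Rule sketch: make cookie-less GET requests eligible for caching,
# so anonymous attack traffic is absorbed at the edge while logged-in
# shoppers (who carry a session cookie) still reach the origin.
# "session" is a placeholder cookie name - adjust to your platform.
(http.request.method eq "GET" and not http.cookie contains "session")
# Action: Eligible for cache, with an Edge TTL of your choice
```

The effect is that repeated anonymous hits to the homepage get served from Cloudflare's cache instead of hammering the origin server.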
But Monday showed that the weekend's attack was weak. And we were prepared for Tuesday.
So on Tuesday we experienced a much heavier attack than Monday's: at the peak there were over 17M requests per hour, or roughly 4,700 per second! But CF took all the heavy load off our hands. The attackers changed methods a few times, but with the WAF rules I quickly blocked them. On Wednesday they lost interest, but we stayed on duty, checking every hour for signs of an attack. So far nothing has happened.
If you're going to ask me how this is related to technical SEO: well, this is the pre-Christmas period, and someone was hired to take down all the competitors in a niche. So if you open Google to search for XYZ, and the first site in the SERP is down, and the second is down too, then you'll purchase from the 3rd or even the 4th.
And now a few screenshots:
This is the GSC crawl stats timeline for HTML pages:
The first peak was on Saturday.
Now the crawl stats for images, which are static assets:
It's easy to see the peak on Saturday, when the server was overloaded.
And a few Cloudflare dashboard screenshots:
The first day here is Friday - nothing special - but the next day is Saturday, with those 3M requests. Traffic on Monday and Tuesday totaled 3 TB, while on normal days the site's traffic is 5-7 GB per day.
Here is the 24h report:
Because that CF graph updates only once per hour, I needed something closer to real time, so at first I was reading the server access logs almost live. But I needed something quicker. Then I found that CF has a graph of DNS requests to the site:
And I noticed that this one updates every minute. So this helped me to spot the attackers: when the site is in its normal state, DNS requests run at around 10-20 per minute; when the attackers start their job, the minimum jumps to 100 per minute and beyond.
And here are the WAF rules that I used (rough expression sketches after the list):
Block all requests from a single user agent
Block all requests with a specific HTTP method and URL pattern
Block all requests without a specific cookie
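For readers who want to recreate those three rules, here are rough sketches in Cloudflare's expression language. The user agent string, path, and cookie name are hypothetical placeholders (the real values were in the redacted screenshots); the # lines are annotations only, and just the expressions go into the rule editor:

```
# 1. Block all requests from a single user agent
#    (this Chrome version string is a made-up placeholder)
(http.user_agent contains "Chrome/79.0.3945.79")

# 2. Block all requests with a specific HTTP method and URL pattern
(http.request.method eq "POST" and http.request.uri.path eq "/")

# 3. Block all requests without a specific cookie
#    ("session" is a placeholder for the cookie your site actually sets)
(not http.cookie contains "session")
```

The action for all three is Block.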
As you can see, this isn't rocket science, and technically I could do this on almost any other CDN. I deleted some sensitive information from those screenshots.
And now the financial side... launching a similar attack probably costs the attackers $20-$50, or maybe $100. There are a few DDoS-as-a-service offerings out there, and that's a relatively small expense to take down your competitors before the holidays. CF comes with a few pricing plans, but this customer is on the "forever free" plan. BUT the BO was prepared to purchase a paid CF plan at ANY time if things got worse.
And that was almost the full story. If you have any questions, don't hesitate to ask me.
So great to see technical SEO stepping in for DevOps when needed, but I would expect nothing less from you, Peter. This is just one drop in the ocean of next-level work that you do!
I've seen a question around this type of attack impacting ranking - I'd say the question should be about it impacting discoverability and indexing, which can, in turn, impact ranking, traffic, conversion and so on.
From experience, a day of 503 status codes is fine; two days might be pushing it though. 429s are also OK to limit crawls when needed but, again, I would not recommend them for long periods.
Any other 5xx status code, though, can get your pages dropped from the index in a matter of days. Search engines will conclude that the server issue is not temporary and drop your pages as they get crawled.
Can't remember exactly, but I think the default status code in Cloudflare is not 503 but 502 when dealing with such attacks, so that's something to bear in mind if you're ever dealing with a DDoS.
Excellent share. As far as the relation to SEO goes, I'd be curious to know if there were any ranking decreases as a result. Although I don't expect the brief period of server issues to have had an impact.
hey @peter
interesting insight!
As you have seen, blocking user agents is tedious and mostly useless: script kiddies can simply rotate user agents, taking their lists from the endless web stats available online.
In my experience it has been much more useful to "soft block" ASNUM codes.
For the occasional reader: an ASNUM code identifies a whole provider, even if that provider is spread over dozens of countries.
For example, Amazon is AS14618: every IP owned by Amazon is classified under that label, and it can be used as a source for Cloudflare WAF rules.
Now if we "block" that provider, every server on Amazon would not be able to access our domain.
Why "soft blocking" ?
I tend to use a "managed challenge" method because some user might use that provider as a proxy/vpn: if the user is employing a normal web client (eg: Chrome, Brave, Arc Browser) it will be allowed or in the worst case cloudflare will show a captcha.
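As a rough sketch, assuming Cloudflare's standard custom-rule fields (the AS numbers below are Amazon's; the # lines are annotations only):

```
# Soft-block a whole provider by AS number.
# Action: Managed Challenge - real humans on a VPN/proxy from this
# network can still pass, while plain scripts get stopped.
(ip.geoip.asnum eq 14618)

# Several providers can be challenged with a single rule:
(ip.geoip.asnum in {14618 16509})
```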
How many of the users on Amazon AWS are bots?
Let's ask Cloudflare for an opinion: https://radar.cloudflare.com/as14618
Interesting, but how many ASNUMs can I block?
As many as you need.
Keep in mind that once you have blocked most of the big names and some "scummy providers", you will have a LOT fewer issues.
It becomes much more expensive, in both money and time, to launch a DDoS attack!
Pay attention to Google!
Don't block Googlebot, otherwise your SEO rankings will drop: your domain will return a 403 status on every page. (A sketch for excluding verified bots is below.)
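One way to stay safe here, assuming Cloudflare's `cf.client.bot` field (true for verified crawlers such as Googlebot), is to carve verified bots out of the ASN challenge:

```
# Challenge the provider's traffic, but never verified search crawlers
# (Googlebot, Bingbot, etc., as identified by Cloudflare)
(ip.geoip.asnum eq 14618 and not cf.client.bot)
```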
How do I get the ASNUM of the offenders?
And that's it :)
@Samuel Izu have a look
Hello! Good share.
Very insightful, and thanks for sharing. I started noticing some unusual traffic spikes on my websites, but wasn't really sure what was going on.
How can you implement this on Bunny CDN? I'd appreciate a few pointers.
Many thanks
Edit.
I read the initial post on my mobile so I could not really see the images well.
As a follow-up to my initial post: while looking at the logs from my CDN, I noticed this particular user agent and figured something wasn't right, but I wasn't sure. (See attached image.)
Now, seeing something similar has cleared my doubts, because I have been having unusually high daily CPU usage that couldn't be accounted for.