Attacked by an Amazon EC2-hosted botnet.

One of my servers was struck by a password brute-forcing attack from hundreds of IPs.  This was initially mitigated by adding a new fail2ban rule, which is my normal practice.  The attack carried on over the weekend, but I was happy to see server load back to normal, and by around 6pm on Sunday I wasn’t seeing any more entries in the access_ssl_log for the targeted URL.   However, at around 10pm on Sunday night a huge batch of new IPs was being added to the attack at an extremely fast rate.  Something had changed…

This is a simplified story of what happened and what I did.  I am leaving out lots of minor details and steps that don’t change the outcome.

In this new batch of IPs I noticed that quite a lot of them started with 3.x.x.x, so I ran a whois on one.  This showed me that Amazon owned the entire bottom half of that range.  That got me curious, and a few further whois runs on other IP addresses showed that in each case they were Amazon EC2 instances.  I wanted to see if this was across the board.

To save myself having to turn on reverse lookups in my nginx/Apache logs, I ran this command to get all the fail2ban-blocked hostnames.

iptables -L --line-numbers

Browsing through the more than 7,000 hostnames in that output showed me that the vast majority were EC2 instances.  A little bit of coding later I had compiled a log file for Amazon and reported it.
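
A rough sketch of the kind of thing I mean is below.  The chain name (f2b-admin) and the filenames are placeholders for whatever your fail2ban jail actually creates, not my exact setup.

# List the sources blocked in the fail2ban chain (numeric, no reverse lookup)
iptables -L f2b-admin -n | awk '/DROP|REJECT/ {print $4}' > blocked-ips.txt

# Reverse-resolve each one and keep those that come back as EC2 hostnames
while read -r ip; do
    host "$ip" | grep -q 'amazonaws\.com' && echo "$ip"
done < blocked-ips.txt > ec2-attackers.txt

# How many of the blocked hosts are EC2
wc -l ec2-attackers.txt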

At this point there were close to 8,000 IP addresses hitting my server at a rate of 15K requests per minute and rising.  The server load was climbing, as was memory usage.

I needed to do something quickly.  In this case I renamed the folder that was being attacked from /admin/ to /admin213/ and modified the application in question.  I then added a simple config to nginx to deny all requests to the original /admin/ URL.

location ^~ /admin/ {
    deny all;
}

Please note the above was added in a vhost container, not a global one.  This meant that any host accessing this URL would get an immediate 403, with no back-end processing or php-fpm load required.  I also cut down the fail2ban trigger from 3 attempts to 1.
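
For reference, the trigger change is just the maxretry setting on the relevant jail.  A minimal sketch is below; the jail name, filter and log path are placeholders, not my actual configuration.

# /etc/fail2ban/jail.local  (jail name and paths are examples)
[admin-bruteforce]
enabled  = true
filter   = admin-bruteforce
logpath  = /var/www/vhosts/example.com/logs/access_ssl_log
maxretry = 1      # was 3; a single failed POST to /admin/ now triggers a ban
findtime = 3600   # scan the last hour of the log
bantime  = 86400

Reload fail2ban afterwards (fail2ban-client reload) for the change to take effect.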

Once this was done the server load and memory usage dropped down to tolerable levels and page loads returned to normal.

In the meantime, Amazon kept responding to my emails with requests for the same information I had submitted in my first email to them.  This, it turns out, was my fault.  They do not accept text file attachments; all information has to be in the body of the email.  However, this is where things do become Amazon’s fault.

They obviously hadn’t read the ticket, nor looked seriously at the log files provided, and gave me this glorious response:

Thank you for your response.

I can confirm that we have received your report and we are currently investigating.

We've determined that an Amazon EC2 instance was running at the IP address you provided in your abuse report. We have reached out to our customer to determine the nature and cause of this activity or content in your report.

We will investigate your complaint to determine what additional actions, if any, need to be taken in this case. We may notify you during our investigation if our customer requires more information from you to complete their troubleshooting of the issue. Our customer may reply stating that the activity or content is expected and instructions on how to prevent the activity or manually remove the content, as well.

They didn’t appear to grasp that this was not a single instance attacking me.  So I generated a list of all the EC2 hostnames that were currently attacking me and let them know that they either had 7,356 compromised EC2 instances (at the time of the email) or a client who was using 7,356 of their own EC2 instances to launch attacks on others.

It has been radio silence since I sent that email some five hours ago.  I am not prepared to keep nursing the server and monitoring to ensure fail2ban is operating correctly, so I needed a quick and easy fix.  A bit of googling to see if there is a list anywhere of the full EC2 IP address ranges gave me this page:

https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html

Thankfully Amazon do provide a list of all their IP addresses in JSON format.  Now all I needed to do was find out which ones were EC2, as I didn’t want to block any others, such as email servers.

A bit of research later, and this is the command that does the job for you.

curl https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.service=="EC2") | .ip_prefix' >amazon-aws-ec2-ip-list.txt

I now had a file with all the IP ranges listed in a useful format.  Fifteen seconds of editing in Sublime Text and I had a script to create the rules in my firewall table in this format.

iptables -I amazon-ec2 -s 3.5.140.0/22 -j DROP -w
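
If you would rather skip the manual edit, something along these lines produces the same script from the prefix list generated above (the output filename is just an example):

# Wrap each CIDR prefix in an iptables insert command
sed 's|^|iptables -I amazon-ec2 -s |; s|$| -j DROP -w|' amazon-aws-ec2-ip-list.txt > block-ec2.sh
chmod +x block-ec2.sh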

Now it is just the usual task of creating the chain, linking it, and then inserting all the rules.

For those not familiar with iptables, this is how I do it.

Create the chain and link it into the INPUT chain

iptables -N amazon-ec2
iptables -I INPUT -j amazon-ec2

Append a RETURN for stats purposes; this adds an entry right at the end of the chain, which allows you to see at a glance the total packets and bytes that fall through the chain.

iptables -A amazon-ec2 -j RETURN

Then add each IP range using this format.  Please note I use insert (-I) here to keep the previous RETURN at the bottom.

iptables -I amazon-ec2 -s <IP> -j DROP -w
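
Tying the steps together, a minimal script that does the whole job from the EC2 prefix list generated earlier would look something like this:

#!/bin/bash
# Create the chain and hook it into INPUT
iptables -N amazon-ec2
iptables -I INPUT -j amazon-ec2

# Trailing RETURN so the chain totals show up in iptables -L amazon-ec2 -v
iptables -A amazon-ec2 -j RETURN

# Insert a DROP for every EC2 prefix, keeping the RETURN at the bottom
while read -r prefix; do
    iptables -I amazon-ec2 -s "$prefix" -j DROP -w
done < amazon-aws-ec2-ip-list.txt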

As far as I am concerned, this is the problem mitigated but not resolved.  I will be adding this to many of the servers I manage, just in case.  But Amazon really does need to take action, as this is not an insignificant botnet.  I will continue to chase them up until they handle this to my satisfaction and the attacks stop.

Final thoughts…

I went back over historical log files and extracted only the hits for this domain on the URL /admin/ with a POST.   Looking through the logs, this attack had been building up for around a week, using IP addresses that followed no pattern.  It was on Friday that I finally paid attention, thanks to a notification from my monitoring system about server load heading into warning territory.  I set the warning level fairly low to give me early warning of an impending or potential issue.  In this case it allowed me to get the fail2ban rule in place fairly quickly.
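
The extraction itself was nothing clever; something along these lines against the current and rotated logs does it (the log path is an example, and it assumes the usual combined log format):

# Every POST to /admin/ from the plain and gzipped access logs, IP and timestamp only
zgrep -h 'POST /admin/' /var/www/vhosts/example.com/logs/access_ssl_log* | awk '{print $1, $4}' > admin-post-history.txt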

It took a while for the fail2ban triggers to take effect, as the attacker was careful to use each IP only once in any given period.  I had to increase the log scan period from 10 minutes to 1 hour to start catching him.  At night, when I knew the client wouldn’t be using the /admin/ URL, I turned the trigger down from 3 to 1, then back up to 3 in the morning.  This accelerated the fail2ban triggering, and over the next 24 hours or so the attack was mitigated pretty much 100%.
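
If you would rather not edit config files twice a day, fail2ban-client can make the same changes at runtime; the jail name below is again just a placeholder:

# Evening: a single attempt inside the window is enough to ban
fail2ban-client set admin-bruteforce maxretry 1

# Widen the scan window from 10 minutes to an hour
fail2ban-client set admin-bruteforce findtime 3600

# Morning: back to the normal threshold
fail2ban-client set admin-bruteforce maxretry 3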

I think this annoyed the attacker, and he escalated to the Amazon EC2 attack, which was a couple of orders of magnitude bigger and immediately caused the server to run low on resources and grind almost to a halt.  Now either the attacker has an AWS account (or many), probably paid for with a stolen card, or he has compromised 8k+ instances.  Either way, he has put a lot of resources into this relative to the potential reward.  I have no idea why he targeted this client of mine, and at no point was I concerned about a compromise, as all passwords on my servers are a minimum of 12 characters, generated randomly and never re-used.  So the pretty big inconvenience to me, at little chance of any reward for him, was a little baffling to say the least.

I am only continuing to pester Amazon now because I want this attacker to be inconvenienced by the loss of resources, and to potentially protect other server owners who may not be as experienced as I am.

Well that is the story.  If anything else happens I will either update this post or make a fresh post if it is significant.
