Question

I have a server running on AWS: it's load-balanced to some EC2 instances that run Node.js servers. The security groups are set up so that only the LB can hit them on the HTTP port.

I was tailing some log files and saw a bunch of requests (50 or so at a time, seemingly somewhat periodic) to /manager/html. AFAIK this looks like an attempt to exploit a vulnerability in my app or gain access to a database manager of some sort.

My questions are:

  • Am I being targeted or are these random crawlers? This is on a service that is not even launched yet, so it's definitely obscure. There's been a bit of press about the service, so it's feasible that a person would be aware of our domain, but this subdomain has not been made public.

  • Are there common conventions for not allowing these types of requests to hit my instances? Preferably, I'd be able to configure some sort of frequency or blacklist in my LB, and never have these types of requests hit an instance. Not sure how to detect malicious vs normal traffic though.

  • Should I be running a local proxy on my EC2 instances to avoid this type of thing? Are there any existing Node.js solutions that can just refuse the requests at the app level? Is that a bad idea?

  • Bonus: If I were to log the origin of these requests, would that information be useful? Should I try to go rogue and hunt down the origin and send some hurt their way? Should I beeswithmachineguns the originating IP if it's a single origin? (I realize this is silly, but may inspire some fun answers).

Right now these requests are not affecting me: they get 401s or 404s, and they have virtually no impact on other clients. But if this were to go up in scale, what are my options?


Solution 2

We faced similar issues in the past and took some preventive measures against such attacks. They can't guarantee stopping the attacks completely, but they did reduce them significantly:

  1. http://uksysadmin.wordpress.com/2011/03/21/protecting-ssh-against-brute-force-attacks/
  2. http://www.prolexic.com/knowledge-center-white-paper-ddos-mitigation-incident-response-plan-playbook.html
  3. https://serverfault.com/questions/340307/how-can-i-prevent-a-ddos-attack-on-amazon-ec2

Hope this helps.

OTHER TIPS

A lot of random automated requests get made; even though I host a Node.js server, they try to hit CGI scripts and phpMyAdmin/WordPress configs. You can use basic rate-limiting techniques such as redis-throttle (https://npmjs.org/package/redis-throttle) on your Node.js server, plus fail2ban for SSH, to protect yourself from simple DoS attacks.
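
For illustration, here is a minimal sketch of the fixed-window rate-limiting idea as Express middleware backed by Redis. It assumes Express and the node-redis v4 client rather than redis-throttle itself, and the window size and request cap are made-up numbers you would tune:

```
// Fixed-window per-IP rate limiter (sketch; assumes Express + node-redis v4).
const express = require('express');
const { createClient } = require('redis');

const app = express();
app.set('trust proxy', true); // behind an ELB, so req.ip comes from X-Forwarded-For

const redis = createClient();
redis.on('error', (err) => console.error('Redis error:', err));
redis.connect(); // node-redis v4 requires an explicit connect

const WINDOW_SECONDS = 60; // hypothetical window
const MAX_REQUESTS = 100;  // hypothetical per-IP cap per window

app.use(async (req, res, next) => {
  const key = `throttle:${req.ip}`;
  const count = await redis.incr(key);       // count this request
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS); // first hit starts the window
  }
  if (count > MAX_REQUESTS) {
    return res.status(429).send('Too Many Requests');
  }
  next();
});

app.listen(3000);
```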

Automated requests cannot do harm unless Node.js or the libraries you use have well-known flaws, so you should always be doing input validation and security checks throughout your server. You should not be worried if you have coded defensively. (Don't dump errors to users, sanitize input, etc.)
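
As a concrete example of "don't dump errors to users", a minimal Express error handler (a sketch, assuming Express) that keeps stack traces in your logs and out of responses:

```
const express = require('express');
const app = express();

// Final error handler: log full details server-side, send a generic reply.
app.use((err, req, res, next) => {
  console.error(err.stack);                      // stack trace stays in your logs
  res.status(500).send('Internal Server Error'); // nothing internal leaks out
});
```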

You can log your 401s and 404s for a week, and then filter the most common ones via your LB. Hunting down the IPs and their sources will not help you unless you are a Hollywood producer or fighting terrorists; your problem is not that important, and most importantly, these requests mostly come from botnets.
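
One way to collect that data is a small counting middleware (a sketch, assuming Express; note it counts per process, so with several instances behind the LB you would aggregate logs centrally instead):

```
const express = require('express');
const app = express();

// Tally 401/404 responses per path so the most-probed URIs surface over time.
const probeCounts = new Map();

app.use((req, res, next) => {
  res.on('finish', () => {
    if (res.statusCode === 401 || res.statusCode === 404) {
      const key = `${res.statusCode} ${req.path}`;
      probeCounts.set(key, (probeCounts.get(key) || 0) + 1);
    }
  });
  next();
});

// Print the tallies hourly; the top entries are candidates for an LB blacklist.
setInterval(() => console.table([...probeCounts.entries()]), 60 * 60 * 1000);
```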

Consider running a proxy cache like Varnish in front of your app servers. Use its VCL to allow access only to the URIs you define and reject everything else, allow GET but block PUT and POST, etc. It can also filter the HTTP response headers you return, which would let you mask your Node.js server as Apache, for example. There are many tutorials on the net for implementing this.
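
If you would rather not run a separate Varnish box, a rough Node-level equivalent of that VCL logic might look like this (a sketch, assuming Express; the path and method whitelists are hypothetical):

```
const express = require('express');
const app = express();
app.disable('x-powered-by'); // Express otherwise advertises itself in a header

const ALLOWED_PATHS = new Set(['/', '/health']);  // hypothetical URI whitelist
const ALLOWED_METHODS = new Set(['GET', 'HEAD']); // allow reads, block PUT/POST

app.use((req, res, next) => {
  res.setHeader('Server', 'Apache'); // masquerade as Apache, like the VCL trick
  if (!ALLOWED_METHODS.has(req.method)) return res.status(405).end();
  if (!ALLOWED_PATHS.has(req.path)) return res.status(404).end();
  next();
});
```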

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow