Question

I'm in a rather peculiar situation right now. To make a long story short, I'm part of a (real life) volunteer organization of about 2000 members. Our current website was built and maintained by a member who is no longer part of the organization (he quit). Unfortunately, he was the only one who actually had access to the server, and he hasn't been cooperative in handing over the reins to someone else after he left. As a result, I and a small team of people have been working on creating a new website for ourselves from scratch. The data on the original website would be awesome to have for the new one, so without direct access to the database we have been screen-scraping what we need.

Which brings me to my current conundrum. The screen-scraping script I was using was really slow, so I had the brilliant (not) idea of parallelizing it. I assumed the bottleneck was my slow internet connection, so I foolishly decided to run 250 threads at once. After I tried that, the web server mysteriously went down and hasn't come back up since (it's been about 30 minutes now).

I'm not any kind of hacker or security expert, but I'm pretty sure I just accidentally caused a denial-of-service attack on the server. Which brings me to my question: assuming the owner of the website does nothing to help us, will the server come back to life of its own accord? (It's a Django site hosted on Linode, if that matters.) How do websites typically recover from DoS attacks? Have I potentially misdiagnosed what's going on, and could there be an alternative explanation? Or is the website lost forever?

Edit: All 250 of the requests were simple HTTP requests going to pages within the Django admin panel, if that changes anything.


Solution

More than likely the system is not down for good, unless the former maintainer got annoyed and took it offline, or the hosting provider disabled it because of the traffic load. There are a number of things to consider, but 250 connections isn't that much load, even for a shared hosting account, unless you were flooding the server with requests.

Depending on what technology is used, there are a number of things that "could" have happened.

  1. You could simply have hit throttling limits on the web server side (request queuing, etc.) that require the application to restart. That restart could happen automatically after a period of time, or it might need intervention from the hosting provider.
  2. You could have overloaded the application so that it used too much memory and was forcefully shut down. Some hosting providers will do this, but typically only for small windows of time, and will allow the application to start back up. (Give it an hour or so.)
  3. You could have pushed it over its monthly bandwidth limit; in that case, it could be down until the next billing cycle...

Without knowing the hosting provider or environment these are just guesses.

I would strongly recommend turning off your scraper, though!
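
If and when the site does come back up, a single-threaded, rate-limited scraper is far less likely to knock it over again. Here is a minimal sketch; the URL pattern, page count, delay, and session-cookie handling are placeholders for whatever your existing script actually does:

```python
import time
import requests

# Placeholder list of admin pages to pull once the site is reachable again.
URLS = [
    "https://example.org/admin/members/member/?p=%d" % page
    for page in range(1, 50)
]

session = requests.Session()
# If the admin pages require a login, reuse an authenticated session cookie here, e.g.:
# session.cookies.set("sessionid", "<your-session-cookie>")

for url in URLS:
    try:
        response = session.get(url, timeout=30)
        response.raise_for_status()
    except requests.RequestException as exc:
        print("Failed to fetch %s: %s" % (url, exc))
        continue

    # ... parse response.text with your existing scraping code ...

    # One request at a time, with a pause between requests,
    # keeps the load on the server trivial.
    time.sleep(2)
```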

OTHER TIPS

You should stop your screen-scraping software if you have not already.

Depending on what part of the system is down (the database, the server, the network, or all of them), there is a chance it will recover by itself once the load comes back down.

If your application cannot sustain 250 simultaneous connections, you will want to investigate why. The culprit is usually database load (missing indexes, unoptimized queries).
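
For example (purely illustrative; the `Member` model and field names are made up), adding `db_index` or a `Meta` index to a column that is frequently filtered or sorted tells Django to create a database index for it:

```python
from django.db import models

class Member(models.Model):
    # db_index=True creates a single-column index on a frequently filtered field.
    last_name = models.CharField(max_length=100, db_index=True)
    joined_on = models.DateField()

    class Meta:
        # A composite index helps queries that filter on both columns together.
        indexes = [
            models.Index(fields=["last_name", "joined_on"]),
        ]
```

After a change like this you would run `makemigrations` and `migrate` to actually create the index.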

Linode could also have restrictions in place to limit how much bandwidth can be used within a certain period of time. You should probably contact them (or whoever is in charge).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow