Question

Recently one of my servers has been targeted by a number of DoS attacks (thousands of requests/min) from some Chinese IPs (these IPs aren't always the same).

So at the start of my framework I added a small function that blocks an IP if it has made too many requests.

function firewall() {
  $whitelist = array('someips');

  $ip = $_SERVER['REMOTE_ADDR'];

  if (in_array($ip,$whitelist))
    return null;

  if (search($ip,$pathToFileIpBanned))
    die('Your ip did too many requests');

  appendToFile($ip,$pathTofileIpLogger); //< When the file reaches 13000 bytes truncate it

  if (search($ip,$pathTofileIpLogger) > $maxRequestsAllowed)
     appendToFile($ip,$pathToFileIpBanned);   
}
  • Basically, the script checks whether the current IP is found in the file 'ipBlocked'; if it's found, it dies.
  • If it's not found, it adds the current IP to the logger file 'ipLogger'.
  • It then counts the occurrences of the IP in the ipLogger file; if the count is > $max, it blocks the IP by adding it to the file ipBlocked.
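The `search()` and `appendToFile()` helpers aren't shown in the question; here is a minimal sketch of how they might look (the 13000-byte truncation threshold comes from the code comment above, everything else is an assumption):

```php
<?php
// Hypothetical sketch of the helpers used by firewall().
// Counts how many times $ip occurs (as a whole line) in $file.
function search($ip, $file) {
    if (!is_readable($file)) {
        return 0;
    }
    $lines = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    return count(array_keys($lines, $ip, true));
}

// Appends $ip to $file, truncating first once the log grows too large.
function appendToFile($ip, $file) {
    clearstatcache(false, $file);
    if (is_file($file) && filesize($file) > 13000) {  // threshold from the question
        file_put_contents($file, '', LOCK_EX);        // truncate
    }
    file_put_contents($file, $ip . "\n", FILE_APPEND | LOCK_EX);
}
```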

At the moment it is working: it has banned some Chinese/Taiwanese IPs.

The bottleneck of this script is the search function, which has to count the occurrences of a string (the IP) in a file. For this reason I keep the file small (the ipLogger file is truncated as soon as it reaches 600-700 logged IPs).

Of course, to add IPs to the file without having to worry about race conditions, I do it like this:

file_put_contents($file,$ip."\n",FILE_APPEND | LOCK_EX);

The only problem I'm experiencing is with people behind NAT: they all share the same IP, but their requests shouldn't be blocked.

Was it helpful?

Solution

Some very basic file/serialize code, you could use as an example:

<?php
$ip = $_SERVER['REMOTE_ADDR'];

$ips = @unserialize(file_get_contents('%path/to/your/ipLoggerFile%'));
if (!is_array($ips)) {
  $ips = array();
}

if (!isset($ips[$ip])) {
  $ips[$ip] = 0;
}

$ips[$ip] += 1;
file_put_contents('%path/to/your/ipLoggerFile%', serialize($ips));

if ($ips[$ip] > $maxRequestsAllowed) {
  // return false or something
}

Of course, you'll have to integrate this in some way into your firewall function.
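One possible way to wire the answer's serialize approach into the question's function (the path, threshold, and the `die()` message are placeholders, not part of the answer):

```php
<?php
// Sketch: the answer's counter integrated into a firewall-style function.
function firewall($loggerFile, $maxRequestsAllowed, array $whitelist = array()) {
    $ip = $_SERVER['REMOTE_ADDR'];

    if (in_array($ip, $whitelist, true)) {
        return;
    }

    // Load the per-IP counters; fall back to an empty array on first run.
    $ips = @unserialize(file_get_contents($loggerFile));
    if (!is_array($ips)) {
        $ips = array();
    }

    if (!isset($ips[$ip])) {
        $ips[$ip] = 0;
    }
    $ips[$ip] += 1;

    file_put_contents($loggerFile, serialize($ips), LOCK_EX);

    if ($ips[$ip] > $maxRequestsAllowed) {
        die('Your ip did too many requests');
    }
}
```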

Other tips

While this stops the requests before they do anything heavier, like DB reads and the like, you might want to consider taking this down a level to the web server, or even further to a software/hardware firewall.

The lower levels will deal with this far more gracefully and with a lot less overhead. Remember: by bringing up PHP, they're still consuming one of your workers for a while.

Here are my few notes, hope you find them useful.

In my opinion, the firewall function does too much and isn't very specific in its name. It handles saving the IP/visit counts, ending the script, and doing nothing, all at once. I'd expect the wall to be set aflame when calling this function.

I would go for a more object-oriented approach, where the firewall isn't named firewall, but something like blacklist.

$oBlackList = new BlackList();

This object would be responsible for just the blacklist itself, and nothing more. It would be able to say whether an IP address is on the blacklist, and thus implement a function like:

$oBlackList = new BlackList();
if ($oBlackList->isListed($sIpAddress)) {
    // Do something, burn the intruder!
}

This way, you can be creative in how you'd like to handle it, and you're not limited to the body of your function. You could expand the object with a function to add an address to the list: $oBlackList->addToList($sIpAddress); perhaps.
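A minimal version of such a class could look like this (the file-backed storage and member names are illustrative assumptions, matching the `isListed`/`addToList` calls above):

```php
<?php
// Illustrative BlackList sketch, backed by a plain file
// with one banned IP per line.
class BlackList {
    private $sFile;
    private $aIps;

    public function __construct($sFile) {
        $this->sFile = $sFile;
        $aLines = is_readable($sFile)
            ? file($sFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
            : array();
        // Index by IP for O(1) lookups in isListed().
        $this->aIps = array_fill_keys($aLines, true);
    }

    public function isListed($sIpAddress) {
        return isset($this->aIps[$sIpAddress]);
    }

    public function addToList($sIpAddress) {
        if (!$this->isListed($sIpAddress)) {
            $this->aIps[$sIpAddress] = true;
            file_put_contents($this->sFile, $sIpAddress . "\n", FILE_APPEND | LOCK_EX);
        }
    }
}
```

Swapping the file for a database later only means changing the constructor and addToList, without touching the callers.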

This way, the handling of the visit counts, and their storage, isn't limited to your firewall body. You could implement database storage or file storage (as you use now) and switch at any time without invalidating your blacklist.

Anyway, just rambling!

You should create a file for every blocked IP. That way you can block the visitor through .htaccess as follows:

# redirect if ip has been banned
ErrorDocument 403 /
RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteCond /usr/www/firewall/%{REMOTE_ADDR} -f
RewriteRule . - [F,L]

As you can see, it only allows access to index.php. That lets you do a simple file_exists() in the first line, before heavy DB requests are made, and you can throw in an IP-unlocking captcha to avoid permanently blocking false positives. This gives a better user experience than a simple hardware firewall, which returns no information and has no unlocking mechanism. Of course, you could serve a simple HTML text file (with a PHP file as the form's target) to avoid invoking the PHP parser at all.
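The first-line check described here might be sketched as follows (the firewall directory matches the RewriteCond above; the unlock page name is an assumption):

```php
<?php
// Cheap first-line check for index.php: one stat call, no DB work.
function isBlocked($sFirewallDir, $sIp) {
    // The banning code creates one empty file per blocked IP
    // (e.g. /usr/www/firewall/1.2.3.4); file_exists() is all we need.
    return file_exists($sFirewallDir . $sIp);
}

// At the very top of index.php:
//   if (isBlocked('/usr/www/firewall/', $_SERVER['REMOTE_ADDR'])) {
//       readfile('blocked.html');  // assumed captcha/unlock page
//       exit;
//   }
```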

Regarding DoS, I don't think you should rely only on IP addresses, as that would result in many false positives. Alternatively, have a second level for whitelisting proxy IPs, for example if an IP was unblocked multiple times. Some ideas to block unwanted requests:

  1. Is it a human or a crawler? (HTTP_USER_AGENT)
  2. If a crawler, does it respect robots.txt?
  3. If a human, is he accessing links that aren't visited by humans (like links made invisible through CSS, moved out of the visible range, hidden forms, ...)?
  4. If a crawler, what about a whitelist?
  5. If a human, is he opening links like a human would? (Example: in the footer of Stack Overflow you will find tour, help, blog, chat, data, legal, privacy policy, work here, advertising info, mobile, contact us, feedback. No human will open 5 or more of them, I think, but a bad crawler could, so block its IP.)
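Point 3 is essentially a honeypot. A sketch of how it could work (the trap URL, directory name, and helper are made up for illustration):

```php
<?php
// Honeypot for point 3: a link humans never see (hidden via CSS)
// but crawlers happily follow. In the page template:
//   <a href="/trap.php" style="display:none">do not follow</a>
// trap.php then calls banIp() on whoever requested it.
function banIp($sFirewallDir, $sIp) {
    if (!is_dir($sFirewallDir)) {
        mkdir($sFirewallDir, 0755, true);
    }
    // Creates the per-IP file that the .htaccess rule above checks for.
    return touch($sFirewallDir . $sIp);
}

// In trap.php:
//   banIp('firewall/', $_SERVER['REMOTE_ADDR']);
```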

If you really want to rely on IPs/min, I suggest not using LOCK_EX with only a single file, as it will become a bottleneck (as long as the lock exists, all other requests have to wait). You need a fallback file for as long as a lock exists. Example:

$i = 0;
$ip_dir = 'ipcheck/';
if (!file_exists($ip_dir) || !is_writeable($ip_dir)) {
    exit('ip cache not writeable!');
}
$ip_file = $ip_dir . $_SERVER['REMOTE_ADDR'];
// try each fallback file in turn until we get a non-blocking exclusive lock
while (($fp = @fopen($ip_file . '_' . $i, 'a'))
        && !flock($fp, LOCK_EX | LOCK_NB, $wouldblock)
        && $wouldblock) {
    fclose($fp);
    $i++;
}
// by now we have an exclusive and race-condition-safe lock
fwrite($fp, time() . PHP_EOL);
fclose($fp);

This will result in a file called 12.34.56.78_0, and if it hits a bottleneck it will create a new file called 12.34.56.78_1. Finally, you only need to merge those files (respecting the locks!) and check for too many requests in a given time period.
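Merging the fallback files and counting recent requests might look like this (the function name and the 60-second window are assumptions):

```php
<?php
// Merge the 12.34.56.78_0, 12.34.56.78_1, ... fallback files and
// count how many logged timestamps fall into the last $iWindow seconds.
function countRecentRequests($sIpDir, $sIp, $iWindow = 60) {
    $iNow = time();
    $iCount = 0;
    foreach (glob($sIpDir . $sIp . '_*') as $sFile) {
        $fp = @fopen($sFile, 'r');
        if (!$fp) {
            continue;
        }
        flock($fp, LOCK_SH);  // respect the writers' exclusive locks
        while (($sLine = fgets($fp)) !== false) {
            if ((int)$sLine >= $iNow - $iWindow) {
                $iCount++;
            }
        }
        flock($fp, LOCK_UN);
        fclose($fp);
    }
    return $iCount;
}
```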

But now you face the next problem: you need to start a check for every request, which is not really a good idea. A simple solution would be to use mt_rand(0, 10) == 0 before starting a check. Another solution is to check filesize(), so we don't need to open the file at all; this is possible because the file size grows with every request. Or check filemtime(), to see whether the last file change happened in the same second or only one second ago. (Both functions are equally fast.)
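Both cheap guards from this paragraph, as small helpers (the function names and the byte threshold are assumptions):

```php
<?php
// Only run the (comparatively expensive) rate check on roughly
// one request out of eleven, as suggested above.
function shouldRunCheck() {
    return mt_rand(0, 10) === 0;
}

// Alternative: look at file metadata instead of opening the file.
// filesize() grows with every logged request; filemtime() tells us
// whether the last hit was at most one second ago.
function looksBusy($sIpFile, $iMaxBytes) {
    clearstatcache(false, $sIpFile);
    return is_file($sIpFile)
        && (filesize($sIpFile) > $iMaxBytes
            || filemtime($sIpFile) + 1 >= time());
}
```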

And with that I come to my final suggestion: use only touch() and filemtime():

$ip_dir = 'ipcheck/';
$ip_file = $ip_dir . $_SERVER['REMOTE_ADDR'];
// check whether the last request was at most one second ago
if (file_exists($ip_file) && filemtime($ip_file) + 1 >= time()) {
    $req_dir = $ip_dir . $_SERVER['REMOTE_ADDR'] . '/';
    if (!is_dir($req_dir)) {
        mkdir($req_dir);
    }
    // record the exact request time inside the per-IP folder
    touch($req_dir . microtime(true));
}
touch($ip_file);

Now you have a folder for every IP that could be a DoS attack, containing the microtime of each request, and if you think it contains too many of those requests you can block the IP using touch('firewall/' . $_SERVER['REMOTE_ADDR']). Of course, you should periodically clean the whole thing up.
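The periodic cleanup could be a tiny cron script like this (directory names follow the examples above; the age limits are assumptions):

```php
<?php
// Cron cleanup sketch: delete per-IP timestamp files and ban files
// older than $iMaxAge seconds, so false positives expire eventually.
function cleanupDir($sDir, $iMaxAge) {
    $iRemoved = 0;
    foreach (glob($sDir . '*') as $sPath) {
        if (is_file($sPath) && filemtime($sPath) < time() - $iMaxAge) {
            unlink($sPath);
            $iRemoved++;
        }
    }
    return $iRemoved;
}

// e.g. cleanupDir('ipcheck/', 3600); cleanupDir('firewall/', 86400);
```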

My experience (in German) using such a firewall has been very good.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow