Question

When a dataset is saved in the software I use, an HTTP request is triggered (fire-and-forget). Information such as the timestamp, the user, and the changes made is passed in the GET parameters.

Now I need to write an application which is able to take thousands of requests per minute and write them into a MySQL database.

I think an Apache web server with PHP is too slow to handle that.

What do I use instead? What about a Java web server? Can someone recommend a solution for this?


Solution

The first three things you need to do are profile, profile, and, of course, profile.

You are trying to fix a problem that you don't know you actually have. While it's good to try to anticipate problems, it takes a lot of experience with databases to be able to really 'sense' them accurately.

Set up a test script - it sounds like it should be only a few lines of code - and insert a few million entries into your log table the way your application would. See how long it takes. If the timing is acceptable... you don't have to do anything.
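
A minimal sketch of such a test script in Java (one of the options mentioned in the question). The connection URL, credentials, and the change_log table layout are assumptions, so adjust them to your own schema; it simply times a large number of single-row inserts, one per loop iteration, the way the application would send them.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LogInsertBenchmark {
    public static void main(String[] args) throws Exception {
        // Requires MySQL Connector/J on the classpath.
        // URL, credentials and table layout are placeholders - adjust to your setup.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "testuser", "testpass");

        // One row per iteration, autocommit on, to mimic one insert per incoming request.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO change_log (logged_at, username, changes) VALUES (NOW(), ?, ?)");

        int rows = 1_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < rows; i++) {
            ps.setString(1, "user" + (i % 500));
            ps.setString(2, "field42=oldValue->newValue");
            ps.executeUpdate();
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("%d inserts in %d ms (~%.0f inserts/minute)%n",
                rows, elapsedMs, rows / (elapsedMs / 60000.0));

        ps.close();
        conn.close();
    }
}
```

If the reported inserts/minute is comfortably above the thousands per minute you expect, the plain Apache + PHP + MySQL setup is probably fine as-is.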

If the timing is too slow, now start optimizing. Things to look at:

  • Use MyISAM rather than InnoDB as the table's storage engine.
  • Make sure you don't have a lot of indexes on the table; every extra index slows down writes.
  • Or perhaps try a MEMORY table.
  • Try INSERT DELAYED, which queues the rows and returns to your application immediately instead of blocking until the table is writable (see the sketch after this list).
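
A rough sketch of how those table-level options fit together, again in Java with the SQL as plain strings. The table name, columns, and connection details are placeholders; the SQL is what matters.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LogTableVariants {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "testuser", "testpass");
        Statement st = conn.createStatement();

        // MyISAM instead of InnoDB: no transactional overhead, often faster for append-only logs.
        // Swap MyISAM for MEMORY to test an in-memory table instead.
        st.execute("CREATE TABLE IF NOT EXISTS change_log ("
                + " id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,"
                + " logged_at DATETIME NOT NULL,"
                + " username VARCHAR(64) NOT NULL,"
                + " changes TEXT"
                + ") ENGINE=MyISAM");
        // Only the primary key index is defined: each secondary index adds work on every write.

        // INSERT DELAYED queues the row server-side and returns immediately.
        // It only applies to MyISAM/MEMORY-style engines, and the keyword is
        // ignored (treated as a normal INSERT) on newer MySQL versions.
        st.execute("INSERT DELAYED INTO change_log (logged_at, username, changes)"
                + " VALUES (NOW(), 'someuser', 'field42=oldValue->newValue')");

        st.close();
        conn.close();
    }
}
```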

If you find that it is still too slow, then it's time to start considering other options.

Licensed under: CC-BY-SA with attribution