Question

I am looping through a number of values (1 to 100 for example) and executing a prepared statement inside the loop.

Is there an advantage to using a transaction - committing after the loop ends - compared to direct execution inside the loop?

The values are not dependent on each other, so a transaction is not needed from that point of view.

Solution

If your queries are INSERTs, the page 7.2.19. Speed of INSERT Statements of the MySQL manual gives two interesting pieces of advice, depending on whether you are using a transactional engine or not:

When using a non-transactional engine:

To speed up INSERT operations that are performed with multiple statements for nontransactional tables, lock your tables.

This benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally, there would be as many index buffer flushes as there are INSERT statements. Explicit locking statements are not needed if you can insert all rows with a single INSERT.
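
For illustration, here is a minimal sketch of that locking approach with PDO -- the items table, its val column, and the connection details are all made-up names:

```php
<?php
// Hypothetical connection; adjust the DSN and credentials for your setup.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO items (val) VALUES (?)');

// On a nontransactional engine (e.g. MyISAM), an explicit write lock lets
// MySQL flush the index buffer once instead of after every INSERT.
$pdo->exec('LOCK TABLES items WRITE');
for ($i = 1; $i <= 100; $i++) {
    $stmt->execute([$i]);
}
$pdo->exec('UNLOCK TABLES');
```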

And, with a transactional engine:

To obtain faster insertions for transactional tables, you should use START TRANSACTION and COMMIT instead of LOCK TABLES.

So I am guessing using transactions might be a good idea -- but I suppose that could depend on the load on your server, and whether there are multiple users accessing the same table at the same moment, and all that...
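
For illustration, a minimal sketch of the transactional version with PDO (same made-up table and column names as above):

```php
<?php
// Hypothetical connection; adjust the DSN and credentials for your setup.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO items (val) VALUES (?)');

$pdo->beginTransaction();
try {
    for ($i = 1; $i <= 100; $i++) {
        $stmt->execute([$i]);
    }
    // One commit at the end: the work is flushed to disk once,
    // instead of once per statement in autocommit mode.
    $pdo->commit();
} catch (Exception $e) {
    // Undo the whole batch if any insert fails.
    $pdo->rollBack();
    throw $e;
}
```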

There is more information on the page I linked to, so don't hesitate to read it ;-)


And, if you are doing UPDATE statements:

Another way to get fast updates is to delay updates and then do many updates in a row later. Performing multiple updates together is much quicker than doing one at a time if you lock the table.

So, I'm guessing the same can be said for updates as for inserts.


BTW: to be sure, you can try both solutions, benchmarking them with microtime() on the PHP side, for instance ;-)
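
Something along these lines, for example (a sketch only; runWithoutTransaction() and runWithTransaction() are hypothetical stand-ins for the two variants you want to compare):

```php
<?php
// Hypothetical helpers standing in for the two variants being compared:
// runWithoutTransaction() = plain loop in autocommit mode,
// runWithTransaction()    = the same loop inside one transaction.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$start = microtime(true);
runWithoutTransaction($pdo);
$plain = microtime(true) - $start;

$start = microtime(true);
runWithTransaction($pdo);
$wrapped = microtime(true) - $start;

printf("autocommit: %.3fs, transaction: %.3fs\n", $plain, $wrapped);
```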

OTHER TIPS

For a faster time you could do all the inserts in one shot, or group them together, perhaps 5 or 10 at a time; bear in mind that if one insert in a multi-row statement fails, the entire batch fails with it.

http://www.desilva.biz/mysql/insert.html

A transaction will slow you down, so if you don't need it then don't use it.

A prepared statement would still be a good choice even if you do batch inserts, as you don't have to keep building up the query string each time -- see the sketch below.
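
For illustration, a sketch combining the two ideas -- one prepared multi-row statement, reused for each group of 10 (names are made up, and it assumes the row count divides evenly by the batch size):

```php
<?php
// Hypothetical connection; adjust the DSN and credentials for your setup.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One prepared multi-row INSERT, reused for every batch of 10.
$batchSize    = 10;
$placeholders = implode(', ', array_fill(0, $batchSize, '(?)'));
$stmt = $pdo->prepare("INSERT INTO items (val) VALUES $placeholders");

// Assumes the number of values divides evenly by $batchSize;
// otherwise prepare a smaller statement for the final remainder.
foreach (array_chunk(range(1, 100), $batchSize) as $chunk) {
    $stmt->execute($chunk); // one round trip inserts 10 rows
}
```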

I faced the same question when I had to implement the import of a (possibly quite long) CSV file (I know you can use the LOAD DATA INFILE syntax for that, but I had to apply some processing to my fields before insertion).

So I ran an experiment with transactions on a file of about 15k rows. If I insert all records inside one single transaction, it takes only a few seconds and the process is CPU bound. If I don't use any transaction at all, it takes several minutes and it's I/O bound. By committing every N rows, I got intermediate results.
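
The commit-every-N-rows pattern looks something like this (a sketch only -- the table, the column, and the $rows data are made up):

```php
<?php
// Hypothetical connection; adjust the DSN and credentials for your setup.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$rows = range(1, 15000);            // hypothetical stand-in for the parsed CSV data
$n    = 1000;                       // commit every $n rows
$stmt = $pdo->prepare('INSERT INTO items (val) VALUES (?)');

$pdo->beginTransaction();
foreach ($rows as $i => $value) {
    $stmt->execute([$value]);
    if (($i + 1) % $n === 0) {
        $pdo->commit();             // flush this batch...
        $pdo->beginTransaction();   // ...and start the next one
    }
}
$pdo->commit();                     // commit the final (possibly empty) batch
```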

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow