Question

Right now the process we're using for inserting sets of records is something like this (note that a "set of records" means something like a person's record along with their addresses, phone numbers, and any other joined tables):

  1. Start a transaction.
  2. Insert a set of records that are related.
  3. Commit if everything was successful, roll back otherwise.
  4. Go back to step 1 for the next set of records.
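
In PL/SQL, a minimal sketch of this first pattern might look like the following (the staging_persons, persons, and addresses tables and their columns are hypothetical, just to make the shape concrete):

    BEGIN
      -- Loop over the driving query; each iteration is one "set of records"
      FOR rec IN (SELECT * FROM staging_persons) LOOP
        BEGIN
          -- Step 2: insert the person plus the joined detail rows
          INSERT INTO persons (person_id, name)
          VALUES (rec.person_id, rec.name);

          INSERT INTO addresses (person_id, street, city)
          VALUES (rec.person_id, rec.street, rec.city);

          -- Step 3: commit this set on success
          COMMIT;
        EXCEPTION
          WHEN OTHERS THEN
            -- Step 3: roll back just this set, then carry on with the next
            ROLLBACK;
        END;
      END LOOP;
    END;
    /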

Should we be doing something more like this?

  1. Start a transaction at the beginning of the script.
  2. Set a savepoint for each set of records.
  3. Insert a set of related records.
  4. Roll back to the savepoint if there is an error; go on if everything is successful.
  5. Commit the transaction at the end of the script.
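
And a minimal sketch of the savepoint variant, using the same hypothetical tables:

    BEGIN
      -- Step 1: the transaction starts implicitly with the first DML
      FOR rec IN (SELECT * FROM staging_persons) LOOP
        -- Step 2: one savepoint per set (reusing the name just moves it)
        SAVEPOINT per_set;
        BEGIN
          -- Step 3: insert the related records
          INSERT INTO persons (person_id, name)
          VALUES (rec.person_id, rec.name);

          INSERT INTO addresses (person_id, street, city)
          VALUES (rec.person_id, rec.street, rec.city);
        EXCEPTION
          WHEN OTHERS THEN
            -- Step 4: discard only this set, keep everything before it
            ROLLBACK TO per_set;
        END;
      END LOOP;

      -- Step 5: one commit at the end of the script
      COMMIT;
    END;
    /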

After having some issues with ORA-01555 and reading a few Ask Tom articles (like this one), I'm thinking about trying out the second process. Of course, as Tom points out, starting a new transaction is something that should be defined by business needs. Is the second process worth trying out, or is it a bad idea?

Solution

A transaction should be a meaningful Unit Of Work. But what constitutes a Unit Of Work depends upon context. In an OLTP system a Unit Of Work would be a single Person, along with their address information, etc. But it sounds as if you are implementing some form of batch processing, which loads lots of Persons.

If you are having problems with ORA-01555 it is almost certainly because you have a long-running query supplying data which is being updated by other transactions. Committing inside your loop contributes to the cyclical use of UNDO segments, and so will tend to increase the likelihood that the segments you are relying on to provide read consistency will have been reused. So, not doing that is probably a good idea.

Whether using SAVEPOINTs is the solution is a different matter. I'm not sure what advantage that would give you in your situation. As you are working with Oracle 10g, perhaps you should consider using bulk DML error logging instead.
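
As a sketch (again assuming a hypothetical persons table loaded from staging_persons): you create the error log table once with DBMS_ERRLOG, then a single set-based insert diverts failing rows to the log instead of aborting the whole statement:

    -- One-time setup: creates ERR$_PERSONS by default
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'PERSONS');
    END;
    /

    -- The load: bad rows land in the error log, good rows stay inserted
    INSERT INTO persons (person_id, name)
    SELECT person_id, name
    FROM   staging_persons
    LOG ERRORS INTO err$_persons ('nightly_load')  -- the tag is arbitrary
    REJECT LIMIT UNLIMITED;

    COMMIT;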

Alternatively you might wish to rewrite the driving query so that it works with smaller chunks of data. Without knowing more about your process I can't give specific advice, but in general, instead of opening one cursor for 10000 records it might be better to open it twenty times for 500 rows a pop. The other thing to consider is whether the insertion process can be made more efficient, say by using bulk collection and FORALL.
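
A sketch of that chunked approach with the same hypothetical names - fetch in batches of 500 using BULK COLLECT ... LIMIT and insert each batch with FORALL:

    DECLARE
      TYPE t_ids   IS TABLE OF persons.person_id%TYPE;
      TYPE t_names IS TABLE OF persons.name%TYPE;
      l_ids   t_ids;
      l_names t_names;

      CURSOR c_src IS
        SELECT person_id, name FROM staging_persons;  -- hypothetical driving query
    BEGIN
      OPEN c_src;
      LOOP
        -- Fetch a small, bounded chunk rather than the whole result set
        FETCH c_src BULK COLLECT INTO l_ids, l_names LIMIT 500;
        EXIT WHEN l_ids.COUNT = 0;

        -- One context switch to the SQL engine for the whole batch
        FORALL i IN 1 .. l_ids.COUNT
          INSERT INTO persons (person_id, name)
          VALUES (l_ids(i), l_names(i));
      END LOOP;
      CLOSE c_src;

      COMMIT;
    END;
    /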

OTHER TIPS

Some thoughts...

  1. Seems to me one of the points of the Ask Tom link was to size your rollback/undo appropriately to avoid the ORA-01555s. Is there some reason this is not possible? As he points out, it's far cheaper to buy disk than it is to write and maintain code to work around rollback limitations (although I had to do a double-take after reading the $250 price tag for a 36 GB drive - that thread started in 2002! Good illustration of Moore's Law!). A quick sanity check for undo sizing is sketched after this list.
  2. This link (Burleson) shows one possible issue with savepoints.
  3. Is your transaction in actuality steps 2, 3, and 5 in your second scenario? If so, that's what I'd do - commit each transaction. Sounds a bit to me like scenario 2 is a collection of transactions rolled into one?
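
For point 1, one quick way to sanity-check undo sizing (a sketch; exact tuning depends on your workload) is to compare the longest query Oracle has observed against your retention target:

    -- Longest-running query (in seconds) observed per sampling interval
    SELECT MAX(maxquerylen) AS longest_query_secs
    FROM   v$undostat;

    -- Current retention target (in seconds); SQL*Plus command
    SHOW PARAMETER undo_retention

    -- If the longest query exceeds the retention target, raise it
    -- (and size the undo tablespace to match)
    ALTER SYSTEM SET undo_retention = 10800;
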
Licensed under: CC-BY-SA with attribution