Question

I am working with a stateless session for a batch job in my Play Framework 1.2.4 project.

I am inserting and updating rows just fine, but I don't know what to do when an exception occurs. Here's my code:

try {
    statelesssession.insert(someobject);
}
catch (ConstraintViolationException e) {   // it happens from time to time, don't ask me why..
    transaction.rollback();                // <-- THAT'S MY CONCERN
}
finally {
    transaction.commit();
}

What I need to know is this: I am committing data every 100 inserts. If a constraint violation happens at, say, the 56th record and the transaction rolls back, will I lose the other 55 records too?

If yes, what should I do on a ConstraintViolationException? Or should I commit after every single record to avoid this?


Solution

If you commit every 100 inserts, then a rollback after the 56th insert also undoes the 55 inserts before it.

You could commit after every insert, but for batches that insert a great many rows that is slow and therefore not recommended.

The solution is to use savepoints.

Setting a savepoint is relatively fast and can be done after every insert. A savepoint does not write any data to the database - you still have to commit later - but a rollback only undoes work back to the last savepoint.

So in your example you commit every 100 (or however many) rows - and after the last row, of course - and you set a savepoint after every row. When an error occurs and you roll back to the savepoint, only the erroneous insert is undone; the others are untouched.

For a description see, for example, java.sql.Connection.setSavepoint and java.sql.Savepoint.
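
Below is a minimal sketch of this pattern at the plain JDBC level. The class name, table, and column are made up for illustration; with a Hibernate StatelessSession you would have to get hold of the underlying java.sql.Connection yourself.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Savepoint;
    import java.util.List;

    public class SavepointBatchInsert {

        // Hypothetical table and column; the savepoint pattern is the point here.
        private static final String SQL = "INSERT INTO some_table (some_column) VALUES (?)";

        public static void insertAll(Connection connection, List<String> values) throws SQLException {
            connection.setAutoCommit(false);
            int count = 0;
            for (String value : values) {
                Savepoint savepoint = connection.setSavepoint();     // cheap, set per row
                try (PreparedStatement ps = connection.prepareStatement(SQL)) {
                    ps.setString(1, value);
                    ps.executeUpdate();
                } catch (SQLException e) {                           // e.g. a constraint violation
                    connection.rollback(savepoint);                  // undo only this row
                }
                if (++count % 100 == 0) {
                    connection.commit();                             // persist the good rows so far
                }
            }
            connection.commit();                                     // and the remainder
        }
    }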

OTHER TIPS

If you roll back, you will lose all previous records in the transaction as well. If you only want to lose the records that cause constraint violations, you can hold each batch's records in a list, switch to committing them one by one when the batch bombs, and then carry on with the remaining batches, as sketched below.
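
A minimal sketch of that fallback, assuming a Hibernate StatelessSession and its Transaction API (the class and method names and the use of plain Object records are illustrative, not from the original post):

    import java.util.List;
    import org.hibernate.StatelessSession;
    import org.hibernate.Transaction;
    import org.hibernate.exception.ConstraintViolationException;

    public class BatchWithFallback {

        // Try the whole batch in one transaction; if it bombs, replay it record by record.
        public static void insertBatch(StatelessSession session, List<Object> batch) {
            Transaction tx = session.beginTransaction();
            try {
                for (Object record : batch) {
                    session.insert(record);
                }
                tx.commit();                            // fast path: one commit per batch
            } catch (ConstraintViolationException e) {
                tx.rollback();                          // the whole batch is undone...
                for (Object record : batch) {           // ...so commit one by one
                    Transaction single = session.beginTransaction();
                    try {
                        session.insert(record);
                        single.commit();
                    } catch (ConstraintViolationException again) {
                        single.rollback();              // skip only the offending record
                    }
                }
            }
        }
    }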

In this type of use case, you have another job that cuts all your data into pieces of 100 objects and launches a subjob for each piece.

For me, the best thing to do in this case is to let the exception propagate. The master job catches it, and all 100 objects of that piece are rolled back. The master job can then switch into another mode for those objects and relaunch the subjob per object, so only the one that throws the exception is not saved.

This is typical batch handling. If everything is OK, your batch is fast because you commit every 100 objects, but in case of an error you fall back to single-object commits, so you simply don't save the objects that fail. A rough sketch of the master-job side follows.
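
This sketch reuses the hypothetical insertBatch method from above; the chunk size and class names are again assumptions for illustration:

    import java.util.List;
    import org.hibernate.StatelessSession;

    public class MasterJob {

        // Cut the data into pieces of 100 objects and hand each piece to the subjob.
        public static void run(StatelessSession session, List<Object> allRecords) {
            final int chunkSize = 100;
            for (int i = 0; i < allRecords.size(); i += chunkSize) {
                List<Object> chunk = allRecords.subList(i, Math.min(i + chunkSize, allRecords.size()));
                BatchWithFallback.insertBatch(session, chunk);   // falls back to per-record commits on error
            }
        }
    }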

But as mericano1 said, the correct behavior in your case is a matter of business rules.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow