Question

I have a batch processing system that aggregates success counts. Many servers do the aggregation, all updating a single row in a table concurrently. I execute prepared statements using Hibernate like this:

String SQL_UPDATE = "UPDATE STATS.COUNTS SET SUCCESSCOUNT = SUCCESSCOUNT + ? WHERE ID = ?";
PreparedStatement update = session.connection().prepareStatement(SQL_UPDATE);
update.setLong(1, data.getSuccessCount());   // amount to add to the running total
update.setLong(2, data.getBatchId());        // identifies the row for this batch
update.execute();

There were a few concurrency issues, so I decided to use pessimistic locking since the chances of collision are pretty high. How do I perform pessimistic locking when executing prepared statements like this in Hibernate?


Solution

If you are concerned about concurrency, you can raise the transaction isolation level from the default to SERIALIZABLE, the highest level of isolation. Be aware that the highest isolation level also increases the chances of deadlocks and lock contention, and can therefore cause performance issues.
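
For example, since the question already obtains the underlying JDBC connection from the Session, one way to request SERIALIZABLE isolation is directly on that connection. This is only a minimal sketch in the same fragment style as the question, reusing its session, data and SQL_UPDATE; the isolation level generally has to be set before the transaction begins:

// Reuses session, data and SQL_UPDATE from the question's snippet.
Connection conn = session.connection();
int previousIsolation = conn.getTransactionIsolation();
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); // highest isolation level
try {
    PreparedStatement update = conn.prepareStatement(SQL_UPDATE);
    update.setLong(1, data.getSuccessCount());
    update.setLong(2, data.getBatchId());
    update.execute();
    update.close();
} finally {
    conn.setTransactionIsolation(previousIsolation); // restore the previous level afterwards
}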

Then again, if it is a batch update, it is better not to use a high isolation level. You may also want to evaluate batch frameworks such as Spring Batch.

OTHER TIPS

You didn't mention setting the isolation level of your transactions. Have you tried setting it with something like @Transactional(isolation = Isolation.SERIALIZABLE, propagation = Propagation.REQUIRED)?
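
For instance, here is a minimal sketch of a Spring-managed service wrapping the same update in a SERIALIZABLE transaction; the StatsRepository and updateSuccessCount names are hypothetical and only stand in for whatever DAO issues the UPDATE:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SuccessCountService {

    private final StatsRepository statsRepository; // hypothetical DAO wrapping the UPDATE statement

    public SuccessCountService(StatsRepository statsRepository) {
        this.statsRepository = statsRepository;
    }

    // Every call runs in a SERIALIZABLE transaction, so concurrent updates to the
    // same row are forced to run one after another instead of interleaving.
    @Transactional(isolation = Isolation.SERIALIZABLE, propagation = Propagation.REQUIRED)
    public void addSuccesses(long batchId, long successCount) {
        statsRepository.updateSuccessCount(batchId, successCount);
    }
}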

Otherwise, note that Hibernate's Session lock methods work on managed entities and do not apply to raw SQL statements executed through JDBC, like the one above.
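
If you map the counter row as an entity, you can acquire a pessimistic row lock through the Session API instead of raw SQL. A minimal sketch, assuming a hypothetical BatchCount entity mapped to STATS.COUNTS with id and successCount properties:

// BatchCount is a hypothetical entity mapped to the STATS.COUNTS table.
Transaction tx = session.beginTransaction();
try {
    // LockMode.UPGRADE issues a SELECT ... FOR UPDATE on most dialects, so the row
    // stays locked until this transaction ends and other updaters wait their turn.
    BatchCount counts = (BatchCount) session.get(BatchCount.class, data.getBatchId(), LockMode.UPGRADE);
    counts.setSuccessCount(counts.getSuccessCount() + data.getSuccessCount());
    tx.commit(); // the flush writes the new total and releases the lock
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
}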

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow