Question

Is there a solution for batch inserts via Hibernate into a partitioned PostgreSQL table? Currently I'm getting an error like this...

ERROR org.hibernate.jdbc.AbstractBatcher - Exception executing batch:
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
   at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:61)
   at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:46)
   at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:68)....

I have found this link http://lists.jboss.org/pipermail/hibernate-dev/2007-October/002771.html but I can't find anywhere on the web whether this problem has been solved or how it can be worked around.

Solution

You might want to try using a custom Batcher by setting the hibernate.jdbc.factory_class property. Making sure Hibernate won't check the update counts of batch operations might fix your problem. You can achieve that by making your custom Batcher extend BatchingBatcher and overriding doExecuteBatch(...) to look like this:

    @Override
    protected void doExecuteBatch(PreparedStatement ps) throws SQLException, HibernateException {
        if ( batchSize == 0 ) {
            log.debug( "no batched statements to execute" );
        }
        else {
            if ( log.isDebugEnabled() ) {
                log.debug( "Executing batch size: " + batchSize );
            }

            try {
                // Row-count verification is skipped on purpose: the partition
                // redirect trigger reports 0 affected rows on the parent
                // table, which is what trips the StaleStateException.
//              checkRowCounts( ps.executeBatch(), ps );
                ps.executeBatch();
            }
            catch (RuntimeException re) {
                log.error( "Exception executing batch: ", re );
                throw re;
            }
            finally {
                batchSize = 0;
            }

        }

    }

Note that the overridden method doesn't check the results of executing the prepared statements. Keep in mind that making this change might affect Hibernate in some unexpected way (or maybe not).

OTHER TIPS

They suggest using two triggers on a partitioned table, or the @SQLInsert annotation, here: http://www.redhat.com/f/pdf/jbw/jmlodgenski_940_scaling_hibernate.pdf, pages 21-26 (it also mentions an @SQLInsert specifying a String method).
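
For illustration, a minimal sketch of the @SQLInsert route, assuming a hypothetical entity whose inserts are routed through a wrapping view like the one described further down (entity, table, and column names are made up, and Hibernate binds the ? parameters in its own property order, so verify that order before relying on this):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;
    import org.hibernate.annotations.SQLInsert;

    // Hypothetical entity: @SQLInsert replaces Hibernate's generated INSERT
    // so rows go through a view (or function) that reports a row count the
    // batcher accepts. The column order must match the order in which
    // Hibernate binds properties; enable DEBUG logging for
    // org.hibernate.persister.entity to see the SQL and order it expects.
    @Entity
    @Table(name = "tablename")
    @SQLInsert(sql = "INSERT INTO tablename_view (partition_key, payload, id) VALUES (?, ?, ?)")
    public class PartitionedRow {
        @Id
        private Long id;
        private Long partitionKey;
        private String payload;
    }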

Here is an example with an after trigger to delete the extra row in the master: https://gist.github.com/copiousfreetime/59067

It appears that if you can use RULEs instead of triggers for the insert, then the right row count can be returned, but only with a single RULE that has no WHERE clause.
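
For example, a single unconditional rule might look like this (rule, master, and child table names are hypothetical, and this assumes all inserts can go to one child):

CREATE RULE master_insert_redirect AS
   ON INSERT TO master_table
   DO INSTEAD INSERT INTO child_table VALUES (NEW.*); -- no WHERE clause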

Another option may be to create a view that 'wraps' the partitioned table. You then return the NEW row to indicate a successful insert, without accidentally adding an extra unwanted row to the master table.

create view tablename_view as select * from tablename; -- create trivial wrapping view

CREATE OR REPLACE FUNCTION partitioned_insert_trigger() -- partitioned insert trigger
RETURNS TRIGGER AS $$
BEGIN
   IF (NEW.partition_key >= 5500000000 AND
       NEW.partition_key <  6000000000) THEN
      INSERT INTO tablename_55_59 VALUES (NEW.*);
   ELSIF (NEW.partition_key >= 5000000000 AND
          NEW.partition_key <  5500000000) THEN
      INSERT INTO tablename_50_54 VALUES (NEW.*);
   ELSIF (NEW.partition_key >= 500000000 AND
          NEW.partition_key  <  1000000000) THEN
      INSERT INTO tablename_5_9 VALUES (NEW.*);
   ELSIF (NEW.partition_key >= 0 AND
          NEW.partition_key <  500000000) THEN
      INSERT INTO tablename_0_4 VALUES (NEW.*);
   ELSE
      RAISE EXCEPTION 'partition key is out of range.  Fix the trigger function';
   END IF;
   RETURN NEW; -- RETURN NEW in this case, typically you'd return NULL from this trigger, but for views we return NEW
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_view_trigger
   INSTEAD OF INSERT ON tablename_view
   FOR EACH ROW EXECUTE PROCEDURE partitioned_insert_trigger(); -- create "INSTEAD OF" trigger

ref: http://www.postgresql.org/docs/9.2/static/trigger-definition.html

If you go the view wrapper route, one option is to also define trivial "instead of" triggers for delete and update; then you can just use the view's name in place of your normal table name in all transactions, as sketched below.
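
A minimal sketch of those companion triggers, assuming a hypothetical id primary-key column and a payload column (adjust to your schema):

CREATE OR REPLACE FUNCTION view_delete_trigger()
RETURNS TRIGGER AS $$
BEGIN
   -- a DELETE on the parent table also reaches the child partitions
   DELETE FROM tablename WHERE id = OLD.id;
   RETURN OLD;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER delete_view_trigger
   INSTEAD OF DELETE ON tablename_view
   FOR EACH ROW EXECUTE PROCEDURE view_delete_trigger();

CREATE OR REPLACE FUNCTION view_update_trigger()
RETURNS TRIGGER AS $$
BEGIN
   -- assumes the partition key itself is not being changed
   UPDATE tablename SET payload = NEW.payload WHERE id = OLD.id;
   RETURN NEW;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER update_view_trigger
   INSTEAD OF UPDATE ON tablename_view
   FOR EACH ROW EXECUTE PROCEDURE view_update_trigger();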

Another option that uses the view is to create an insert rule so that any inserts on the main table go to the view (which uses its trigger), for example (assuming you have already created partitioned_insert_trigger, tablename_view, and insert_view_trigger as listed above):

CREATE RULE use_right_inserter_tablename AS
      ON INSERT TO tablename
      DO INSTEAD INSERT INTO tablename_view VALUES (NEW.*);

It will then use your new, working view-wrapper insert.

Thanks! It did the trick, no problems popped up so far :) ... One thing though: I had to implement a BatcherFactory class and register it in the persistence.xml file, like this:

<property name="hibernate.jdbc.factory_class" value="path.to.my.batcher.factory.implementation"/>

From that factory I've returned my Batcher implementation with the code above.
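
A minimal sketch of that wiring on Hibernate Core 3.2.x (class names are made up; NonVerifyingBatcher stands for the custom Batcher with the doExecuteBatch(...) override from the accepted answer):

    import org.hibernate.Interceptor;
    import org.hibernate.jdbc.Batcher;
    import org.hibernate.jdbc.BatcherFactory;
    import org.hibernate.jdbc.ConnectionManager;

    // Factory referenced by hibernate.jdbc.factory_class; it hands Hibernate
    // the batcher that skips the update-count check.
    public class NonVerifyingBatcherFactory implements BatcherFactory {
        public Batcher createBatcher(ConnectionManager connectionManager, Interceptor interceptor) {
            return new NonVerifyingBatcher( connectionManager, interceptor );
        }
    }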

PS: Hibernate Core 3.2.6 GA.

Thanks once again.

I faced the same problem while inserting documents through Hibernate. After a lot of searching, I found that Hibernate expects the affected row to be reported back, so instead of NULL, return NEW from the trigger procedure; that resolves the problem, as shown below:

RETURN NEW;

I found another solution for the same problem on another webpage. It suggests the same fix that @rogerdpack described, changing RETURN NULL to RETURN NEW, and adding a trigger that deletes the duplicated tuple from the master with the query:

DELETE FROM ONLY master_table;
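
A hedged sketch of how that cleanup might be wired up (names are hypothetical; this is the same idea as the gist linked above):

CREATE OR REPLACE FUNCTION master_cleanup()
RETURNS TRIGGER AS $$
BEGIN
   -- the redirect trigger returned NEW, so a duplicate row landed in the
   -- parent; ONLY restricts the delete to the parent, sparing the children
   DELETE FROM ONLY master_table;
   RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER master_cleanup_trigger
   AFTER INSERT ON master_table
   FOR EACH STATEMENT EXECUTE PROCEDURE master_cleanup();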