Question

Background

I have a problem with a JPA cascading batch update that I need to implement. The update takes some 10,000 objects and merges them into the database at once. The objects have an average depth of 5 objects and an average size of about 3 kB. The persistence provider is Oracle TopLink.

This eats a large amount of memory and takes several minutes to complete.

I have looked around and I see three possibilities:

Looping through a standard JPA merge statement and flushing at certain intervals

Using JPQL

Using TopLink's own API (which I have no experience with whatsoever)
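The first option can be sketched roughly as follows. This assumes an application-managed `EntityManager`; the class name, the `BATCH_SIZE` value, and the `endOfBatch` helper are illustrative choices, not anything from the question:

```java
import java.util.List;
import javax.persistence.EntityManager;

// Sketch of option 1: merge in a loop, flushing and clearing the
// persistence context at fixed intervals so it does not grow unbounded
// while 10,000 object trees are merged.
public class BatchMerger {

    static final int BATCH_SIZE = 100; // illustrative; tune for your data

    // true when the (1-based) position of this index completes a batch
    static boolean endOfBatch(int index) {
        return (index + 1) % BATCH_SIZE == 0;
    }

    public void mergeAll(EntityManager em, List<?> objects) {
        em.getTransaction().begin();
        for (int i = 0; i < objects.size(); i++) {
            em.merge(objects.get(i));
            if (endOfBatch(i)) {
                em.flush(); // push the pending SQL to the database
                em.clear(); // detach managed entities to free memory
            }
        }
        em.getTransaction().commit();
    }
}
```

The `em.clear()` after each flush is the part that keeps memory flat: without it, every merged tree stays managed in the persistence context until commit.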

So I have a couple of questions:

Will I reduce the overhead of the standard merge by using JPQL instead? If I understand correctly, merge causes the entire object tree to be cloned before the update is applied. Is JPQL actually faster? Is there some trick to speeding up the process?

How do I do a batch merge using the TopLink API?

And I know that this is subjective, but: does anyone have a best practice for doing large cascading batch updates in JPA/TopLink? Maybe something I didn't consider?

Related questions

Batch updates in JPA (Toplink)

Batch insert using JPA/Toplink


Solution

Not sure what you mean by using JPQL? If you can express your update logic in terms of a JPQL update statement, it will be significantly more efficient to do so.
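As a sketch of what "a JPQL update statement" means here: a bulk `UPDATE` is translated straight to SQL, so nothing is loaded into the persistence context and none of the cloning or cascading that makes `merge` expensive takes place. The entity and field names below are hypothetical:

```java
// Hypothetical entity/fields, for illustration only. The statement runs
// directly in the database; no objects are fetched or cloned.
int updated = em.createQuery(
        "UPDATE Account a SET a.balance = a.balance + :bonus "
      + "WHERE a.active = true")
    .setParameter("bonus", new java.math.BigDecimal("10"))
    .executeUpdate();

// Bulk updates bypass the persistence context, so clear it to avoid
// stale in-memory copies of the updated rows.
em.clear();
```

The trade-off is that bulk JPQL updates do not cascade, so per-object updates across a 5-level tree cannot always be expressed this way; it only helps when the change is expressible set-wise.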

Definitely split your work into batches. Also ensure you are using batch writing and sequence pre-allocation.
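Both of those are declarative settings. The property names below are the EclipseLink-style ones; verify them against your TopLink version, since older releases used a `toplink.*` prefix instead:

```xml
<!-- persistence.xml sketch: group individual INSERT/UPDATE statements
     into JDBC batches instead of one database round trip per row.
     Property names assume EclipseLink; check your provider's docs. -->
<properties>
    <property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
    <property name="eclipselink.jdbc.batch-writing.size" value="100"/>
</properties>
```

Sequence pre-allocation is configured per generator, e.g. `@SequenceGenerator(name = "seq", sequenceName = "MY_SEQ", allocationSize = 500)` (names hypothetical), so the provider fetches 500 ids in one round trip rather than one id per insert.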

See,

http://java-persistence-performance.blogspot.com/2011/06/how-to-improve-jpa-performance-by-1825.html

License: CC-BY-SA with attribution