Question

There was a situation with 108,368,168 executions and 7,373 s of elapsed time for the "update seq$ set increment$=:........" statement while loading massive amounts of data (populating millions of rows) into a table. This means the session was definitely spending some fraction of each call acquiring the latch needed to update the data dictionary whenever a new set of sequence values had to be generated.

I diagnosed the situation and found it was happening because the sequence was created with NOCACHE. I started increasing its CACHE value until the statement no longer appeared in the "SQL ordered by CPU/Elapsed" section of the AWR report; it finally settled down when CACHE was set to 100000. Now there is no sequence-related query in the AWR report, and I concluded that the CACHE value gave a positive performance gain.
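For reference, the change described above can be sketched like this (the sequence name seq_load is hypothetical; substitute your own):

```sql
-- Hypothetical sequence name; raise the cache so the recursive
-- "update seq$" happens only once per 100,000 NEXTVAL calls.
ALTER SEQUENCE seq_load CACHE 100000;

-- Verify the new setting in the data dictionary.
SELECT sequence_name, cache_size, order_flag
FROM   user_sequences
WHERE  sequence_name = 'SEQ_LOAD';
```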

Can anyone tell me what items I can check in Oracle to confirm that such a large sequence cache has not backfired in the database?


Solution

No, it won't backfire. If there is any confusion, you can read the manual to see the effect of caching. With CACHE 100000, the first time you select from the sequence it caches the next 100,000 values in the SGA. It also means that for the next 99,999 NEXTVAL calls you will not be updating the seq$ table.
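One way to confirm this is to check whether the recursive seq$ update is still being executed after the change. A sketch using V$SQLAREA (the LIKE pattern mirrors the statement text from the question; treat the exact filter as illustrative):

```sql
-- After raising the cache, executions of the recursive dictionary
-- update should drop to (nearly) zero for the loading workload.
SELECT executions,
       elapsed_time / 1e6 AS elapsed_seconds   -- elapsed_time is in microseconds
FROM   v$sqlarea
WHERE  sql_text LIKE 'update seq$ set increment$%';
```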

The downside is that if your instance or database crashes (or the cached values are aged out of the shared pool), the cached but unused values are lost. So if you need gap-free sequential values, this approach will not be helpful. Along with caching, you may also want the ORDER option to ensure values are handed out in order; this matters mainly in RAC, where each instance otherwise keeps its own cache of values.
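The trade-off can be made concrete with a sketch of the sequence definition (names and numbers are illustrative, not taken from the question):

```sql
-- A large cache trades gap-freedom for speed.
CREATE SEQUENCE seq_load
  START WITH   1
  INCREMENT BY 1
  CACHE 100000
  NOORDER;   -- the default; specify ORDER only if values must be
             -- returned in request order across RAC instances

-- If the instance crashes after NEXTVAL has returned, say, 42,
-- the remaining cached values are lost: after restart the sequence
-- resumes at 100001, leaving a gap of roughly 100,000 values.
```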

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange