Question

I have this table:

create table demo (
    key number(10) not null,
    type varchar2(3) not null,
    state varchar2(16) not null,
    ... lots more columns ...
)

and this index:

create index demo_x04 on demo(key, type, state);

When I run this query

select * from demo where key = 1 and type = '003' and state = 'NEW'

EXPLAIN PLAN shows that it does a full table scan. So I dropped the index and created it again. EXPLAIN PLAN still says full table scan. How can that be?

Some background: this is historical data, so what happens is that I look up a row with state CLEARED and insert a new row with state NEW (plus I copy a few values from the old row). The old row is then updated to USED, so the table always grows. What I did notice is that the cardinality of the index was 0 (even though I have thousands of distinct values). After recreating the index, the cardinality grew, but the CBO still didn't like it any better.

The next morning, Oracle suddenly liked the index (probably slept over it) and started to use it, but not for long. After a while, the processing dropped from 50 rows/s to 3 rows/s and I saw "FULL TABLE SCAN" again. What is going on?

In my case, I need to process about a million rows. I commit the changes in batches of about 50. Is there some command I should run after each commit to update or reorganize the index, or something like that?

I'm on Oracle 10g.

[EDIT] I have 969'491 distinct keys in this table, 3 types and 3 states.

Solution

What happens if you specify an index hint? Try this:

SELECT /*+ INDEX (demo demo_x04) */ * 
  FROM demo 
 WHERE key = 1 
   AND type = '003' 
   AND state = 'NEW';

It sounds like what happened overnight is that the table got analyzed. Then, as you ran your processing against the table, enough of the data changed to make the table's statistics stale again, and the optimizer stopped using the index.

Add the hint and see if EXPLAIN PLAN gives you a different plan and the query performs better.

Oh, and Tony's answer regarding analyzing the table is generally good practice, although with 10g the database is pretty good about doing that kind of self-maintenance on its own. If your process does a lot of updates, the statistics can go stale quickly. If running an analyze when your processing starts to bog down improves the situation for a while, you'll know this is the problem.

To update the statistics for the table, use the dbms_stats.gather_table_stats procedure.

For example:

exec dbms_stats.gather_table_stats('the owner','DEMO');
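Since it is the index statistics that seem to be going stale here, you can gather those in the same call. A minimal sketch (THE_OWNER is a placeholder for your schema owner):

-- cascade => TRUE also gathers statistics for the table's indexes
exec dbms_stats.gather_table_stats(ownname => 'THE_OWNER', tabname => 'DEMO', cascade => TRUE);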

OTHER TIPS

Has the table been analyzed recently? If Oracle thinks it is very small, it may not even consider using the index.

Try this:

select last_analyzed, num_rows 
from user_tables
where table_name = 'DEMO';

NUM_ROWS tells you how many rows Oracle thinks the table contains, and LAST_ANALYZED tells you when those statistics were last gathered.
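You can run the same kind of check against the index itself; a sketch using the USER_INDEXES view (DISTINCT_KEYS is the cardinality you were looking at):

select last_analyzed, num_rows, distinct_keys
from user_indexes
where index_name = 'DEMO_X04';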

"The next morning, Oracle suddenly liked the index (probably slept over it)" Probably a DBMS_STATS is running overnight.

Generally I would see one of three reasons for a FULL TABLE SCAN being chosen over an index. The first is that the optimizer thinks the table is empty, or at least very small; I suspect this was the initial problem. In that case it is quicker to full scan a table consisting of only a handful of blocks than to use an index.

The second is when the query is such that an index cannot be practically used.

"select * from demo where key = 1 and type = '003' and state = 'NEW'"

Are you actually using hard-coded literals in the query? If not, your variable datatypes may not match the columns (e.g. a numeric variable compared against the character column type). Oracle would then apply an implicit conversion to the column side of the comparison, which makes the index nearly useless for that column.
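For example, if type were compared against a number, Oracle would implicitly rewrite the predicate as TO_NUMBER(type) = 3, so the index could not be used for that column (and the query would fail with ORA-01722 if any type value is not numeric):

select * from demo where key = 1 and type = 3 and state = 'NEW';
-- evaluated as: key = 1 and TO_NUMBER(type) = 3 and state = 'NEW'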

The third reason is where the optimizer thinks the query will process a large proportion of the rows in the table. type and state seem to be pretty low cardinality. Do you perhaps have a large number of rows with one specific key value? A query for checking this follows.
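A quick way to check for that kind of skew on 10g (this lists the ten most frequent key values):

select *
  from (select key, count(*) cnt
          from demo
         group by key
         order by cnt desc)
 where rownum <= 10;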

A comment on the processing you describe: it sounds like you are doing row-by-row processing with intermittent commits, and I'd urge you to rethink this if you can. The update/insert mechanism might well be converted to a MERGE statement, or to a plain set-based INSERT ... SELECT plus UPDATE, so that the entire data set is processed with a single commit at the end; see the sketch below. This would almost certainly be faster and use fewer resources than your current method.
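As a rough sketch of the set-based idea, based on the workflow you described (the column amount is a hypothetical stand-in for the values you copy; adjust to the real column list):

-- create all the NEW rows in one pass over the CLEARED rows...
insert into demo (key, type, state, amount)
select key, type, 'NEW', amount
  from demo
 where state = 'CLEARED';

-- ...then mark the old rows as USED, and commit once at the end
update demo
   set state = 'USED'
 where state = 'CLEARED';

commit;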

Is the value of the column key always 1? If so, I'm not sure that consulting the index would speed up the query, since each row would have to be examined anyway; in that case, declare the index without the key column. You could also try:

select key, type, state from demo where key = 1 and type = '003' and state = 'NEW'

which (if my guess is right) would still need to look at each row, but might use the index alone, since all the columns in the result set are now covered by it.

I'm just guessing based on your statement that the index shows cardinality 0.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow