Question

I am running in the following scenario which gives me quite a bit of headaches as I can't find an exact explanation for the behavior I am seeing. I have the following declared:

struct test_struct
{
    long testv1;
    char testv2[51];
    long testv3;
};

and a corresponding table in Oracle 10g:

CREATE TABLE test_table
(
    testv1 NUMBER(10, 0),
    testv2 VARCHAR(50),
    testv3 NUMBER(4, 0)
);

To access data in this table I have a function:

bool getTestData(long test_var1, struct test_struct *outStruct)

and here I see the differences which I need to explain but can't. If the body of the function looks like this:

EXEC SQL BEGIN DECLARE SECTION;
    long testvar1_param = test_var1;
    struct test_struct *resStruct = outStruct;
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT testv1, testv2, testv3
    INTO :resStruct
    FROM test_table
    WHERE testv1 = :testvar1_param;

I get slower performance than if the body of the function looks like this:

EXEC SQL BEGIN DECLARE SECTION;
    long testvar1_param = test_var1;
    long *testv1_res = &(outStruct->testv1);
    char *testv2_res = outStruct->testv2;
    long *testv3_res = &(outStruct->testv3);
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT testv1, testv2, testv3
    INTO :testv1_res, :testv2_res, :testv3_res
    FROM test_table
    WHERE testv1 = :testvar1_param;

The second version is faster by a significant margin.

Does anyone know what could explain this behavior?

Was it helpful?

Solution

For performance problems that look unexplainable at first sight: turn on SQL tracing, including waits.

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'some_unique_identifier';
EXEC dbms_support.start_trace(waits => TRUE, binds => TRUE);

run your code, let it commit, and disconnect gracefully. Don't use dbms_support.stop_trace, because it might prevent the spooling of the rowsource operations. In the generated trace file you will find the exact SQL text as it was parsed, the wait events that impacted the SQL, and the rowsource operations. The rowsource operations show what the SQL plan actually looked like while the statement ran.

  • Check the number of parses
  • Check for bind variables being used or not.
  • Check the rowsource operations for the expected plan.
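As a sketch, the full tracing workflow in SQL*Plus could look like this, assuming the DBMS_SUPPORT package is installed and that the statements under test can be run from the same session (the identifier is a placeholder):

```sql
-- Tag the trace file so it is easy to find in the trace directory
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'struct_vs_scalar_test';

-- Enable tracing with bind values and wait events via DBMS_SUPPORT
EXEC dbms_support.start_trace(waits => TRUE, binds => TRUE);

-- ... run the statements under test here ...

-- Commit and disconnect gracefully so the rowsource operations are spooled
COMMIT;
EXIT;
```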

For your problem - having to fetch lots of rows one by one in a random fashion - I expect to find

  • 1 cursor declaration
  • 1 parse
  • a loop that opens/fetches/closes the cursor

It is very important in these scenarios not to parse every SELECT. Parsing can take more time than the execution itself.
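A minimal Pro*C sketch of that pattern, matching the bullets above (one cursor declaration, one parse, a loop that opens/fetches/closes); the loop variables and key array are hypothetical, and the actual parse count also depends on the HOLD_CURSOR/RELEASE_CURSOR precompiler options:

```c
/* Hypothetical sketch: the cursor is declared (and parsed) once,
 * then reused for every key, instead of parsing a new SELECT each time. */
EXEC SQL DECLARE test_cur CURSOR FOR
    SELECT testv1, testv2, testv3
    FROM test_table
    WHERE testv1 = :testvar1_param;

for (i = 0; i < num_keys; i++)
{
    testvar1_param = keys[i];

    EXEC SQL OPEN test_cur;
    EXEC SQL FETCH test_cur INTO :resStruct;
    EXEC SQL CLOSE test_cur;
}
```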

One question that remains is: why get all rows one by one? Is this some kind of data copy operation?

Other tips

Are you accounting for the effect of caching? I assume not.

If you run the first query timed, then run the second query timed with the same testvar1_param value, the second one completes in a markedly different time. It won't matter which version runs first; whichever runs second will fare better.

This is because the WHERE predicate is the same in both queries, and the data in the result set is the same in both queries. Subsequent identical queries usually run much faster against an indexed table because you never go to the table on disk to get the result set; it comes from the SGA, where it is cached.

Try using completely different values for testvar1_param and run 10 queries of each form. The timings will then be very close.

You are using tkprof right?
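For reference, a typical tkprof invocation over the generated trace file might look like this (the trace file name is a placeholder):

```shell
# Format the raw trace: suppress recursive SYS statements and sort by
# parse, execute, and fetch elapsed time to surface the expensive statements
tkprof orcl_ora_12345_struct_vs_scalar_test.trc report.txt \
    sys=no sort=prsela,exeela,fchela
```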

By timing I mean (since this is development, right?):

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;

This improves what Oracle gives you in the trace output for performance analysis.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow