Question

I would like to use global temporary tables to store some expensive intermediate data. The data is transient but valid for the duration of the PHP session, so a global temporary table with ON COMMIT PRESERVE ROWS seems ideal.
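For reference, a minimal sketch of the setup I have in mind; the table and column names, connection details, and $userId are placeholders (the table itself would be created once, ahead of time):

    <?php
    // The temporary table would be created once, outside PHP:
    //   CREATE GLOBAL TEMPORARY TABLE tmp_group_keys (group_key RAW(16))
    //   ON COMMIT PRESERVE ROWS;
    $conn   = oci_pconnect('user', 'password', '//dbhost/ORCL');
    $userId = 42; // hypothetical

    // Populate it once per session; PRESERVE ROWS keeps the rows across
    // commits, but only for the Oracle session that inserted them.
    $stmt = oci_parse($conn, 'INSERT INTO tmp_group_keys (group_key)
                              SELECT g.group_key
                              FROM   user_groups g
                              WHERE  g.user_id = :uid');
    oci_bind_by_name($stmt, ':uid', $userId);
    oci_execute($stmt);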

But it looks like global temporary table data is only visible to the Oracle session that created it.

This raises the question of how I could ensure that oci_pconnect hands back the same Oracle session, since I have read that Oracle sessions are unique per persistent connection. I'm not interested in using transactions across multiple PHP executions, just the temporary tables.

I am using PHP sessions, so the PHP session ID could serve as an identifier for selecting the Oracle session.

So far it seems that this just isn't possible, but it won't hurt to ask.

EDIT: The intended purpose behind this is to speed up access to the user group membership information used by my access control.
Resolving the group membership ahead of time and querying this temporary data instead eliminates three or more layers of joins from every subsequent query. Because the returned keys are RAW, serializing them to external storage means many HEXTORAW() calls when the keys are used again, so that approach does not help with the intended purpose.

The portion of the query added to determine group-level access is static for the duration of the session and, run by itself, returns approximately 600 unique 16-byte RAW keys. These keys are then joined against the result set via a links table to determine whether the user has any 'group level' privileges on the result set.

I played with using IN and passing the keys in as a string, but as they are RAW keys I then have to call HEXTORAW() 600 times per query. The performance wasn't nearly as good as using a temporary table and doing a JOIN.
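Roughly the two variants I compared, with made-up table and column names (the IN-list string building is shown for illustration only; real code should not interpolate values like this):

    <?php
    // Variant 1: IN list of RAW keys -- each hex key must pass through
    // HEXTORAW(), so ~600 keys means ~600 HEXTORAW() calls per query.
    $inList = implode(',', array_map(
        function ($hex) { return "HEXTORAW('" . $hex . "')"; },
        $hexKeys // array of 16-byte keys as hex strings
    ));
    $sqlIn = "SELECT r.*
              FROM   results r
              JOIN   links l ON l.result_id = r.id
              WHERE  l.group_key IN ($inList)";

    // Variant 2: join against the temporary table populated once per session.
    $sqlJoin = "SELECT r.*
                FROM   results r
                JOIN   links l ON l.result_id = r.id
                JOIN   tmp_group_keys t ON t.group_key = l.group_key";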

Is there any other way to tell Oracle to keep the result of that portion of the query cached, short of writing it to a 'permanent' intermediate result?


The solution

Although you may be able to come up with some trickery to make this work at least some of the time, it will almost certainly cause issues at some point, especially when moving from a development server to a production server. Options might include:

  1. Use a permanent table, and purge the data when you're logically done with it (see the sketch after this list).
  2. Write the data to a flat file and then read it back when needed.
  3. Write the data to a flat file and then mount the file as an external table.
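
A rough sketch of option 1, assuming a hypothetical permanent table keyed by the PHP session ID (names, connection details, and $userId are placeholders):

    <?php
    // Hypothetical permanent table, created once:
    //   CREATE TABLE session_group_keys (sess_id VARCHAR2(64), group_key RAW(16));
    $conn   = oci_pconnect('user', 'password', '//dbhost/ORCL');
    $sid    = session_id(); // the PHP session id ties rows to the logical session
    $userId = 42;           // hypothetical

    // Populate once, when the membership is first computed.
    $ins = oci_parse($conn, 'INSERT INTO session_group_keys (sess_id, group_key)
                             SELECT :sid, g.group_key
                             FROM   user_groups g
                             WHERE  g.user_id = :uid');
    oci_bind_by_name($ins, ':sid', $sid);
    oci_bind_by_name($ins, ':uid', $userId);
    oci_execute($ins);

    // Purge when logically done (logout, or a cleanup job for expired sessions).
    $del = oci_parse($conn, 'DELETE FROM session_group_keys WHERE sess_id = :sid');
    oci_bind_by_name($del, ':sid', $sid);
    oci_execute($del);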

Share and enjoy.

Other tips

How about Database Resident Connection Pooling (DRCP)? I guess this is what you're looking for. Give it a shot!
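For example, a minimal sketch, assuming DRCP is already enabled on the database (connection details are placeholders):

    <?php
    // Append :POOLED to the Easy Connect string (or use (SERVER=POOLED)
    // in a full connect descriptor) to go through the DRCP pool.
    $conn = oci_pconnect('user', 'password', '//dbhost/ORCL:POOLED');

Note, though, that DRCP pools server processes and does not by itself guarantee that a given PHP request gets the same Oracle session back, so it may not preserve temporary table contents across requests.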

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow