I have a data source set up on my AS400 (iSeries), and when Cognos accesses it through the Client Access ODBC driver, it locks the files on the AS400. Even after the report closes, the files remain locked for a length of time. This causes issues with updating the data source, reorganizing files, clearing records, etc. There has to be a way of forcing the ODBC driver to remove the lock once the data has been retrieved, or at the very least a way to monitor how long it keeps its hold. Any direction would be greatly appreciated.

Thanks.

Cognos 10.1.0; iSeries V7R1M0

Buck, thanks for taking the time to comment. However, I assure you my iSeries is in fact running V7R1M0, and I never said that I had a record lock; I said a file remains locked. I'm pretty sure my question does pose a specific scenario: Cognos accesses the AS400 files through the Client Access ODBC driver, locks the file, and then holds the lock for a certain amount of time. My question was whether there is a way to stop Cognos from keeping that lock on the file. I can provide error messages for random file accesses on the iSeries after this lock has taken place, but since I was looking for a way to relieve the lock before those errors occurred, I didn't see their relevance. I am sure I would receive a CPF3203 error telling me that it cannot allocate the object.


Solution

IBM i 7.1 does not run on AS/400 or iSeries hardware. This isn't a semantic nit: searching the web for ODBC and AS400 will return ancient and non-useful results.

The question doesn't include a specific error message, nor a specific scenario where that error occurs. I'm guessing that you're seeing an object lock (not a record lock) on CLRPFM. If so, the root cause is that the database manager doesn't completely close cursors; it soft closes them (sometimes called a pseudo close) for performance reasons.

If you have a non-SQL process that needs an exclusive lock on the table (like SAVOBJ, CLRPFM, DLTF, RGZPFM, etc.), then you can include an ALCOBJ command with the parameters set to force close any pseudo-closed cursors:

  ALCOBJ OBJ((SOMESCHEMA/SOMETABLE *FILE *EXCL)) WAIT(1) CONFLICT(*RQSRLS)
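Put into a CL program, a minimal sketch of that approach might look like the following; the library, file, and message text are placeholders, and the 30-second wait is arbitrary:

  PGM
    /* Ask DB2 to hard close pseudo-closed cursors, then take the *EXCL lock */
    ALCOBJ     OBJ((SOMESCHEMA/SOMETABLE *FILE *EXCL)) WAIT(30) +
                 CONFLICT(*RQSRLS)
    MONMSG     MSGID(CPF1002) EXEC(DO)  /* allocation failed; object still in use */
      SNDPGMMSG  MSG('SOMETABLE is still locked; try again later.')
      RETURN
    ENDDO

    CLRPFM     FILE(SOMESCHEMA/SOMETABLE)

    /* Release the allocation when finished */
    DLCOBJ     OBJ((SOMESCHEMA/SOMETABLE *FILE *EXCL))
  ENDPGM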

If I guessed wrong, please edit the question to show the action being taken on the IBM i side that is throwing the error, and the exact error message ID of the error.

EDIT: Better explain locking

Any ODBC access, any RPG record-level access, any process that opens a table results in a lock on the table. If the I/O request is read-only, the lock level is *SHRRD (shared for read). If the I/O request involves update, the lock level is *SHRUPD (shared for update). This is normal and desirable behaviour; it cannot be disabled because it happens deep in the operating system and is in the DNA of IBM i.

These object locks allow for shared access; if your Cognos process has a *SHRUPD lock on a table, then my RPG program can still open, read, write and update records in the same table, at the same time. This is how all modern databases operate.

Typically, when questions like this arise, there is a server process that demands an exclusive (*EXCL) lock on the table. Typical server side operations are CLRPFM, RGZPFM, SAVOBJ. If a table is opened by Cognos with a *SHRUPD lock (WRKOBJLCK will tell you this), a server side process like CLRPFM cannot obtain a *EXCL lock and will issue a CPF3203 - Cannot allocate object for file.
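For reference, something along these lines (placeholder names) shows the jobs currently holding locks on the table and the level of each lock:

  WRKOBJLCK  OBJ(SOMESCHEMA/SOMETABLE) OBJTYPE(*FILE)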

The piece that sometimes gets lost is the pseudo close. A typical ODBC process might look like this:

  • ODBC connect
  • Open cursor
  • Fetch result set
  • Close cursor
  • ODBC disconnect

On the DB2 side, one would expect that when the 'close cursor' step occurs, the *SHRUPD lock is released. That does not necessarily happen, because DB2 performs a pseudo close, leaving the cursor in memory for the next time Cognos does the very same access (except, say, with a different customer). For most operations this is not a problem - who needs a *EXCL lock on a table during the day, when Cognos can request access at any moment? But our legacy isn't so simple, and most of us still have server side processes that do a quick CLRPFM to clear out a batch from a work table. And that's where the CPF3203 occurs.

Since the database manager is holding the lock (not the Cognos process, which disconnected!), we need to tell DB2 that we want a hard close performed before we do a CLRPFM. The ALCOBJ CONFLICT(*RQSRLS) is how we do that; it would need to be added in the CL program doing the CLRPFM. Another way around this is to use SQL to clear the table. On 7.1, we can issue SQL commands in a CL program, so instead of CLRPFM TEMPFILE we can do RUNSQL 'delete from tempfile' - being SQL, it doesn't require a *EXCL lock on the table.
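As a sketch, with SOMESCHEMA/TEMPFILE standing in for whatever your work table is called, that RUNSQL step in the CL program might look like this:

  RUNSQL     SQL('DELETE FROM SOMESCHEMA.TEMPFILE') COMMIT(*NONE)

Because the delete goes through SQL, it runs under normal shared locking and does not need the table to be otherwise idle.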

EDIT: RGZPFM

Some background on RGZPFM is probably in order. Very old applications did not delete records in tables: they set a 'deleted record' byte that the application needed to check. Over time, the tables accumulated increasing numbers of records tagged as deleted or inactive. Disk was expensive, these records were superfluous, so the reorg was born. Typically it was a two step process: copy the file to a temporary copy, then copy it back, omitting the 'deletes'. This worked OK because the reorgs typically ran at night as part of the end of day processing, when interactive users were off the system. An additional benefit was putting the records physically in primary key order; this increased performance for programs reading by key.
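Purely for illustration, that two-step copy often looked something like this in CL; the file names are placeholders, and DELFLAG stands in for whatever 'deleted record' byte the application used:

  /* Copy only the rows not flagged as deleted into a work copy */
  CPYF       FROMFILE(SOMESCHEMA/SOMETABLE) TOFILE(QTEMP/SOMETABLE) +
               CRTFILE(*YES) INCREL((*IF DELFLAG *EQ ' '))
  /* Copy the surviving rows back over the original member */
  CPYF       FROMFILE(QTEMP/SOMETABLE) TOFILE(SOMESCHEMA/SOMETABLE) +
               MBROPT(*REPLACE)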

More modern applications were able to issue an actual, physical DELETE operation on the row, but that row still took up space. Disk was still expensive, and so RGZPFM was born. RGZPFM still needed exclusive access to the table, for the same reasons as the two-step CPYF did. Many of these RGZPFM processes were simply inherited without considering the necessity of an actual reorg on newer (faster) hardware with much more disk. Some applications perform a reorg because 'that's how we always did it.'

It's 2014, and the hardware is really fast and disk is quite cheap. Tables can be created to reuse deleted records - this is the default for tables created with CREATE TABLE, and with very few exceptions it works without a performance issue - and performance was a major reason for RGZPFM. There is still a place for RGZPFM, and that's for use with large tables that have many deleted records - a sparse table. If you have an SQL SELECT that results in a full table scan, that's going to be a performance hit. Generally speaking, there aren't many of those tables; if you have a multi-million row table, it's probably a history file and doesn't see a lot of DELETE activity. But it's something to think about for those few situations where a sparse table has millions of rows AND no good index.
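As a sketch (placeholder names again), an existing physical file can be switched over to reuse deleted records, and a genuinely sparse table can still be reorganized off-hours:

  /* Let DB2 reuse the space held by deleted records on future inserts */
  CHGPF      FILE(SOMESCHEMA/SOMETABLE) REUSEDLT(*YES)

  /* For the rare sparse, badly indexed table: reclaim the space outright */
  RGZPFM     FILE(SOMESCHEMA/SOMETABLE)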
