Question

Row fragmentation was always something you needed to think about when choosing char/varchar2 data types for table columns and when issuing batch deletes/inserts/updates, because Oracle tries to fit new data into freed space, so fragmentation can start to slow down performance over time. Has everything changed with flash storage disks, where data is written as fragmented as possible due to the nature of the flash technology itself? If we no longer need to care about fragmentation, that breaks the whole understanding of table data storage and data fragmentation. Does anyone have experience with storing database files on flash storage disks? Is the fragmentation issue gone with SSDs?

Solution

There is no such thing as "row fragmentation" as you describe it and, realistically, that should never drive your choice of char or varchar2 data types. Your choice of data type should depend on the nature of the data and whether it is really fixed width or variable width. 99.9% of the time, you should prefer varchar2.

The smallest unit of I/O Oracle can possibly read or write is a block. A block is generally 8k (though it can be as small as 2k or as large as 32k). A block will generally store data for multiple rows. Since Oracle has to write the entire block every time, it doesn't matter if it has to move data around within a block.
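As a quick illustration (using the standard Oracle dynamic performance views), you can confirm your database's default block size:

```sql
-- Default block size of the database, in bytes (typically 8192)
SELECT value
  FROM v$parameter
 WHERE name = 'db_block_size';
```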

Within a block, Oracle reserves a certain amount of space for future growth. This is controlled by the PCTFREE setting of the table. If you expect that your rows will grow substantially over time, you'd use a large PCTFREE. If you expect that your rows will be static in size over time, you'd use a small PCTFREE. You wouldn't want to adjust your data types to prevent rows from changing in size, you'd want to adjust the table's PCTFREE to be appropriate for whatever changes you expect.
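As a sketch (the table and columns here are hypothetical), PCTFREE is set in the DDL, and a table whose rows are expected to grow might reserve more than the default 10% of each block:

```sql
-- Reserve 30% of each block for future row growth instead of the default 10%
CREATE TABLE orders (
  order_id  NUMBER        PRIMARY KEY,
  status    VARCHAR2(20),
  notes     VARCHAR2(4000)
) PCTFREE 30;

-- PCTFREE can also be changed later; the new value applies to
-- blocks formatted after the change, not to existing full blocks.
ALTER TABLE orders PCTFREE 30;
```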

If Oracle runs out of space in a block for a particular row (for example, if the row needs to grow and PCTFREE was set too small), Oracle migrates the row to a new block: it leaves a pointer in the original block that points to the new block and moves the actual data there. This can create performance issues, since reading such a row via an index now requires visiting both the old block and the new block; how much it hurts depends on what fraction of the table's rows are migrated. You can also get chained rows if a row is larger than a block or has more than 255 columns, which forces Oracle to do additional I/O, but those don't seem to be what you're concerned about here.
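If you want to check whether migration or chaining is actually happening, a hedged sketch using the standard tooling (the `orders` table name is hypothetical):

```sql
-- List migrated/chained rows into the CHAINED_ROWS table
-- (create it first with $ORACLE_HOME/rdbms/admin/utlchain.sql if needed).
ANALYZE TABLE orders LIST CHAINED ROWS;

SELECT count(*) FROM chained_rows WHERE table_name = 'ORDERS';

-- Alternatively, CHAIN_CNT in USER_TABLES holds the count, but note it
-- is populated by ANALYZE ... COMPUTE STATISTICS, not by DBMS_STATS.
SELECT chain_cnt FROM user_tables WHERE table_name = 'ORDERS';
```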

Regardless of the storage system, you want to set your table's PCTFREE appropriately so that you minimize the amount of row migration that takes place over time (there are other ways to minimize row migration in some corner cases, but 99% of the time you really just want to set PCTFREE correctly). Use the data types appropriate to the data you're storing; don't let concern about row migration influence your choice of data types.
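If a table has already accumulated migrated rows, one common remedy is to rebuild it (again a sketch with hypothetical object names):

```sql
-- Rewrites the table into fresh blocks, eliminating migrated rows.
-- All indexes on the table become UNUSABLE afterward and must be rebuilt.
ALTER TABLE orders MOVE;
ALTER INDEX orders_pk REBUILD;
```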

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow