Question

PostgreSQL has some ridiculously ample field and row size limits (1 GB per field and 1.6 TB per row, respectively).
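For illustration, here is a quick sketch you could run against PostgreSQL (table and column names are mine) that puts a single ~100 MB value into one field:

CREATE TABLE big_values (payload text);
-- repeat() builds a ~100 MB string; PostgreSQL moves it to TOAST storage, out of the main row
INSERT INTO big_values SELECT repeat('a', 100 * 1024 * 1024);
SELECT octet_length(payload) FROM big_values; -- 104857600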

Sybase ASE, on the other hand, has row size limits in the neighborhood of 4 or 8 KB.

E.g. on a Sybase ASE 15.7 I see:

sqldev.mydatabase.1> create table foo2(a VARCHAR(7000));
Warning: Row size (7034 bytes) could exceed row size limit, which is 1962 bytes

This isn't just a performance warning; the limit is actually enforced when I try to insert something larger than 1962 bytes:

Attempt to update or insert row failed because resultant row of size 4462 bytes is larger than the maximum size (1962 bytes) allowed for this table.
Command has been aborted.

I guess that similar limits apply to SQL Server on account of the two products' common heritage, and Google seems to agree (though I haven't tried it on SQL Server).

I hypothesize that this huge disparity must be the result of some widely different architectural decisions, or some fundamental trade-off that was decided differently by each RDBMS. If so, what trade-off or architectural decision would that be?

Solution

As far as SQL Server is concerned:

I'm not sure what you mean - you can store plenty of LOB data in a varchar(max) / nvarchar(max) / varbinary(max) column (up to 2 GB).

You seem to be stuck on page size - yes, a single page is limited to 8 KB, and yes, you are limited to 8060 bytes of non-LOB data per row, but no, you can put plenty more than that into a table - it just can't all be held on a single page. Here is a table with one column, holding one row, where the data is 136 KB (216 KB reserved):

CREATE TABLE #x(a varchar(max));
INSERT #x SELECT REPLICATE(CONVERT(varchar(max), 'a'), 100000);
SELECT DATALENGTH(a) FROM #x; -- 100,000    
EXEC tempdb.sys.sp_spaceused @objname = N'#x'; 
     -- 1 row, 216 KB reserved, 136 KB data size
DROP TABLE #x;

You can of course go much bigger:

  • Change the REPLICATE count to 10,000,000 and you get the same 1 row with 10,016 KB of data (10,088 KB reserved) - see the sketch just after this list.
  • 100 million? 99 MB.
  • A billion? Takes a little longer, but still demonstrable... 997 MB.
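
For reference, here is the 10,000,000-character variant - the only change from the script above is the replication count:

CREATE TABLE #x(a varchar(max));
-- Same pattern as before, just a larger string
INSERT #x SELECT REPLICATE(CONVERT(varchar(max), 'a'), 10000000);
SELECT DATALENGTH(a) FROM #x; -- 10,000,000
EXEC tempdb.sys.sp_spaceused @objname = N'#x';
     -- 1 row, 10,088 KB reserved, 10,016 KB data size
DROP TABLE #x;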

From Books Online:

SQL Server supports row-overflow storage which enables variable length columns to be pushed off-row. Only a 24-byte root is stored in the main record for variable length columns pushed out of row; because of this, the effective row limit is higher than in previous releases of SQL Server. For more information, see the "Row-Overflow Data Exceeding 8 KB" topic in SQL Server Books Online.
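
To make row-overflow concrete, here is a small sketch of my own (not part of the original answer): two variable-length columns whose combined data exceeds the 8060-byte in-row limit still land in a single row, because one of them is pushed off-row:

CREATE TABLE #overflow(a varchar(5000), b varchar(5000));
-- 10,000 bytes of data cannot all fit in-row on an 8 KB page, so SQL Server
-- moves one column off-row, leaving only a 24-byte root in the main record
INSERT #overflow VALUES (REPLICATE('a', 5000), REPLICATE('b', 5000));
SELECT DATALENGTH(a) + DATALENGTH(b) FROM #overflow; -- 10,000
DROP TABLE #overflow;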

Some of this changes with Columnstore (which uses different storage) and with In-Memory OLTP, which still has limits - the "Table and Row Size in Memory-Optimized Table" topic might be good to read up on if you are (or might be) using it.

I don't know enough about PostgreSQL to comment on that platform, but as this was framed as a limitation in SQL Server, I felt the need to defend it (even if I can't defend Sybase similarly - it is not unlike SQL Server, but I confess I don't know the specifics).

Licensed under: CC-BY-SA with attribution