Question

In SQL Server 2005, we defined some UDTs (User Defined Datatypes); one in particular was SK (for Surrogate Key). These were defined as 32-bit 'int', and consequently the size was 4 bytes.
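For context, such an alias type would have been created along the lines of the following sketch (the type name SK comes from the question; the dbo schema, NOT NULL constraint, and table are assumptions for illustration):

```sql
-- Create an alias type for surrogate keys, based on the native 32-bit int.
-- As a strict alias of int, columns of this type store 4 bytes.
CREATE TYPE dbo.SK FROM int NOT NULL;

-- Use it like any built-in type.
CREATE TABLE dbo.Customer
(
    CustomerId dbo.SK PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);
```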

In SQL Server 2008, UDTs for the integer datatypes appear to use a different storage mechanism, depending upon the precision. The documentation states:

Storage: Displays the maximum storage size for the UDT. Maximum storage sizes vary, based on precision.

Precision (digits)    Storage (bytes)
1 – 9                 5
10 – 19               9
20 – 28               13
29 – 38               17

One consequence of this is that UDTs based on BOTH int and bigint will occupy 9 bytes! NOTE: native int and bigint datatypes still occupy 4 and 8 bytes respectively!

9 bytes seems pretty heavyweight for a surrogate key UDT!

Can anyone explain why this is so (in particular what the design rationale for this was)? How come this discrepancy between the UDT and native datatypes?

Are there any alternate approaches apart from NOT using UDTs?


Solution

Excuse me, but it seems to me you've got it wrong. Remember, "SELECT is not broken": Microsoft wouldn't modify such a critical piece of their engine without advertising it strongly, given the conversion problems it would cause.

The table you're quoting comes from the decimal and numeric storage documentation in MSDN; those types can hold far more than an int, which is why they need more bytes.

If you're using a strict alias type, the documentation on using special datatypes strongly hints that an int-based type takes four bytes and no more. If you're using a CLR user-defined type, there be dragons: expect extra overhead.

In any case, you can verify the footprint of your datatypes by looking into sys.types.
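A minimal sketch of such a check (column names are from the sys.types catalog view; the self-join to resolve the base type and the filter on user-defined types are choices made here for illustration):

```sql
-- List user-defined types with their underlying storage size in bytes.
-- max_length reports the declared storage size; for an alias of int it is 4.
SELECT t.name       AS type_name,
       bt.name      AS base_type,
       t.max_length AS storage_bytes,
       t.precision,
       t.scale
FROM sys.types AS t
JOIN sys.types AS bt
  ON bt.user_type_id = t.system_type_id
WHERE t.is_user_defined = 1;
```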

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow