Question

Our SQL Server database has T-SQL logic that packs a large number of decimal(19,6) values into one big varbinary(max); it reduces the size of the database, since instead of thousands of rows we store one row. In T-SQL each decimal(19,6) is converted to varbinary(12) through a simple CAST AS varbinary(12). The T-SQL currently uses cursors (probably inefficient) to do this job, and it is slow. Our DBA suggests that a .NET CLR function would be much faster.

I have a C# method that receives XML with plain decimal values. I'm stuck right now on the question of how to reproduce SQL Server's internal representation of a decimal value in C#.

What I have so far is:

DECLARE @x decimal(19,6) 
SELECT @x = 0
SELECT CAST(@x AS VARBINARY(12))
-- 0x13 06 00 01 00000000

SELECT @x = 1
SELECT CAST(@x AS VARBINARY(12))
--0x13 06 00 01 40420F00

SELECT @x = -1
SELECT CAST(@x AS VARBINARY(12))
--0x13 06 00 00 40420F00

SELECT @x = 1.1
SELECT CAST(@x AS VARBINARY(12))
--0x13 06 00 01 E0C81000

SELECT @x = 2
SELECT CAST(@x AS VARBINARY(12))
--0x13 06 00 01 80841E00

OK, 0x13 is 19 (the precision).

0x06 is 6 (the scale), so together they say decimal(19,6).

00 - ?? (no idea what this byte is).

The next byte, 00 or 01, is obviously the sign (01 for positive or zero, 00 for negative).

And 0x80841E00 is exactly twice 0x40420F00: read as little-endian integers they are 2,000,000 and 1,000,000, i.e. the value multiplied by 10^6 (the scale), so the tail looks like the unscaled value stored as a little-endian integer.

How is a decimal stored, and is that internal representation version specific? (I didn't manage to find it documented anywhere.)

P.S. Personally, I think it's a bad idea to do this kind of thing in C#.
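
For reference, here is a minimal C# sketch that reproduces the byte pattern in the dumps above. It is built purely from those five samples, not from any documentation, so the layout (in particular the third byte, and the assumption that the value is an 8-byte little-endian unscaled integer padded with zeros up to varbinary(12)) is a guess; the class and method names are illustrative only.

using System;

static class DecimalVarbinarySketch
{
    // Hypothetical layout guessed from the dumps above (NOT documented):
    //   byte 0: precision (0x13 = 19)
    //   byte 1: scale     (0x06 = 6)
    //   byte 2: 0x00      (meaning unknown; always 0 in the samples)
    //   byte 3: sign      (0x01 = positive or zero, 0x00 = negative)
    //   bytes 4..11: |value| * 10^6 as a little-endian unsigned integer
    public static byte[] Pack(decimal value)
    {
        var result = new byte[12];
        result[0] = 19;                               // precision
        result[1] = 6;                                // scale
        result[2] = 0x00;                             // unknown byte
        result[3] = (byte)(value < 0 ? 0x00 : 0x01);  // sign

        // unscaled magnitude, e.g. 1.1m -> 1100000
        ulong unscaled = (ulong)Math.Abs(decimal.Truncate(value * 1000000m));
        for (int i = 0; i < 8; i++)
            result[4 + i] = (byte)(unscaled >> (8 * i));  // little-endian
        return result;
    }
}

Pack(1m) yields 13 06 00 01 40 42 0F 00 00 00 00 00, which matches the second dump if the trailing zero bytes were simply trimmed from the output there. Whether byte 2 is really constant, and whether this layout is stable across SQL Server versions, is exactly what remains unverified.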


Solution

I didn't manage to figure it out, so I changed tactics a little: I pack each decimal into a bigint by multiplying it by one million, and unpack it by dividing by one million. A bigint is stored in a clear, understandable way, just like a 64-bit integer in .NET.
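
For illustration, a minimal C# sketch of that approach, assuming the bytes should match what T-SQL produces with CAST(CAST(value * 1000000 AS bigint) AS varbinary(8)); SQL Server writes integers to varbinary in big-endian order, so the bytes are reversed compared to BitConverter on a little-endian machine. The class and method names are illustrative only.

using System;

static class BigIntPacker
{
    // Pack a decimal(19,6)-style value as the 8 bytes of the scaled bigint,
    // big-endian to match CAST(bigint AS varbinary(8)) in T-SQL.
    public static byte[] Pack(decimal value)
    {
        long scaled = (long)decimal.Round(value * 1000000m);   // decimal -> scaled bigint
        var bytes = new byte[8];
        for (int i = 0; i < 8; i++)
            bytes[i] = (byte)(scaled >> (8 * (7 - i)));        // big-endian byte order
        return bytes;
    }

    public static decimal Unpack(byte[] bytes)
    {
        long scaled = 0;
        for (int i = 0; i < 8; i++)
            scaled = (scaled << 8) | bytes[i];                  // rebuild the bigint
        return scaled / 1000000m;                               // back to 6 fractional digits
    }
}

One caveat worth noting: a bigint tops out around 9.22 * 10^18, so after scaling by 10^6 the largest representable value is about 9.22 * 10^12, slightly less than the 13 integer digits a decimal(19,6) allows.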
