Question

I've recently had to work with ASN.1 Unaligned PER encoded data. I'm having a problem understanding how UPER does two's complement integer encoding in the SEQUENCE data type.

It seems to be flipping the most significant bit incorrectly (poor choice of words). For positive integers the leading bit is 1, and for negative integers it's 0. I assume there's a method to the madness here, but after a day's work I can't dig it out of the ITU-T standard, nor can I figure it out on my own. I suspect it is because the INTEGERs are wrapped in the SEQUENCE type, but I don't understand why it would do this. I should point out that my understanding of ASN.1 is very limited.

As a simple example, let's say I have the following schema:

BEGIN
    FooBar ::= SEQUENCE {
      Foo INTEGER (-512..511),
      Bar INTEGER (-512..511)
    }
END

And I'm encoding the following value as Unaligned PER:

test FooBar ::= 
{
   Foo 10,
   Bar -10 
}

Here is the result of the encoding as hex and binary strings, along with the values I expected:

HEX:           0x829F60
BIN:           100000101001111101100000

EXPECTED HEX:  0x02BF60
EXPECTED BIN:  000000101011111101100000

Any ideas as to what's happening here?

Solution

"Foo" and "Bar" should be lowercase: in ASN.1, component identifiers (as opposed to type names) must begin with a lowercase letter.

Your impression that the most significant bit is "flipped" derives from the particular choice of minimum and maximum permitted values of foo and bar in your definition of FooBar.

The permitted value range of foo, in your definition above, is -512..511. In PER, the encoding of foo occupies 10 bits. The least permitted value (-512) is encoded as 0 (in 10 bits). The next permitted value (-511) is encoded as 1 (in 10 bits). And so on.
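That rule is easy to check in code. Here is a minimal Python sketch (not a real PER codec; encode_constrained is just an illustrative helper) that writes a value as an unsigned offset from the lower bound, in just enough bits to cover the range:

    # Illustration of UPER's constrained-integer rule: the value is written
    # as an unsigned offset from the lower bound, using just enough bits to
    # represent the largest possible offset (hi - lo).
    def encode_constrained(value, lo, hi):
        width = (hi - lo).bit_length()       # 1023 -> 10 bits for -512..511
        offset = value - lo                  # always non-negative
        return format(offset, '0{}b'.format(width))

    print(encode_constrained(10,  -512, 511))   # 1000001010  (offset 522)
    print(encode_constrained(-10, -512, 511))   # 0111110110  (offset 502)

The leading 1 you see for 10 is simply the top bit of the offset 522; it is not a sign bit.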

If you define FooBar2 in the following way

FooBar2 ::= SEQUENCE { foo2 INTEGER (1234..5678), bar2 INTEGER (1234..5678) }

foo2 will be encoded in 13 bits (just enough to hold a value between 0 and 4444=5678-1234), with the value 1234 being encoded as 0000000000000, the value 1235 being encoded as 0000000000001, and so on.
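The 13-bit width follows from the same rule; a quick check in Python (purely illustrative, not tied to any particular ASN.1 tool):

    # The largest offset in the range 1234..5678 is 5678 - 1234 = 4444,
    # which needs 13 bits.
    print((5678 - 1234).bit_length())      # 13
    print(format(1234 - 1234, '013b'))     # 1234 -> 0000000000000
    print(format(1235 - 1234, '013b'))     # 1235 -> 0000000000001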

Other tips

If you follow the rules in X.691, you will end up at 11.5.6 (from 13.2.2). This encodes these values, which are constrained whole numbers, as offsets from the lower bound, and therefore as non-negative values. So 10 is encoded as the offset 522 and -10 as the offset 502 (both decimal).

Edit: someone suggested a clarification on the calculations. Your lower bound is -512. Since 10 = -512 + 522, the offset encoded for 10 is 522. Similarly, since -10 = -512 + 502, the offset encoded for -10 is 502. These offsets are then encoded using 10 bits. Therefore, you end up with:

value  offset  encoded bits
-----  ------  ------------
   10     522    1000001010 (522 in binary)
  -10     502    0111110110 (502 in binary)
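Concatenating the two 10-bit offsets and padding to a whole number of octets reproduces the bytes from the question. A short Python sketch (encode_foobar is a hypothetical helper for this particular FooBar definition, not part of any library):

    # Pack the two 10-bit offsets back to back, then pad with zero bits to
    # an octet boundary, as UPER does at the end of the encoding.
    def encode_foobar(foo, bar, lo=-512, hi=511):
        width = (hi - lo).bit_length()                     # 10 bits per field
        bits = ''.join(format(v - lo, '0{}b'.format(width)) for v in (foo, bar))
        bits += '0' * (-len(bits) % 8)                     # 4 padding bits here
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

    print(encode_foobar(10, -10).hex())    # 829f60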