Question

I'm trying to implement RFC 5683, which relies on a hashing function that is described as follows:

H1 = SHA-1(1|1|z) mod 2^128 | SHA-1(1|2|z) mod 2^128 | ... | SHA-1(1|9|z) mod 2^128

Where z is the string to hash. The part I have trouble understanding is the following:

in order to create 1152 output bits for H1, nine calls to SHA-1 are made and the 128 least significant bits of each output are used.

Once I get the output from my hash function (I'm using SHA-256 instead of SHA-1), how do I get the 128 "least significant bits" from that hash? The library I'm using can output the digest as an array of 8 x 32-bit integers:

[-1563099236, 1891088516, -531757887, -2069381238, 131899433, -1500579251, 74960544, -956781525]

Or as a 64-character hexadecimal string:

 "a2d4ff9c70b7b884e04e04c184a7bf8a07dca029a68efa4d0477cea0c6f8ac2b"

But I'm at a loss as to how I would recover the least significant bits from these representations.


Solution

Given that hex string:

a2d4ff9c70b7b884e04e04c184a7bf8a 07dca029a68efa4d0477cea0c6f8ac2b
           most significant <-  | -> least significant

64 hex characters -> 256 bits, so 128 bits is half the string. The least significant bits are at the end of the string, the most significant at the start. Taking the digest mod 2^128 is therefore the same as keeping the last 32 hex characters, i.e. the last 16 bytes, or the last four 32-bit words of the array output.
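As a minimal sketch (in TypeScript, using Node's built-in crypto module; the question's actual hashing library isn't named, and the function names and sample input here are only illustrative), both representations can be truncated like this:

    import { createHash } from "crypto";

    // Low 128 bits from the hex representation: keep the last 32 hex characters.
    function low128FromHex(hexDigest: string): string {
      return hexDigest.slice(-32);
    }

    // Low 128 bits from the 8 x 32-bit signed word array: keep the last 4 words
    // and render each as an unsigned 8-char hex string (>>> 0 drops the sign).
    function low128FromWords(words: number[]): string {
      return words
        .slice(-4)
        .map((w) => (w >>> 0).toString(16).padStart(8, "0"))
        .join("");
    }

    // Example: SHA-256 of some input z, truncated to its 128 least significant bits.
    const digest = createHash("sha256").update("z goes here").digest("hex");
    console.log(low128FromHex(digest)); // last half of the 64-char digest

If you then need all of H1, concatenate the nine truncated outputs in order, giving 9 x 128 = 1152 bits as the RFC describes.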

Licensed under: CC-BY-SA with attribution