Question

Let's say I have the number i = -6884376. How do I interpret it as an unsigned value, something like (unsigned long)i in C?

Solution

Assuming:

  1. You have 2's-complement representations in mind; and,
  2. By (unsigned long) you mean unsigned 32-bit integer,

then you just need to add 2**32 (or 1 << 32) to the negative value.

For example, apply this to -1:

>>> -1
-1
>>> _ + 2**32
4294967295L
>>> bin(_)
'0b11111111111111111111111111111111'

Assumption #1 means you want -1 to be viewed as a solid string of 1 bits, and assumption #2 means you want 32 of them.
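Applying the same recipe to the number from the question (shown here in Python 3, so there is no trailing L on the output) gives the 32-bit pattern you would expect:

>>> i = -6884376
>>> i + 2**32
4288082920
>>> bin(i + 2**32)
'0b11111111100101101111001111101000'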

Nobody but you can say what your hidden assumptions are, though. If, for example, you have 1's-complement representations in mind, then you need to apply the ~ prefix operator instead. Python integers work hard to give the illusion of using an infinitely wide 2's complement representation (like regular 2's complement, but with an infinite number of "sign bits").
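If the ones'-complement view really is what you have in mind, here is a minimal sketch for an assumed 32-bit width: the bit pattern of a negative number is then the bitwise NOT of its magnitude, and the mask trims the result to 32 bits.

>>> i = -6884376
>>> ~(-i) & 0xffffffff   # ones'-complement bit pattern of i, 32 bits wide
4288082919
>>> hex(~(-i) & 0xffffffff)
'0xff96f3e7'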

And to duplicate what the platform C compiler does, you can use the ctypes module:

>>> import ctypes
>>> ctypes.c_ulong(-1)  # stuff Python's -1 into a C unsigned long
c_ulong(4294967295L)
>>> _.value
4294967295L

C's unsigned long happens to be 4 bytes on the box that ran this sample.
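If you want to check how wide unsigned long actually is on your own platform before relying on this, ctypes.sizeof reports the size in bytes (the 4 below is just what the 32-bit box above would show; an LP64 system would report 8):

>>> import ctypes
>>> ctypes.sizeof(ctypes.c_ulong)   # bytes: 4 here, 8 on LP64 platforms such as 64-bit Linux
4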

OTHER TIPS

To get the value equivalent to your C cast, just bitwise-AND with the appropriate mask. For example, if unsigned long is 32 bits:

>>> i = -6884376
>>> i & 0xffffffff
4288082920

or if it is 64 bit:

>>> i & 0xffffffffffffffff
18446744073702667240

Do be aware, though, that although this gives you the value you would have in C, it is still an ordinary Python int, so any subsequent calculation may give a negative result and you'll have to keep applying the mask to simulate a 32- or 64-bit calculation, as sketched below.
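For example, a quick sketch of why re-masking matters (32-bit width assumed): the masked result is unbounded Python arithmetic, so a later subtraction can go negative again unless you mask after each step.

>>> i = -6884376
>>> u = i & 0xffffffff
>>> u - 5000000000                   # plain Python arithmetic just goes negative
-711917080
>>> (u - 5000000000) & 0xffffffff    # re-mask to simulate 32-bit unsigned wraparound
3583050216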

This works because, although Python looks like it stores all numbers as sign and magnitude, the bitwise operations are defined as working on two's-complement values. C stores integers in two's complement but with a fixed number of bits. Python's bitwise operators act on two's-complement values as though they had an infinite number of bits: positive numbers extend leftwards to infinity with zeros, while negative numbers extend leftwards with ones. The & operator turns that leftward string of ones into zeros and leaves you with just the bits that would have fit into the C value.

Displaying the values in hex may make this clearer (and I rewrote the string of f's as an expression to show we are interested in either 32 or 64 bits):

>>> hex(i)
'-0x690c18'
>>> hex(i & ((1 << 32) - 1))
'0xff96f3e8'
>>> hex(i & ((1 << 64) - 1))
'0xffffffffff96f3e8L'

For a 32 bit value in C, positive numbers go up to 2147483647 (0x7fffffff), and negative numbers have the top bit set going from -1 (0xffffffff) down to -2147483648 (0x80000000). For values that fit entirely in the mask, we can reverse the process in Python by using a smaller mask to remove the sign bit and then subtracting the sign bit:

>>> u = i & ((1 << 32) - 1)
>>> (u & ((1 << 31) - 1)) - (u & (1 << 31))
-6884376

Or for the 64 bit version:

>>> u = 18446744073702667240
>>> (u & ((1 << 63) - 1)) - (u & (1 << 63))
-6884376

This inverse process will leave the value unchanged if the sign bit is 0, but obviously it isn't a true inverse because if you started with a value that wouldn't fit within the mask size then those bits are gone.
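If you do this often, both directions can be wrapped in small helper functions. Here is a minimal sketch, assuming the bit width is passed explicitly (the names to_unsigned and to_signed are just illustrative):

def to_unsigned(n, bits):
    # Keep only the low `bits` bits, i.e. the value a C unsigned type of that width would hold.
    return n & ((1 << bits) - 1)

def to_signed(u, bits):
    # Clear the sign bit, then subtract its weight if it was set.
    return (u & ((1 << (bits - 1)) - 1)) - (u & (1 << (bits - 1)))

print(to_unsigned(-6884376, 32))    # 4288082920
print(to_signed(4288082920, 32))    # -6884376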

Python doesn't have built-in unsigned types. You can use mathematical operations to compute a new int representing the value you would get in C, but there is no "unsigned value" of a Python int. The Python int is an abstraction of an integer value, not direct access to a fixed-byte-size integer.

Since Python 3.2:

def unsignedToSigned(n, byte_count):
  # Reinterpret the unsigned integer n as a signed byte_count-byte value.
  return int.from_bytes(n.to_bytes(byte_count, 'little', signed=False), 'little', signed=True)

def signedToUnsigned(n, byte_count):
  # Reinterpret the signed integer n as an unsigned byte_count-byte value.
  return int.from_bytes(n.to_bytes(byte_count, 'little', signed=True), 'little', signed=False)

Output:

In [3]: unsignedToSigned(5, 1)
Out[3]: 5

In [4]: signedToUnsigned(5, 1)
Out[4]: 5

In [5]: unsignedToSigned(0xFF, 1)
Out[5]: -1

In [6]: signedToUnsigned(0xFF, 1)
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 signedToUnsigned(0xFF, 1)

Input In [1], in signedToUnsigned(n, byte_count)
      4 def signedToUnsigned(n, byte_count): 
----> 5   return int.from_bytes(n.to_bytes(byte_count, 'little', signed=True), 'little', signed=False)

OverflowError: int too big to convert

In [7]: signedToUnsigned(-1, 1)
Out[7]: 255

Explanation: to_bytes and from_bytes convert to and from a two's-complement byte representation, treating the number as byte_count * 8 bits wide. For C/C++, you would typically pass 4 or 8 as byte_count for a 32- or 64-bit integer respectively. I first pack the input number in the format it is supposed to come from (using the signed argument to control signed/unsigned), then unpack it in the format we would like it to have come from, which gives the result.

Note the exception raised when trying to use fewer bytes than required to represent the number (In [6]): 0xFF is 255, which can't be represented by C's signed char type (-128 ≤ n ≤ 127). Failing loudly here is preferable to any other behavior.
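As a cross-check against the mask-based answers above, converting the question's number with a 4-byte width gives the same 32-bit value and converts back cleanly (a sketch continuing the session):

In [8]: signedToUnsigned(-6884376, 4)
Out[8]: 4288082920

In [9]: unsignedToSigned(4288082920, 4)
Out[9]: -6884376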

You could use Python's built-in struct module:

Encode:

import struct

i = -6884376
print('{0:b}'.format(i))

packed = struct.pack('>l', i)  # Packing a long number.
unpacked = struct.unpack('>L', packed)[0]  # Unpacking a packed long number to unsigned long
print(unpacked)
print('{0:b}'.format(unpacked))

Out:

-11010010000110000011000
4288082920
11111111100101101111001111101000

Decode:

dec_pack = struct.pack('>L', unpacked)  # Packing an unsigned long number.
dec_unpack = struct.unpack('>l', dec_pack)[0]  # Unpacking a packed unsigned long number to long (revert action).
print(dec_unpack)

Out:

-6884376

[NOTE]:

  • > means big-endian byte order.
  • l is a signed long.
  • L is an unsigned long.
  • With an explicit byte-order character such as >, struct uses its standard sizes, so i and I (int) are 4 bytes just like l and L and can be used interchangeably here; see the check below.
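
A quick check of the sizes struct actually uses: standard sizes apply whenever a byte-order character like > is given, while native mode depends on the platform (a sketch; the last result is what an LP64 system reports).

>>> import struct
>>> struct.calcsize('>l'), struct.calcsize('>L')
(4, 4)
>>> struct.calcsize('>q'), struct.calcsize('>Q')
(8, 8)
>>> struct.calcsize('l')    # native mode: 8 on LP64 systems, 4 on Windows
8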

[UPDATE]

According to @hl037_'s comment, this approach works on int32, not int64 or int128, because I used the long format code in struct.pack(). Nevertheless, for int64 the code only needs the long long format code (q), as follows:

Encode:

i = 9223372036854775807  # the largest int64 number
packed = struct.pack('>q', i)  # Packing an int64 number
unpacked = struct.unpack('>Q', packed)[0]  # Unpacking signed to unsigned
print(unpacked)
print('{0:b}'.format(unpacked))

Out:

9223372036854775807
111111111111111111111111111111111111111111111111111111111111111

Next, follow the same approach for the decoding stage, as sketched below. Keep in mind that q is a signed long long (8 bytes) and Q is an unsigned long long.
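For completeness, a sketch of that decode stage, mirroring the 32-bit version above (the value here is positive, so the signed and unsigned interpretations coincide):

Decode:

dec_pack = struct.pack('>Q', unpacked)         # Packing the unsigned 64-bit number.
dec_unpack = struct.unpack('>q', dec_pack)[0]  # Unpacking back to a signed long long (revert action).
print(dec_unpack)

Out:

9223372036854775807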

But in the case of int128, the situation is slightly different, since struct.pack() has no 16-byte format code. Therefore, you have to split your number into two int64 halves.

Here's how it should be:

i = 10000000000000000000000000000000000000  # an int128 number
print(len('{0:b}'.format(i)))
max_int64 = 0xFFFFFFFFFFFFFFFF
packed = struct.pack('>qq', (i >> 64) & max_int64, i & max_int64)
a, b = struct.unpack('>QQ', packed)
unpacked = (a << 64) | b
print(unpacked)
print('{0:b}'.format(unpacked))

Out:

123
10000000000000000000000000000000000000
111100001011110111000010000110101011101101001000110110110010000000011110100001101101010000000000000000000000000000000000000
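Alternatively, if you don't need struct specifically, the int.to_bytes / int.from_bytes approach shown earlier has no width limit, so a 16-byte conversion works directly. A sketch under that assumption:

i = 10000000000000000000000000000000000000   # the same int128-sized number
u = int.from_bytes(i.to_bytes(16, 'big', signed=True), 'big', signed=False)
print(u == i)   # True: the value is positive, so the unsigned view is identical

u = int.from_bytes((-i).to_bytes(16, 'big', signed=True), 'big', signed=False)
print(u)        # 330282366920938463463374607431768211456, i.e. 2**128 - i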

You can use abs() to strip the sign from a number in Python, but note that this only gives the magnitude; it is not the same as reinterpreting the bits as unsigned the way the C cast does.

a = -12
b = abs(a)
print(b)

Output: 12

Licensed under: CC-BY-SA with attribution