Question

I have a big number, a time in microseconds, stored in two 32-bit variables. I need help converting this microsecond time into milliseconds, so that I can store the result of a difference in a 32-bit number.

More details: I have one time value in two 32-bit variables, where one variable holds the more significant bits and the other the less significant bits. The time has microsecond resolution and I want to convert it to milliseconds. So how do I divide a number that is stored in two variables?


Solution

If you don't have a 64-bit type, you can do it like the following:

uint32_t higher, lower; // your input

lower /= 1000;
lower += (higher % 1000) * 4294967L; // approximate 2^32 / 1000
higher /= 1000;

If the result fits in lower itself, higher will be 0.

Just note that, as @Mikhail pointed out, this solution is approximate and has an error of up to 0.296 * higher + 2 ms (unless I'm missing something).
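A quick way to see the size of that error is to fold the snippet above into a function and feed it a known value; micro2milli_approx is a name made up here for illustration:

```cpp
#include <cstdint>

// The approximate conversion from above as a function.
// Assumes higher < 1000, so higher / 1000 == 0 and the
// result fits in a single uint32_t.
uint32_t micro2milli_approx(uint32_t higher, uint32_t lower)
{
    lower /= 1000;
    lower += (higher % 1000) * 4294967UL;  // truncates 2^32 / 1000 = 4294967.296
    return lower;
}
```

For 1234567890123 µs (higher = 287, lower = 1912276171) this returns 1234567805 rather than the exact 1234567890, i.e. 85 ms low, which is consistent with the 0.296 * higher term of the error bound.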


If you really want a better precision and don't care about efficiency, you can use a bit of floating-point arithmetic in the middle, and round the results correctly. I doubt if it's worth the effort:

uint32_t higher, lower; // your input

// simpler without a helper variable
if (lower % 1000 >= 500)
{
    lower /= 1000;
    ++lower;
}
else
    lower /= 1000;

lower += round((higher % 1000) * 4294967.296); // 2^32 / 1000
higher /= 1000;

You'll need to include <cmath> for round().

As a note, @Mikhail's solution in this case is probably better and may be faster, though it's too complex for me.
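Folded into one helper (micro2milli_rounded is a name made up here, and higher < 1000 is assumed so the result fits), the rounded variant reads:

```cpp
#include <cstdint>
#include <cmath>

// Rounded approximate conversion; assumes higher < 1000 so the
// millisecond result fits in a uint32_t.
uint32_t micro2milli_rounded(uint32_t higher, uint32_t lower)
{
    uint32_t ms = lower / 1000;
    if (lower % 1000 >= 500)  // round the low part to nearest
        ++ms;
    // 2^32 / 1000 = 4294967.296; std::round needs <cmath>
    ms += static_cast<uint32_t>(std::round((higher % 1000) * 4294967.296));
    return ms;
}
```

For higher = 287, lower = 1912276171 (i.e. 1234567890123 µs) this happens to return the exact 1234567890.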


If you have a 64-bit type, you can convert the split value to it:

uint64_t whole_number = higher;
whole_number <<= 32;
whole_number |= lower;

And then you can use whole_number as usual.
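As a sketch, the reassembly plus the millisecond division can be wrapped in one helper (micro2milli64 is a name invented here):

```cpp
#include <cstdint>

// Reassemble the two 32-bit halves into a 64-bit value and divide.
// Requires a 64-bit type; the result is exact.
uint64_t micro2milli64(uint32_t higher, uint32_t lower)
{
    uint64_t whole_number = higher;
    whole_number <<= 32;
    whole_number |= lower;
    return whole_number / 1000;
}
```

For example, higher = 287 and lower = 1912276171 together encode 1234567890123 µs, and the helper returns 1234567890 ms.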


Note that if you only need a difference, it will be faster to subtract the values before actually dividing.

Assuming that you know which value is bigger:

uint32_t higher1, lower1; // smaller value
uint32_t higher2, lower2; // bigger value

uint32_t del_high = higher2 - higher1;
uint32_t del_low = lower2 - lower1;

if (lower2 < lower1)
    --del_high;

And now you can convert the result as explained before. Or, with a bit of luck, del_high will be 0 (if the difference is smaller than 2^32 μs), and you will have the result in del_low (in μs).
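The borrow handling can be packaged as a small helper; micro_diff is a hypothetical name, and the caller must still know which value is bigger:

```cpp
#include <cstdint>

// Compute (higher2:lower2) - (higher1:lower1), where the second
// value is the bigger one, borrowing from the high word when the
// low-word subtraction wraps around.
void micro_diff(uint32_t higher1, uint32_t lower1,
                uint32_t higher2, uint32_t lower2,
                uint32_t& del_high, uint32_t& del_low)
{
    del_high = higher2 - higher1;
    del_low = lower2 - lower1;  // wraps modulo 2^32 if lower2 < lower1
    if (lower2 < lower1)
        --del_high;             // account for the borrow
}
```

For instance, subtracting 10 µs from 2^32 + 5 µs (higher2 = 1, lower2 = 5) yields del_high = 0, del_low = 4294967291, i.e. 2^32 - 5 µs.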

OTHER TIPS

The simplest way is to use a 64-bit integer type, but I assume you cannot do this. Since you want your answer in a 32-bit integer, the high-order value of microseconds cannot be greater than 999, or it would not fit into 32 bits after division by 1000. So the biggest number of microseconds you can be operating with is 999 * 2^32 + (2^32 - 1) = 4294967295999. That gives you 13 decimal digits, so you can use double to handle the division precisely.
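That observation can be sketched as follows (micro2milli_double is a made-up name). Since 13 decimal digits fit comfortably in the 53-bit mantissa of a double, the input is represented exactly and the truncated quotient matches integer division:

```cpp
#include <cstdint>
#include <stdexcept>

// Exact conversion via double; only valid while hi < 1000, i.e.
// while the millisecond result still fits in a uint32_t.
uint32_t micro2milli_double(uint32_t hi, uint32_t lo)
{
    if (hi >= 1000)
        throw std::runtime_error("result would not fit in uint32_t");
    double micro = hi * 4294967296.0 + lo;          // exact: value < 2^53
    return static_cast<uint32_t>(micro / 1000.0);   // truncates like integer division
}
```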

If you are forced for some reason to use only 32-bit integers, the answer of Michał Górny gives you an approximate solution: e.g. for whole_number = 1234567890123 it will give a result of 1234567805, because dividing 2^32 by 1000 leaves a remainder.

The only way to get an exact answer with 32-bit integers is to use long arithmetic. It requires the long-arithmetic digits to be stored in a type wide enough to also hold a remainder, so you split your two 32-bit integers into four 16-bit digits. After that you can divide as on paper, and you have enough bits to store the remainder. See the code of micro2milli:

#include <iostream>
#include <stdexcept>
#include <cstdint>

typedef uint32_t uint32;
typedef uint64_t uint64;

const uint32 MAX_INT = 0xFFFFFFFF;

uint32 micro2milli(uint32 hi, uint32 lo)
{
  if (hi >= 1000)
  {
    throw std::runtime_error("Cannot store milliseconds in uint32!");
  }

  uint32 r = (lo >> 16) + (hi << 16);
  uint32 ans = r / 1000;
  r = ((r % 1000) << 16) + (lo & 0xFFFF);
  ans = (ans << 16) + r / 1000;

  return ans;  
}

uint32 micro2milli_simple(uint32 hi, uint32 lo)
{
  lo /= 1000;
  return lo + (hi % 1000) * 4294967L;
}

int main()
{
  uint64 micro = 1234567890123;
  uint32 micro_high = micro >> 32;
  uint32 micro_low = micro & MAX_INT;

  // 1234567805
  std::cout << micro2milli_simple(micro_high, micro_low) << std::endl;
  // 1234567890
  std::cout << micro2milli(micro_high, micro_low) << std::endl;
}

First, split your two variables h (high part) and l (low part) into three variables with 22 significant bits each.

uint32_t x0 = l & 0x3FFFFF;
uint32_t x1 = ((l >> 22) | (h << 10)) & 0x3FFFFF;
uint32_t x2 = h >> 12;

Now do the division (there are 10 spare bits in each of x0, x1, x2, and 1000 < 2^10 = 1024, so no overflow is possible):

uint32_t t2 = x2 / 1000;
x1 |= (x2 % 1000) << 22;
uint32_t t1 = x1 / 1000;
x0 |= (x1 % 1000) << 22;
uint32_t t0 = (x0 + 500) / 1000;
    /* +0 for round down, +500 for round to nearest, +999 for round up */

Now put things back together.

uint32_t r0 = t0 + (t1 << 22); /* parentheses needed: + binds tighter than << */
uint32_t r1 = (t1 >> 10) + (t2 << 12) + (r0 < t0);

Using the same technique but with four variables holding 16 bits each, you can handle divisors up to 65535. Beyond that it becomes harder with 32-bit arithmetic.
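Collecting the steps above into one function gives the following sketch (micro2milli_div1000 is a made-up name; note the parentheses around t1 << 22, since + would otherwise be applied before the shift):

```cpp
#include <cstdint>

// 32-bit-only long division of h:l by 1000, rounding to nearest.
// The quotient is returned split into r1:r0, like the input.
void micro2milli_div1000(uint32_t h, uint32_t l, uint32_t& r1, uint32_t& r0)
{
    // three 22-bit digits
    uint32_t x0 = l & 0x3FFFFF;
    uint32_t x1 = ((l >> 22) | (h << 10)) & 0x3FFFFF;
    uint32_t x2 = h >> 12;

    // divide digit by digit, carrying each remainder down
    uint32_t t2 = x2 / 1000;
    x1 |= (x2 % 1000) << 22;
    uint32_t t1 = x1 / 1000;
    x0 |= (x1 % 1000) << 22;
    uint32_t t0 = (x0 + 500) / 1000;  // +500: round to nearest

    // reassemble; (r0 < t0) detects the carry out of the low word
    r0 = t0 + (t1 << 22);
    r1 = (t1 >> 10) + (t2 << 12) + (r0 < t0);
}
```

For h = 287, l = 1912276171 (1234567890123 µs) this yields r1 = 0, r0 = 1234567890.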

Assuming you cannot use a 64-bit int for this, I would suggest using a multiple-precision library, like GMP.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow