Question

I am running QNX.

I used a function to get the clock cycles per second:

uint64_t clockPerSec = getCPS();
uint64_t currentClockCycle = getCurrentCycle();

The functions:

#include <sys/neutrino.h>   // ClockCycles()
#include <sys/syspage.h>    // SYSPAGE_ENTRY(qtime)

uint64_t getCPS()
{
   return (~(uint64_t)0) / SYSPAGE_ENTRY(qtime)->cycles_per_sec;
}

uint64_t getCurrentCycle()
{
   return ClockCycles();
}

Then, after running the function, I do

currentClockCycle = getCurrentCycle() - currentClockCycle;

I am not using it throughout the whole application, so I don't have overruns/overflows of the clock; I'm just measuring one function's performance after some additions/changes.

Anyway, I am just wondering if I am getting the right output.

I calculated the result this way:

double result = static_cast<double>(clockPerSec)/currentClockCycle;
// does this give me the time in seconds??
// then I multiplied it by 1000000 to get a microsecond measurement

Am I doing anything wrong?

When using

ftime(&t_start);

then

ftime(&t_end);

and outputting the difference, the time I get is bigger, almost twice as large: with the first method I get 0.6 ms, while with ftime I get 1.xx ms.
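For reference, here is a minimal sketch of how such an ftime-based measurement is typically written (t_start and t_end are the names from above; the struct timeb arithmetic and the output are assumptions about the code the question elides):

#include <cstdio>
#include <sys/timeb.h>

int main()
{
    struct timeb t_start, t_end;

    ftime(&t_start);
    // ... the function under test runs here ...
    ftime(&t_end);

    // ftime() fills struct timeb with whole seconds (time)
    // and milliseconds within that second (millitm).
    long elapsed_ms = static_cast<long>(t_end.time - t_start.time) * 1000
                    + (t_end.millitm - t_start.millitm);
    std::printf("elapsed: %ld ms\n", elapsed_ms);
    return 0;
}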


Solution

You are simply measuring (and mixing up) two different things. The clock-cycle count is a count of the CPU ticks the kernel gives to your app; it doesn't count other apps running under the same kernel. ftime, on the other hand, returns absolute (wall-clock) time, so the difference between two ftime calls is the absolute duration between two time points, while the first count measures only how many CPU ticks your app (and only your app) has consumed. You can therefore consider the first number as exclusively your app's running time: if there were no other apps on the host, the two measurements would (theoretically) be equal, or close to it.
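To make the comparison concrete, here is a minimal side-by-side sketch. busy_work() is a hypothetical stand-in for the function being measured; ClockCycles() and SYSPAGE_ENTRY(qtime)->cycles_per_sec are the QNX calls from the question, and the cycle delta is converted to time as cycles divided by cycles_per_sec:

#include <cstdint>
#include <cstdio>
#include <sys/neutrino.h>   // ClockCycles()
#include <sys/syspage.h>    // SYSPAGE_ENTRY(qtime)
#include <sys/timeb.h>      // ftime()

// Hypothetical stand-in for the function being profiled.
static void busy_work()
{
    for (volatile int i = 0; i < 1000000; ++i) {}
}

int main()
{
    uint64_t cps = SYSPAGE_ENTRY(qtime)->cycles_per_sec;

    struct timeb t_start, t_end;
    ftime(&t_start);
    uint64_t c_start = ClockCycles();

    busy_work();

    uint64_t c_end = ClockCycles();
    ftime(&t_end);

    // Cycle counter: elapsed seconds = cycle delta / cycles per second.
    double cycle_us = static_cast<double>(c_end - c_start) / cps * 1e6;

    // ftime: wall-clock delta, converted to microseconds.
    double wall_us = ((t_end.time - t_start.time) * 1000.0
                      + (t_end.millitm - t_start.millitm)) * 1000.0;

    std::printf("ClockCycles: %.1f us, ftime: %.1f us\n", cycle_us, wall_us);
    return 0;
}

Per the explanation above, on an otherwise idle host the two printed figures should (theoretically) roughly agree.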
