I need to model the latency of an LTE simulator under several system configurations (single core, multi core, multi node on the same server, multiple servers). Does anyone have an idea how to measure the amount of computation of a piece of source code (or of a part of the whole code, if I want to)? I think the possible approaches are:

  1. Take the difference between timestamps at the start and end of the execution, using clock() (a sketch follows below)
  2. Total number of operations / instructions per second (machine dependent)
  3. Total number of instructions / instructions per second

The 3rd is the more general version of the 2nd.

The simulator is in Matlab, and I am free to use C (through MEX files).
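
For approach 1, here is a minimal sketch of timestamp-difference timing done on the MATLAB side. It measures wall-clock time with clock/etime; run_simulation is a hypothetical placeholder for the code being measured:

t_start = clock;                      % current time as a 6-element date vector
run_simulation();                     % hypothetical call to the code under test
t_elapsed = etime(clock, t_start);    % elapsed wall-clock time in seconds
fprintf('Elapsed time: %.3f s\n', t_elapsed);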


Solution

Matlab has a very easy way to do this built in. Use the following code in a script file:

tic;
operations...;
toc;

This automatically prints the time elapsed for the set of commands in between. Hope this helps.
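
If the measurement needs to be stored per configuration rather than just printed, toc can also return the elapsed time as a value. A small sketch, with simulate_subframe as a hypothetical placeholder for the simulator section under test:

tic;                                  % start the stopwatch timer
simulate_subframe();                  % hypothetical call to the code under test
elapsed = toc;                        % elapsed wall-clock time in seconds
fprintf('Elapsed time: %.4f s\n', elapsed);

MATLAB also provides timeit for timing a function handle (with warm-up and averaging), and cputime if CPU time rather than wall-clock time is wanted.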
