Question

I was wondering what the overhead of using the time command in Unix is.

I know how to use it, but I want to know how much longer the command

$ time java HelloWorld

takes on a terminal than the command

$ java HelloWorld

I am specifically interested in how this overhead varies with the running time of the program being measured.

Context: I am using it to measure the time taken for a bunch of long-running experiments written in Java.


Solution

The overhead is fixed and, based on the source code, is due only to the fact that an extra process (time itself) is started, which introduces a small amount of extra processing (a). Normally, the shell would start your program directly but, in this case, the shell starts time and time starts your process (with a fork).

This extra processing involves:

  • argument processing.
  • the time taken to fork and exec the child.

While the process being measured is running, time itself is simply waiting for it to exit (with a wait call), so it has no impact on that process.
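
To make that concrete, here is a rough C sketch of the fork/exec/wait pattern described above. It is only an illustration of the idea, not the actual time source: the toy_time name is made up, and it reports only wall-clock ("real") time, whereas the real command also collects user and sys CPU time when it waits for the child.

/* toy_time.c: rough illustration of the pattern time(1) follows.
 * Not the real source; it reports only wall-clock ("real") time. */
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    struct timeval start, end;
    gettimeofday(&start, NULL);        /* timestamp before starting the child */

    pid_t pid = fork();                /* the one extra process creation */
    if (pid == 0) {
        execvp(argv[1], &argv[1]);     /* child becomes the measured command */
        perror("execvp");
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);          /* just wait; no effect on the child */

    gettimeofday(&end, NULL);          /* timestamp after the child exits */
    double real = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    fprintf(stderr, "real\t%.3fs\n", real);

    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}

Compiled and run as ./toy_time java HelloWorld, the fixed cost it adds (argument handling plus one fork/exec) is the same whether HelloWorld runs for a millisecond or a day.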

So, while the start-up time of the time process is indeed included in the measurements, it will only be significant for very short processes. If your process runs for an appreciable amount of time, the overhead of time is irrelevant.

As to what I mean by appreciable, you can see the effect time has by running it on a very fast executable, and then check whether the overhead grows noticeably for longer-running processes:

pax> time sleep 0
real    0m0.001s
user    0m0.000s
sys     0m0.000s

pax> time sleep 1
real    0m1.001s
user    0m0.000s
sys     0m0.000s

pax> time sleep 10
real    0m10.001s
user    0m0.000s
sys     0m0.004s

pax> time sleep 100
real    1m40.001s
user    0m0.000s
sys     0m0.000s

In other words, hardly any effect at all.

Now, since you're only likely to be timing processes if they're long-running (it's hard to care whether a single process takes one or two milliseconds unless you're running it many times in succession, in which case there are better ways to increase performance), the fixed overhead of time gets less and less important.


(a): And, if you're using a shell with time built in (such as bash with its time reserved word), even that small overhead disappears.

OTHER TIPS

The overhead of time should be fairly constant regardless of the program being timed. All it has to do is take a timestamp, run the program, take another timestamp, and output the result.

In terms of accuracy: the shorter the program you are running, the more impact time will have on the measurement. For example, time on a "Hello World" program is probably not going to give you good results, whereas time on something that runs for a decent period will be very accurate, since time's overhead will be well down in the noise.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow