Question

I've tried searching for answers but couldn't find one that exactly matches my problem.

I'm building a stochastic simulator of biological systems, where the output is a scatter-plot time series of concentration levels sampled at random points in time. I would now like to compute the average time series over multiple simulation runs, and I'm unsure how to proceed, since up to 500 runs, each with several thousand measurements, can be expected.

Naturally, I could bucket the time axis into intervals, probably losing some precision, or try to interpolate the missing measurements (a sketch of the latter follows below). What is the preferred method in my case?

This has to be implemented in Java, and I would prefer a citation to a paper that explains the method.
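To make the interpolation option concrete, here is a minimal sketch of what I have in mind: each run is resampled onto a common, evenly spaced grid by linear interpolation, and the gridded runs are then averaged point-wise. All names here (RunAverager, Run, averageOnGrid) are made up for illustration; the samples of each run are assumed to be sorted by time, and the grid is assumed to have at least two points.

```java
import java.util.Arrays;

// Minimal sketch, not a vetted implementation: resample each irregularly
// sampled run onto a shared grid by linear interpolation, then average.
public class RunAverager {

    /** One simulation run: times[i] is the (sorted) time of concentrations[i]. */
    public record Run(double[] times, double[] concentrations) {}

    /** Linear interpolation of a run's value at time t, clamped at both ends. */
    static double interpolate(Run run, double t) {
        double[] ts = run.times();
        double[] cs = run.concentrations();
        if (t <= ts[0]) return cs[0];
        if (t >= ts[ts.length - 1]) return cs[ts.length - 1];
        int hi = Arrays.binarySearch(ts, t);
        if (hi >= 0) return cs[hi];      // exact sample at t
        hi = -hi - 1;                    // first index with ts[hi] > t
        double w = (t - ts[hi - 1]) / (ts[hi] - ts[hi - 1]);
        return cs[hi - 1] + w * (cs[hi] - cs[hi - 1]);
    }

    /** Mean concentration of all runs on n evenly spaced times in [t0, t1]. */
    static double[] averageOnGrid(Run[] runs, double t0, double t1, int n) {
        double[] mean = new double[n];
        for (int i = 0; i < n; i++) {
            double t = t0 + (t1 - t0) * i / (n - 1);
            for (Run run : runs) mean[i] += interpolate(run, t);
            mean[i] /= runs.length;
        }
        return mean;
    }
}
```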

Thanks!

Solution

If you want a book, Simulation Modeling and Analysis by Law or Discrete-Event System Simulation by Banks, Carson, Nelson, and Nicol both devote several chapters to time series output analysis. For more recent developments in the field, several analysis tracks publish papers in the Paper Archives section at WinterSim.org. For a flow chart of how to decide which type of analysis is appropriate, see Figure 4 on p. 60 of the tutorial paper from WinterSim 2007.
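As a concrete starting point before diving into those references: the standard replication-based analysis is straightforward once each run has been resampled onto a common grid (as in your interpolation idea). Treat the runs as independent replications and report a mean plus a confidence interval at each grid time. A minimal sketch follows, where meanWithCI is a hypothetical helper and 1.96 is the normal-approximation 95% quantile, which is reasonable with hundreds of replications:

```java
// Minimal sketch of across-replication point estimates. valuesByRun[r][i]
// is assumed to hold run r's (already gridded) concentration at time
// index i, with the runs being independent replications.
public class ReplicationStats {

    /** Returns {mean, lower, upper} per grid time (approx. 95% interval). */
    static double[][] meanWithCI(double[][] valuesByRun) {
        int runs = valuesByRun.length;
        int n = valuesByRun[0].length;
        double[][] out = new double[n][];
        for (int i = 0; i < n; i++) {
            double sum = 0, sumSq = 0;
            for (double[] run : valuesByRun) {
                sum += run[i];
                sumSq += run[i] * run[i];
            }
            double mean = sum / runs;
            double var = (sumSq - runs * mean * mean) / (runs - 1); // sample variance
            double half = 1.96 * Math.sqrt(var / runs);             // normal approx.
            out[i] = new double[] { mean, mean - half, mean + half };
        }
        return out;
    }
}
```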
