Question

I am trying to find a trend in several datasets. The trends involve finding a best-fit line, but I imagine the procedure would not be too different for any other model (just possibly more time-consuming).

There are 3 conceivable scenarios:

  1. All good data, where all the data fits a single trend with low variability.
  2. All bad data, where all or most of the data exhibits tremendous variability and the entire dataset must be discarded.
  3. Partially good data, where some of the data may be good while the rest needs to be discarded.

If the percentage of data with extreme variability is too high, then the entire set must be discarded. This means there is essentially only one type of dataset, with the percentage of bad data varying:

0% bad = Case 1
100% bad = Case 2

I am only looking for contiguous sections with low variability; i.e., isolated individual points that happen to fit the trend do not count.

What I am looking for is a smart way to subsection the dataset and search for the specified trend. By the nature of the problem, I am not looking for sections that best fit the overall trend. I understand that a subsection with "cleaner" data will end up having slightly different trendline properties than the overall fit (which would include the outliers). This is exactly what I want, since this part of the data best reflects the actual trend.

I am fluent in C++, but since I am trying to make the code open source and cross-platform, I am sticking to ISO C++ standards. This implies no .NET, but if you have a .NET example I would appreciate your help converting it to ISO C++. I also know Java, some assembly, and Fortran.

The datasets themselves are not huge, but there are about 150 million of them, so brute force may not be the best way.

Thanks in advance


I understand that I have left some things up in the air, so let me clarify:

  • Each dataset can, and probably will, have different trends; i.e. I am not looking for the same trend throughout all datasets.
  • The program user will define how close a fit they want
  • The program user will define how contiguous the subset must be before it is considered for trend fitting
  • In case the program is extended to allow for any type of fit (not simply linear), the user will define what model is to be fit -- THIS IS NOT A PRIORITY and if the above query is solved then I am sure this expansion would be relatively trivial
  • The outliers come about as a result of the nature of the experiment and the data acquisition technique, whereby data from "bad" sections must still be collected even though these areas are known to give outliers. The discarding of these outliers DOES NOT imply that the data is being manipulated to fit any trend (statistics disclaimer, hehe).

Solution

The RANSAC algorithm is one approach to what you're looking for, if I understand you right: http://en.wikipedia.org/wiki/RANSAC
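For the linear case, a minimal RANSAC sketch in ISO C++ might look like the following. The `Point`/`Line` types, the `tolerance` and `iterations` parameters, and the function names are all illustrative assumptions, not part of any standard API; it also assumes each dataset has at least a couple of inliers.

```cpp
// Minimal RANSAC sketch for robust line fitting (y = a*x + b).
// All names and parameters are illustrative, not from any library.
#include <cmath>
#include <cstdlib>
#include <vector>

struct Point { double x, y; };
struct Line  { double a, b; };   // y = a*x + b

// Ordinary least-squares fit over the given points.
Line fitLeastSquares(const std::vector<Point>& pts)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i) {
        sx  += pts[i].x;  sy  += pts[i].y;
        sxx += pts[i].x * pts[i].x;
        sxy += pts[i].x * pts[i].y;
    }
    Line l;
    l.a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    l.b = (sy - l.a * sx) / n;
    return l;
}

// RANSAC: repeatedly fit a line through a random minimal sample
// (2 points), count the points within `tolerance` of it, and keep
// the model with the most inliers. Finally refit on those inliers.
Line fitRansac(const std::vector<Point>& pts, double tolerance,
               int iterations, std::vector<Point>& inliersOut)
{
    std::vector<Point> best;
    for (int it = 0; it < iterations; ++it) {
        const Point& p1 = pts[std::rand() % pts.size()];
        const Point& p2 = pts[std::rand() % pts.size()];
        if (p1.x == p2.x) continue;            // degenerate sample
        const double a = (p2.y - p1.y) / (p2.x - p1.x);
        const double b = p1.y - a * p1.x;

        std::vector<Point> inliers;
        for (std::size_t i = 0; i < pts.size(); ++i)
            if (std::fabs(pts[i].y - (a * pts[i].x + b)) < tolerance)
                inliers.push_back(pts[i]);

        if (inliers.size() > best.size())
            best.swap(inliers);
    }
    inliersOut = best;
    return fitLeastSquares(best);   // refined fit on inliers only
}
```

One caveat given your requirement: the consensus set RANSAC returns is the largest set of inliers, which is not necessarily contiguous, so you would still want to check the returned inliers for contiguity afterwards.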

OTHER TIPS

You might use the term "outlier" in your searches. An outlier is a data point that represents either a special condition not captured in the experiment design, or a statistical fluke (a point drawn from the extremes of the distribution, in a data set too small to expect that to happen).

Outlier elimination carries some risk of biasing the result toward your expectations.
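If you do go the elimination route, the usual loop is fit, trim the worst residuals, refit. A rough sketch, reusing the `Point`, `Line`, and `fitLeastSquares` pieces from the RANSAC sketch above (the cutoff `k`, expressed in standard deviations of the residuals, is an arbitrary illustrative choice):

```cpp
// Naive outlier elimination sketch: fit a line, drop points whose
// residual exceeds k standard deviations, refit, repeat.
// Reuses Point, Line, and fitLeastSquares from the sketch above.
std::vector<Point> trimOutliers(std::vector<Point> pts, double k, int passes)
{
    for (int p = 0; p < passes; ++p) {
        const Line l = fitLeastSquares(pts);

        // Standard deviation of the residuals about the current fit.
        double ss = 0;
        for (std::size_t i = 0; i < pts.size(); ++i) {
            const double r = pts[i].y - (l.a * pts[i].x + l.b);
            ss += r * r;
        }
        const double sigma = std::sqrt(ss / pts.size());

        // Keep only the points within k standard deviations.
        std::vector<Point> kept;
        for (std::size_t i = 0; i < pts.size(); ++i)
            if (std::fabs(pts[i].y - (l.a * pts[i].x + l.b)) <= k * sigma)
                kept.push_back(pts[i]);

        if (kept.size() == pts.size()) break;  // converged, nothing trimmed
        pts.swap(kept);
    }
    return pts;
}
```

The biasing risk mentioned above is visible here: each trimming pass pulls the fit further toward the points that already agree with it, which is exactly why the experiment-driven justification in your clarification matters.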
