What is an efficient and accurate algorithm for excluding outliers from a data set?

StackOverflow https://stackoverflow.com/questions/2069793

  •  20-09-2019

Problem

I have 200 rows of data (which implies a small data set). I want to perform a statistical analysis, but before that I want to exclude the outliers.

What are potential approaches for this purpose? Accuracy is a concern.

I am new to statistics, so I need help with very basic approaches.


Solution

Start by plotting the leverage of the outliers and then go for some good ol' interocular trauma (aka look at the scatterplot).

Lots of statistical packages have outlier/residual diagnostics, but I prefer Cook's D. You can calculate it by hand if you'd like using this formula from mtsu.edu (original link is dead, this is sourced from archive.org).
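If you'd rather not compute it by hand, statsmodels can give you Cook's D from an OLS fit. A minimal sketch, assuming Python with statsmodels (the synthetic data and the 4/n flagging rule are illustrative, not part of the original answer):

  # Cook's distance from an OLS fit; data and threshold are illustrative only.
  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  x = rng.normal(size=200)
  y = 2.0 * x + rng.normal(size=200)
  y[:3] += 10                                # inject a few artificial outliers

  X = sm.add_constant(x)                     # design matrix with an intercept
  results = sm.OLS(y, X).fit()
  cooks_d, _ = results.get_influence().cooks_distance

  # A common rule of thumb is to take a closer look at points with D > 4/n.
  print(np.where(cooks_d > 4 / len(y))[0])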

Other tips

Overall, the thing that makes a question like this hard is that there is no rigorous definition of an outlier. I would actually recommend against using a certain number of standard deviations as the cutoff for the following reasons:

  1. A few outliers can have a huge impact on your estimate of standard deviation, as standard deviation is not a robust statistic.
  2. The interpretation of standard deviation depends hugely on the distribution of your data. If your data is normally distributed then 3 standard deviations is a lot, but if it's, for example, log-normally distributed, then 3 standard deviations is not a lot.
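As a quick illustration of the first point, a toy sketch (the numbers are made up): one gross error inflates the standard deviation by two orders of magnitude, while a robust statistic like the median absolute deviation barely moves.

  # One extreme value inflates the standard deviation far more than the MAD.
  import numpy as np

  clean = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3])
  dirty = np.append(clean, 100.0)            # a single gross outlier

  print(clean.std(ddof=1), dirty.std(ddof=1))        # ~0.2 vs ~30

  def mad(a):
      return np.median(np.abs(a - np.median(a)))

  print(mad(clean), mad(dirty))                      # ~0.15 vs ~0.2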

There are a few good ways to proceed:

  1. Keep all the data, and just use robust statistics (median instead of mean, Wilcoxon test instead of T-test, etc.). Probably good if your dataset is large.

  2. Trim or Winsorize your data. Trimming means removing the top and bottom x%. Winsorizing means setting the top and bottom x% to the x-th and (100 - x)-th percentile values, respectively (a short sketch of both this and option 4 appears after the list).

  3. If you have a small dataset, you could just plot your data and examine it manually for implausible values.

  4. If your data looks reasonably close to normally distributed (no heavy tails and roughly symmetric), then use the median absolute deviation instead of the standard deviation as your test statistic and filter to 3 or 4 median absolute deviations away from the median.
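A rough sketch of options 2 and 4 in Python (the 5% limits and the 3-MAD cutoff are example settings, not recommendations from the answer):

  # Option 2: trim or winsorize; option 4: filter on median absolute deviations.
  import numpy as np
  from scipy.stats import mstats

  rng = np.random.default_rng(1)
  x = np.append(rng.normal(loc=50, scale=5, size=200), [120.0, -40.0])

  # Trimming: drop the top and bottom 5%.
  lo, hi = np.percentile(x, [5, 95])
  trimmed = x[(x >= lo) & (x <= hi)]

  # Winsorizing: clamp the top and bottom 5% instead of dropping them.
  winsorized = mstats.winsorize(x, limits=[0.05, 0.05])

  # MAD filter: keep points within 3 median absolute deviations of the median.
  med = np.median(x)
  mad = np.median(np.abs(x - med))
  kept = x[np.abs(x - med) <= 3 * mad]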

You may have heard the expression 'six sigma'.

This refers to plus and minus 3 sigma (i.e., standard deviations) around the mean.

Anything outside the 'six sigma' range could be treated as an outlier.
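In code this is just a band of three standard deviations either side of the mean; a minimal sketch with made-up data:

  # Flag anything outside mean +/- 3 standard deviations (the 'six sigma' band).
  import numpy as np

  rng = np.random.default_rng(2)
  x = np.append(rng.normal(size=200), [8.0, -9.0])

  mu, sigma = x.mean(), x.std(ddof=1)
  outliers = x[np.abs(x - mu) > 3 * sigma]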

On reflection, I think 'six sigma' is too wide.

This article describes how it amounts to "3.4 defective parts per million opportunities."

It seems like a pretty stringent requirement for certification purposes. Only you can decide if it suits you.

Depending on your data and its meaning, you might want to look into RANSAC (random sample consensus). This is widely used in computer vision, and generally gives excellent results when trying to fit data with lots of outliers to a model.

And it's very simple to conceptualize and explain. On the other hand, it's non-deterministic, which may cause problems depending on the application.
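For regression-style data, scikit-learn ships a ready-made RANSAC wrapper; a minimal sketch (the synthetic data and the default LinearRegression base model are assumptions for illustration):

  # RANSAC: repeatedly fit on random subsets and keep the model with the
  # largest consensus set; points outside that set are treated as outliers.
  import numpy as np
  from sklearn.linear_model import RANSACRegressor

  rng = np.random.default_rng(3)
  X = rng.uniform(0, 10, size=(200, 1))
  y = 3.0 * X.ravel() + rng.normal(scale=0.5, size=200)
  y[:20] += rng.uniform(20, 40, size=20)     # contaminate 10% of the points

  ransac = RANSACRegressor(random_state=0)   # uses LinearRegression by default
  ransac.fit(X, y)

  outliers = ~ransac.inlier_mask_            # boolean mask of rejected points
  print(outliers.sum(), "points flagged as outliers")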

Compute the standard deviation on the set, and exclude everything outside of the first, second or third standard deviation.

Here is how I would go about it in SQL Server.

The query below will get the average weight from a fictional Scale table holding a single weigh-in for each person while not permitting those who are overly fat or thin to throw off the more realistic average:

  select w.Gender, Avg(w.Weight) as AvgWeight
    from ScaleData w
    join ( select d.Gender, Avg(d.Weight) as AvgWeight, 
                  2*STDDEVP(d.Weight) StdDeviation
             from ScaleData d
            group by d.Gender
         ) d
      on w.Gender = d.Gender
     and w.Weight between d.AvgWeight-d.StdDeviation 
                      and d.AvgWeight+d.StdDeviation
   group by w.Gender  

There may be a better way to go about this, but it works and works well. If you have come across another more efficient solution, I’d love to hear about it.

NOTE: with the 2*STDDEVP band, the query above filters out roughly the most extreme 5% of weigh-ins (about 2.5% in each tail, assuming the weights are roughly normally distributed) for the purpose of the average. You can adjust how much is excluded by changing the multiplier in the 2*STDDEVP, as per: http://en.wikipedia.org/wiki/Standard_deviation

If you just want to analyse the data, say to compute the correlation with another variable, it's OK to exclude outliers. But if you want to model or predict, it is not always best to exclude them straight away.

Try treating them with methods such as capping, or, if you suspect the outliers contain information or a pattern, replace them with missing values and then model/predict them. I have written some examples of how you can go about this here using R.
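The linked examples are in R, but the same two treatments are easy to sketch in Python with pandas (the 1st/99th percentile caps and the use of NaN as the missing marker are illustrative assumptions):

  # Capping vs. set-to-missing, as described above; thresholds are illustrative.
  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(4)
  s = pd.Series(np.append(rng.normal(100, 10, size=200), [400.0, -250.0]))

  # Capping: clamp extreme values to the 1st and 99th percentiles.
  lo, hi = s.quantile([0.01, 0.99])
  capped = s.clip(lower=lo, upper=hi)

  # Set-to-missing: turn suspected outliers into NaN and impute (or model) later.
  flagged = s.where(s.between(lo, hi))         # values outside the band -> NaN
  imputed = flagged.fillna(flagged.median())   # simplest possible imputation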

License: CC-BY-SA with attribution
Not affiliated with StackOverflow