Question

I'm implementing some estimation metrics that take samples of optimisation functions and estimate their properties. One of the metrics requires the data to be sorted; however, since the metric is only an estimate, I was wondering whether I could get a comparably accurate result if the data wasn't sorted precisely, but well enough.

I would expect that data could be nearly sorted in less time than it takes to sort it fully. However, I can't find any algorithms that do this, which I find rather odd, so I thought I'd ask around here.

My idea of "imperfectly sorted" is quite loose. It could, for example, mean:

  • Many elements are perfectly sorted, but some are not
  • Adjacent elements are not necessarily in the correct order, but when you compare groups of elements, the values increase on average; if plotted, the data would show a clear general upwards trend (a toy check for this is sketched after the list)
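
Something like this captures the second notion; the block size and the strict all-or-nothing test are arbitrary choices of mine, not part of any standard definition:

```python
def block_trend_ok(data, block=32):
    """Crude test of the second notion above: split the data into
    fixed-size blocks and check that the block means never decrease,
    i.e. the sequence trends upwards even if neighbours are swapped."""
    means = [sum(data[i:i + block]) / min(block, len(data) - i)
             for i in range(0, len(data), block)]
    return all(a <= b for a, b in zip(means, means[1:]))
```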

For the first definition above, I suppose you could take a sub-sample of the data and just quicksort that, but I don't imagine it would buy much performance without seriously reducing the quality of the estimate.
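
Something like this is what I have in mind for the sub-sampling route; sorted_subsample and the fraction parameter are illustrative names of my own:

```python
import random

def sorted_subsample(data, fraction=0.1):
    """Sort only a random subsample and feed that to the estimator
    in place of the fully sorted sequence."""
    k = max(1, round(len(data) * fraction))
    return sorted(random.sample(data, k))
```

The cost drops from sorting all n elements to sorting n * fraction of them, but every discarded element is invisible to the estimator, which is exactly the quality concern above.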

It might also be possible to take something like quicksort and limit the depth of the recursion: after d levels of partitioning, every element sits in its correct segment of the partition, even though the segments themselves remain internally unsorted.
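
A rough sketch of that depth-limited idea, with random pivots and a Hoare-style partition; the function name and the max_depth parameter are mine:

```python
import random

def depth_limited_quicksort(a, max_depth=4):
    """Quicksort that stops recursing after max_depth partitioning levels.
    Every element ends up inside the correct segment of the partition, so
    its distance from its sorted position is bounded by that segment's
    length (roughly len(a) / 2**max_depth with well-balanced pivots), at
    O(len(a) * max_depth) cost instead of O(n log n)."""
    def go(lo, hi, d):
        if hi - lo < 2 or d == 0:
            return  # segment is "sorted enough": leave it internally unsorted
        pivot = a[random.randrange(lo, hi)]
        i, j = lo, hi - 1
        while i <= j:  # Hoare-style partition around the pivot
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        go(lo, j + 1, d - 1)  # recurse one level shallower on each side
        go(i, hi, d - 1)
    go(0, len(a), max_depth)
```

With max_depth = 0 the list is untouched, and with max_depth around log2(len(a)) it degenerates into a full quicksort, so the parameter smoothly trades sorting time against disorder.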

Do imperfect sorting algorithms exist? Can they significantly outperform regular sorting algorithms?

No correct solution
