Question

I've written a variation of the cumsum function, where I multiply the previous sum by a decay factor before adding the current value:

cumsum_decay <- function(x, decay=0.5){
  # each element becomes the current value plus the decayed running total so far
  for (i in 2:length(x)){
    x[i] <- x[i] + decay*x[i-1]
  }
  return(x)
}
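
For instance, with the default decay of 0.5, each value is added to half of the previous running total:

cumsum_decay(c(1, 1, 0, 1))
# 1.000 1.500 0.750 1.375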

Here's a demo, using a binary variable to make the effect clear:

set.seed(42)
Events <- sample(0:1, 50, replace=TRUE, prob=c(.7, .3))
plot(cumsum_decay(Events), type='l')
points(Events)

[Plot: the decayed cumulative sum drawn as a line, with the raw binary events overlaid as points]

Compiling this function speeds it up a lot:

#Benchmark
library(compiler)
library(rbenchmark)
cumsum_decayCOMP <- cmpfun(cumsum_decay)
Events <- sample(0:1, 10000, replace=TRUE, prob=c(.7, .3))
benchmark(replications=rep(100, 1),
          cumsum_decay(Events),
          cumsum_decayCOMP(Events),
          columns=c('test', 'elapsed', 'replications', 'relative'))

                      test elapsed replications relative
1     cumsum_decay(Events)    3.28          100    6.979
2 cumsum_decayCOMP(Events)    0.47          100    1.000

But I suspect vectorizing would improve it even more. Any ideas?


Solution

Try the filter function from the stats package (called explicitly here, since dplyr::filter masks it if dplyr is loaded):

filter.decay <- function(x, decay=0.5) stats::filter(x, decay, method = "recursive")
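
As a quick sanity check (assuming the cumsum_decay loop from the question is still in scope), the recursive filter reproduces the loop exactly; stats::filter returns a "ts" object, so as.numeric() is used for the comparison:

set.seed(42)
Events <- sample(0:1, 50, replace=TRUE, prob=c(.7, .3))
all.equal(as.numeric(filter.decay(Events)), cumsum_decay(Events))
# should be TRUE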

It is very fast.
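
The timings below come from extending the question's benchmark roughly as follows (a sketch, since the exact call isn't shown here; elapsed times will vary by machine):

Events <- sample(0:1, 10000, replace=TRUE, prob=c(.7, .3))
benchmark(replications=rep(100, 1),
          cumsum_decay(Events),
          cumsum_decayCOMP(Events),
          filter.decay(Events),
          columns=c('test', 'elapsed', 'replications', 'relative'))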

#                       test elapsed replications relative
# 1     cumsum_decay(Events)    4.83          100    19.32
# 2 cumsum_decayCOMP(Events)    1.00          100     4.00
# 3     filter.decay(Events)    0.25          100     1.00
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow