Question

First, I have no idea whether the professor gave the wrong question. Anyway, I tried to generate u ~ U(0,1) and invert the CDF F(x) = 1 - (1+x)*exp(-x) (for this CDF, x = g(F(x)) cannot be solved by hand), then compute the root of F(x) = u numerically to get the samples the question asks for.

Because the root ranges over [0, Inf), I thought uniroot() was out of the question, so I wrote my own Newton's method.

My first version looked like this:

f <- function(x) {
  ifelse(x >= 0, x * exp(-x), 0)   # density f(x) = x*exp(-x) on [0, Inf)
}

in.C <- function(n) {
  a <- runif(n)
  G <- numeric(n)
  for (i in 1:n) {
    del <- 1
    x <- 2
    # Newton's method: solve F(x) - a[i] = 0, using F'(x) = f(x)
    while (abs(del) > 1e-12) {
      del <- (1 - (1 + x) * exp(-x) - a[i]) / f(x)
      x <- x - del
    }
    G[i] <- x
  }
  G
}
system.time(tt <- in.C(100000))

However, one Newton step can overshoot and take x below zero; there f(x) = 0, the division blows up, and errors follow. So I revised my code like this:

f <- function(x) {
  ifelse(x >= 0, x * exp(-x), 0)
}

in.C <- function(n) {
  a <- runif(n)
  G <- NULL
  for (i in 1:n) {
    del <- 1
    x <- 2
    while (abs(del) > 1e-12) {
      if (x >= 0) {
        del <- (1 - (1 + x) * exp(-x) - a[i]) / f(x)
        x <- x - del
      } else {
        break   # give up on iterations that leave the domain
      }
    }
    if (x >= 0) G[i] <- x
  }
  G[!is.na(G)]   # drop the rejected draws
}
system.time(tt <- in.C(100000))
hist(tt, breaks = 70, right = FALSE, freq = FALSE)
curve(f(x), from = 0, to = 20, add = TRUE)

Clearly, this code is wrong, because it silently discards the draws whose roots lie near zero, which biases the sample.

So, my question is whether my code can be revised to compute the right answer, and if not, whether there is another way to do it. Any assistance is appreciated.
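For reference, one way to keep Newton's iterates inside [0, Inf) is to add a safeguard: whenever a step would carry x below zero, halve x instead of taking the step. This is a minimal sketch, not the poster's original code; the names `F.cdf`, `f.pdf`, and `newton.inv` are illustrative.

```r
# Safeguarded Newton iteration for F(x) = u on [0, Inf).
F.cdf <- function(x) 1 - (1 + x) * exp(-x)   # target CDF
f.pdf <- function(x) x * exp(-x)             # its derivative, the density

newton.inv <- function(u, x0 = 2, tol = 1e-12, max.iter = 200) {
  x <- x0
  for (i in 1:max.iter) {
    del <- (F.cdf(x) - u) / f.pdf(x)
    x.new <- x - del
    if (x.new <= 0) x.new <- x / 2   # safeguard: halve x rather than leave the domain
    if (abs(x.new - x) < tol) { x <- x.new; break }
    x <- x.new
  }
  x
}

set.seed(1)
u <- runif(5)
z <- vapply(u, newton.inv, numeric(1))
max(abs(F.cdf(z) - u))   # residuals should be tiny
```

Because x stays strictly positive, f.pdf(x) never vanishes and the division is always well-defined, so no draws need to be rejected.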


Solution

You can use uniroot(...) for this.

[Note: If the point of this exercise is to implement your own version of a Newton Raphson technique, let me know and I'll delete the answer.]

If I'm understanding this correctly, you want to generate random samples from a distribution with probability density function f and cumulative distribution function F where

f = x*exp(-x)
F = 1 - (1+x)*exp(-x)

As you imply, this can be done by generating a random sample from U[0,1] and transforming that according to the inverse CDF of F. The procedure is very similar to the ones posted here and here, except that you already have an expression for the CDF.

f <- function(x) x * exp(-x)            # density
F <- function(x) 1 - (1 + x) * exp(-x)  # CDF

# invert the CDF numerically, then vectorize so it accepts a vector of probabilities
F.inv <- function(y) uniroot(function(x) F(x) - y, interval = c(0, 100))$root
F.inv <- Vectorize(F.inv)

x <- seq(0, 10, length.out = 1000)
y <- seq(0, 1, length.out = 1000)

par(mfrow = c(1, 3))
plot(x, f(x), type = "l", main = "f(x)")
plot(x, F(x), type = "l", main = "CDF of f(x)")
plot(y, F.inv(y), type = "l", main = "Inverse CDF of f(x)")

Then, generate X ~ U[0,1] and Z = F.inv(X).

set.seed(1)
X <- runif(1000,0,1)   # random sample from U[0,1]
Z <- F.inv(X)

par(mfrow = c(1, 1))
hist(Z, freq = FALSE, breaks = c(seq(0, 10, length = 30), Inf), xlim = c(0, 10))
lines(x, f(x), col = "red", lty = 2)   # overlay the density
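As an aside (my own observation, not part of the original answer): f(x) = x*exp(-x) is exactly the Gamma density with shape 2 and rate 1, so base R's qgamma()/rgamma() provide the inverse CDF and the samples directly, which makes a handy cross-check on F.inv:

```r
# f(x) = x*exp(-x) is the Gamma(shape = 2, rate = 1) density,
# so its inverse CDF is available as qgamma(., shape = 2, rate = 1).
set.seed(1)
X <- runif(1000)
Z.gamma  <- qgamma(X, shape = 2, rate = 1)    # matches F.inv(X) up to uniroot tolerance
Z.direct <- rgamma(1000, shape = 2, rate = 1) # or skip the inversion and sample directly
```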

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow