Question

I tried to minimize the following function:

func <- function(qq){
  x <- qq[1]
  y <- qq[2]
  output <- 1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2
  return(output)
}

subject to x + y = 1 and 0 <= x, y <= 1. To use gosolnp() from the Rsolnp package, I first defined cons to pass to the eqfun argument:

cons <- function(qq)
  sum(qq)

Then I applied gosolnp function:

install.packages("Rsolnp")
require(Rsolnp)
res <- gosolnp(fun = func, LB = c(0, 0), UB = c(1, 1), eqfun = cons, eqB = 1)

res$pars
[1] 0.8028775 0.1971225
res$value
[1]  2.606528e-09 -5.551115e-17

The answer should be x = 0 and y = 1, but as you can verify, every run of gosolnp returns new points at which func is approximately (not exactly) 0. Mathematica and Maple optimize this function very quickly and give the true answer x = 0 and y = 1, whereas every run in R gives a different solution, which is not correct.

I also tried other optimization functions, such as spg() in package BB or DEoptim(), but the problem remained unsolved.

So my questions are:

1- Is there any way I can minimize func in R?

2- Is there any difference in precision between R and Mathematica, and why can Mathematica give me the exact answer while R cannot?

Thank you in advance


Solution

If you have two variables x and y with y = 1 - x, then you really have a problem in just one variable, x. Noting that, you can reparametrise your function as

1 - 2 * x + x^2 - 2 * (1 - x) + 2 * x * (1 - x) + (1 - x)^2

and working through the algebra shows that this is constant (identically zero) as a function of x. Indeed, the original objective factors as (x + y - 1)^2, which vanishes everywhere on the constraint x + y = 1. Thus any value of x in (0, 1) is a solution, and which one your algorithm converges to is essentially random: it depends on numerical roundoff and your choice of starting point.

The fact that gosolnp's returned value is zero to within the limits of numerical precision should have been a tip-off; simply plotting the curve would also have revealed it.
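A quick base-R check makes this concrete: substituting y = 1 - x and evaluating on a grid shows the objective is zero (up to roundoff) everywhere on the constraint, and optimize() simply reports some interior point as the minimiser. This is a sketch using only base R; the function name g is mine.

```r
# Reparametrise with y = 1 - x: the objective as a function of x alone
g <- function(x) {
  y <- 1 - x
  1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2
}

# Identically zero on the constraint, up to floating-point roundoff
xs <- seq(0, 1, length.out = 101)
max(abs(g(xs)))

# Any point in [0, 1] minimises g; optimize() just picks one of them
opt <- optimize(g, interval = c(0, 1))
opt$objective
```

Which x optimize() returns is arbitrary here, but the objective value it reports is always ~0, matching what gosolnp was showing.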

OTHER TIPS

I can't speak to these particular packages, but nloptr(...) in package nloptr seems to work well:

# Non-Linear Optimization (package::nloptr)
F <- function(v) {
  x <- v[1]
  y <- v[2]
  1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2
}
Hc <- function(v) return(1-sum(v))

library(nloptr)
opt <- nloptr(x0=c(1/2,1/2), eval_f=F, lb = c(0,0), ub = c(1,1), 
              eval_g_eq = Hc, 
              opts = list(algorithm="NLOPT_GN_ISRES",maxeval=1e6))
opt$solution
# [1] 0.0005506997 0.9994492982

Your function is identically equal to 0 once the constraint y = 1 - x is substituted, so there is no point in trying to minimize it.

library(Ryacas)
x <- Sym("x")
y <- 1-x
Simplify( 1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2)

which gives:

expression(0)
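If you prefer not to install Ryacas, the same conclusion follows from the factorization noted above: the objective equals (x + y - 1)^2, which is 0 whenever x + y = 1. A hedged base-R spot-check of that identity (the function names f and f2 are mine):

```r
# Original objective and its factored form
f  <- function(x, y) 1 - 2 * x + x^2 - 2 * y + 2 * x * y + y^2
f2 <- function(x, y) (x + y - 1)^2

# Spot-check the identity at random points, not just on the constraint
set.seed(1)
x <- runif(10)
y <- runif(10)
all.equal(f(x, y), f2(x, y))  # the two forms agree
```

This is only a numerical spot-check, not a proof, but expanding (x + y - 1)^2 by hand reproduces the original polynomial term by term.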
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow