Error in the result of the optim function in R


I am simulating values from the Birnbaum-Saunders distribution with parameters alpha and beta, and then trying to estimate these values using the optim function. Although the optimization converges, I am getting the warnings below. How can I fix this?

n<-1000
z<-rnorm(n,0,1)
alpha=2
beta=2
t<-beta*((alpha*z/2)+sqrt((alpha*z/2)^2+1))^2
remove(par,alpha,beta)  

Loglik <- function(par){
  alpha <- par[1]
  beta <- par[2]
  ll <- sum(log(t + beta)) - n*log(2*alpha) - (n/2)*log(2*pi*beta) -
        (3/2)*sum(log(t)) - (1/(2*alpha^2))*sum((t/beta) + (beta/t) - 2)
  return(-ll)
}
alpha_0=1
beta_0=5
start=c(alpha_0,beta_0)
optim(start,fn=Loglik,method="BFGS",hessian=T)$par

Result:

[1] 2.018785 2.133996
There were 25 warnings (use warnings() to see them)

The errors are:

> warnings()
Warning messages:
1: In log(t + beta) : NaNs produced
2: In log(2 * pi * beta) : NaNs produced
3: In log(t + beta) : NaNs produced

I have checked my likelihood, which is correct; I cannot find the error in the code. Can anyone help?

    
asked by anonymous 20.05.2016 / 03:27

2 answers


This is not an error per se; it is a warning (see warnings()) indicating that something probably happened in a way it should not have, but that the code was still executed to completion.

In your case, the warnings appear because the function tried to take the logarithm of a negative number. Fortunately, the optim function handles this well and ignores the problem, so much so that the result is correct.
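A quick illustration of this distinction: a warning does not stop execution, and the expression still returns a value.

```r
# log() of a negative number raises a warning ("NaNs produced"),
# but execution continues and a value (NaN) is returned
x <- log(-1)
is.nan(x)  # TRUE: the code ran to completion despite the warning
```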

Anyway, you can do a few things to avoid warnings:

  • You can use suppressWarnings() to hide the warnings. They will still be raised, just hidden:

    suppressWarnings(optim(start, fn = Loglik, method = "BFGS", hessian = T)$par)

  • You can prevent negative values from entering the calculation of ll by adding the following line inside Loglik, before ll is computed (pi is always positive, so there is no need to check it):

    if (any(c(t, beta, alpha) < 0)) return(NA)
    
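Putting the second suggestion together, a minimal sketch of the guarded log-likelihood (reusing the question's setup; the fixed seed is an assumption added here for reproducibility):

```r
# Question's setup, with a seed (an assumption) so the run is reproducible
set.seed(123)
n <- 1000
z <- rnorm(n, 0, 1)
alpha <- 2
beta <- 2
t <- beta * ((alpha*z/2) + sqrt((alpha*z/2)^2 + 1))^2

Loglik <- function(par) {
  alpha <- par[1]
  beta <- par[2]
  # Guard: the likelihood is only defined for alpha, beta > 0, so bail out
  # with NA before any log() can emit a "NaNs produced" warning
  if (any(par <= 0)) return(NA)
  ll <- sum(log(t + beta)) - n*log(2*alpha) - (n/2)*log(2*pi*beta) -
        (3/2)*sum(log(t)) - (1/(2*alpha^2))*sum((t/beta) + (beta/t) - 2)
  return(-ll)
}

fit <- optim(c(1, 5), fn = Loglik, method = "BFGS", hessian = T)
fit$par  # estimates of (alpha, beta); should land near the true (2, 2)
```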

    Note that these "solutions" are just cosmetics to avoid the warnings. A more elegant (and probably more correct) solution would involve analyzing why the algorithm proposes negative values of alpha and beta, deciding whether that is a problem in itself (it seems not to be), and, if so, fixing the problem at its source (in the optimization method or in the function being optimized).

        
    20.05.2016 / 06:09

    I would like to highlight another approach I learned today, an interesting tip that may also help other people.

    Replace alpha and beta in the log-likelihood with exp(lalpha) and exp(lbeta) and optimize over the log-scale parameters, using the starting values lalpha_0 = log(alpha_0) and lbeta_0 = log(beta_0). Since exp() is strictly positive, the optimizer can never propose negative alpha or beta, so the warnings disappear. The estimates on the original scale are then recovered as exp(par[1]) and exp(par[2]).
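    A minimal sketch of this log-scale reparameterization (reusing the question's setup; the fixed seed is an assumption added here for reproducibility):

```r
# Question's setup, with a seed (an assumption) so the run is reproducible;
# with alpha = beta = 2 the simulated t simplifies to 2*(z + sqrt(z^2 + 1))^2
set.seed(123)
n <- 1000
z <- rnorm(n, 0, 1)
t <- 2 * (z + sqrt(z^2 + 1))^2

Loglik <- function(par) {
  alpha <- exp(par[1])  # exp() keeps alpha strictly positive
  beta  <- exp(par[2])  # exp() keeps beta strictly positive
  ll <- sum(log(t + beta)) - n*log(2*alpha) - (n/2)*log(2*pi*beta) -
        (3/2)*sum(log(t)) - (1/(2*alpha^2))*sum((t/beta) + (beta/t) - 2)
  return(-ll)
}

start <- c(log(1), log(5))  # lalpha_0 = log(alpha_0), lbeta_0 = log(beta_0)
fit <- optim(start, fn = Loglik, method = "BFGS", hessian = T)
exp(fit$par)  # back-transform to the (alpha, beta) scale
```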

        
    21.05.2016 / 05:04