There is no problem with this setup or this code. Here is what happens when I change the initial guess to 0.9:
y <- c(1.1178329, 1.0871448, 1.0897010, 1.0759255, 1.0535190, 0.8725332)
x <- c(6, 5, 4, 3, 2, 1)
library(minpack.lm)
mod1 <- nlsLM(y ~ x^(-p),
              start = list(p = 0.9),
              trace = TRUE, lower = 0.1, upper = 1)
It. 0, RSS = 2.99354, Par. = 0.9
It. 1, RSS = 0.246237, Par. = 0.1
It. 2, RSS = 0.246237, Par. = 0.1
More iterations occur, as expected. If I keep the initial value fixed and reduce the lower bound of the search interval to 0.00001, see what happens:
mod2 <- nlsLM(y ~ x^(-p),
              start = list(p = 0.1),
              trace = TRUE, lower = 0.00001, upper = 1)
It. 0, RSS = 0.246237, Par. = 0.1
It. 1, RSS = 0.0544138, Par. = 1e-05
It. 2, RSS = 0.0544138, Par. = 1e-05
Apparently, the ideal value of p for this data set is quite close to 0. Note also that the RSS (residual sum of squares) decreases from mod1 to mod2, indicating that the second setting fits better than the first.
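If you prefer to compare the fits numerically rather than reading the trace, the RSS can be extracted directly (a quick sketch, assuming mod1 and mod2 were fitted as above):

# deviance() returns the residual sum of squares for nls-class objects,
# which is the RSS that nlsLM reports in the trace
deviance(mod1)  # 0.246237, with p stuck at the lower bound 0.1
deviance(mod2)  # 0.0544138, with p stuck at the new lower bound 1e-05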
In short, there is nothing wrong with the code. It is simply converging very fast.
It may also be that the best value of p is not positive. Perhaps the value of p that best fits the data does not lie in the interval determined by your original setting, which was between 0.1 and 1. If that is the case, the estimate of p will converge to something at the boundary of the interval defined in the function call. Had you considered this hypothesis?
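A direct way to test this (a minimal sketch, using the fits above) is to compare each estimate with the bounds passed to nlsLM; an estimate sitting exactly on a bound means the box constraint, not the data, stopped the search:

# Both earlier fits land exactly on their lower bounds, a sign that the
# constraint is active and the unconstrained optimum lies below it
coef(mod1)  # p = 0.1,   equal to lower = 0.1
coef(mod2)  # p = 1e-05, equal to lower = 1e-05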
See the code below, for example:
mod3 <- nlsLM(y ~ x^(-p),
              start = list(p = 0.1),
              trace = TRUE, lower = -1, upper = 1)
It. 0, RSS = 0.246237, Par. = 0.1
It. 1, RSS = 0.0218977, Par. = -0.0816128
It. 2, RSS = 0.0166599, Par. = -0.0607616
It. 3, RSS = 0.0166587, Par. = -0.0604426
It. 4, RSS = 0.0166587, Par. = -0.0604428
This negative value of p produced the lowest RSS of all. It also looks much more like a genuine fit: it took longer to arrive at an answer, with more iterations and a smoother convergence of the parameter value.
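As a final check (a sketch I am adding, not part of your original code), you can drop the bounds entirely. Since the estimate in mod3 (about -0.0604) is interior to [-1, 1], an unconstrained fit should settle on essentially the same value, and plotting the fitted curve over the data makes the quality of the fit visible:

# Unconstrained fit: nlsLM searches without box constraints when bounds are omitted
mod4 <- nlsLM(y ~ x^(-p), start = list(p = 0.1), trace = TRUE)
coef(mod4)      # should agree with mod3, around -0.0604
deviance(mod4)  # should match mod3's RSS, around 0.0167

# Visual check: data versus the fitted power curve
plot(x, y, pch = 16, xlab = "x", ylab = "y")
curve(x^(-coef(mod4)), from = 1, to = 6, add = TRUE)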