[Scilab-users] Increment of parameters in optim function

Pierre Vuillemin contact at pierre-vuillemin.fr
Tue Feb 25 19:22:50 CET 2014


Dear Romain,

Just some hints concerning one of the underlying algorithms of optim,
the "quasi-Newton" one with a Wolfe-type line search: this algorithm is
a descent algorithm, i.e. given an initial point, it will find a local
minimum of your objective function "near" (in some sense) the initial
point.
The main ideas are the following:
- at each iteration, the gradient of the objective function at the
current position x is computed (together with an approximation of the
Hessian, which explains the name "quasi-Newton" method). The descent
direction, i.e. a direction along which the function decreases, is
obtained from the opposite of the gradient (rescaled by the inverse of
the approximate Hessian in the quasi-Newton case), so the algorithm
moves that way. But how far? The length of the step in that direction
is chosen by a line search algorithm;
- in this case, it is a Wolfe-type line search, which means that the
step length is chosen so that the next point (i) yields a sufficient
decrease in the objective function AND (ii) a sufficient decrease in
the slope of the function along the search direction (some variations
exist between the weak and strong Wolfe conditions).
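
For reference, and assuming the textbook formulation (optim may
implement a slightly different variant), for a step length t > 0 along
the descent direction d, with g the gradient at the current point x,
the weak Wolfe conditions read:

    f(x + t*d) <= f(x) + c1*t*g'*d      (sufficient decrease)
    g(x + t*d)'*d >= c2*g'*d            (curvature condition)

with constants 0 < c1 < c2 < 1.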

The algorithm stops when a local minimum is reached (up to the stopping
tolerances). Concerning your question, you cannot force the algorithm
to take larger steps between two iterations.
This means that in your case, your initial point may already be close
to a local minimum.
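
To make this concrete, here is a minimal sketch of a call to optim on
the Rosenbrock function (the cost function and the initial point are
just examples, not your actual problem); the cost function must return
the objective value and its gradient:

    function [f, g, ind] = cost(x, ind)
        // objective value
        f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
        // gradient of the objective, same shape as x
        g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1)); 200*(x(2) - x(1)^2)];
    endfunction

    x0 = [-1.2; 1];                 // initial point
    [fopt, xopt] = optim(cost, x0); // quasi-Newton ("qn") is the default
    disp(xopt)

Starting from a different x0 may well lead to a different local
minimum, which is exactly the point of the multi-start hint below.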

Some hints:
- since you only have 2 optimization parameters, you can plot your
objective function to see where the local minima are, or to get some
indication of what the initial point should be;
- try different initial points to (hopefully) find different local
minima and keep the best one (see the sketch after this list);
- you may want to try evolutionary algorithms (genetic algorithms or
particle swarm optimization), which do not get stuck as easily in local
minima. I think that there is a built-in function or an ATOMS module
containing such algorithms;
- there exist plenty of other optimization algorithms which may (or may
not) be better suited to your problem, depending on its structure.
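
As an illustration of the multi-start idea, a minimal sketch (the grids
of initial values below are made up, and 'cost' stands for your own
cost function written for optim, as in the example above):

    best_f = %inf;
    for x1_0 = [600 700 765 900]        // candidate starting values for x1
        for x2_0 = [10000 12900 15000]  // candidate starting values for x2
            [f, xopt] = optim(cost, [x1_0; x2_0]);
            if f < best_f then          // keep the best local minimum found
                best_f = f;
                best_x = xopt;
            end
        end
    end
    disp(best_x)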

Hope this helps,

Best regards,

Pierre

On Tuesday 25 February 2014 at 16:15 +0100, Romain Desbats wrote:
> Dear Scilab Community,
> 
> I am using the optim function to optimise a problem with two
> variables (called x1 and x2). During the optimisation I monitor the
> values of those variables as well as the value of the cost function
> to minimise.
> 
> I find that after 36 iterations the values of the parameters have
> not changed much:
> 
> iteration 1 : x1=765       x2=12900 f=1.402D+08
> iteration 36: x1=764.82264 x2=12900 f=1.400D+08
> 
> Is there a way to force the step size of the parameters that are
> being optimised? For instance x1 could only take the values 765, 764,
> 763...
> 
> My question is similar to this one but I did not find further
> discussion:
> http://lists.scilab.org/pipermail/bugzilla/2010-February/000638.html
> 
> Thanks a lot for your help.
> 
> Romain
