[scilab-Users] analogy for Matlab fminunc function

Michaël Baudin michael.baudin at scilab.org
Tue Jun 23 09:10:23 CEST 2009


Hi,

It is true that the optim command is designed to provide the same
functionality. But note that there are some differences between the
numerical algorithms used in fminunc and in optim. The following is a
list of the most important differences between the two.

* The optim function also handles bound constraints, while
fminunc does not (in Matlab, bounds are handled by fmincon).
* The optim function also handles nonsmooth functions (with the "nd"
option), while fminunc does not.
* The fminunc function can exploit the sparsity structure of the Hessian,
while optim cannot.
* The fminunc function provides a single LargeScale option, whereas
optim provides two distinct algorithms:
    - "qn" for small to medium scale problems,
    - "gc" for medium to large scale problems.
* The fminunc function provides BFGS, DFP and steepest descent
methods, while optim is based only on the BFGS formula (in both "qn"
and "gc"). Note that BFGS is generally regarded as a better update
formula than DFP (steepest descent is not recommended for practical
problems).
* The medium-scale algorithm in fminunc updates the Hessian matrix
with the BFGS formula and the inverse Hessian matrix with the DFP
formula. In optim, the "qn" algorithm updates the Cholesky factors
of the Hessian matrix with the BFGS formula.
* The large-scale algorithm in fminunc is based on a trust-region
algorithm, while the "gc" algorithm in optim is based on a line-search
with cubic interpolation.
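
To make the comparison concrete, here is a minimal sketch of the optim
calling sequence in Scilab; the quadratic cost function and the bound
values are made up for illustration (check "help optim" for the exact
calling sequence in your Scilab version):

```scilab
// Cost function in the calling sequence optim expects:
// it must return the objective f and the gradient g.
function [f, g, ind] = cost(x, ind)
    f = sum((x - 1).^2);   // a simple quadratic, for illustration
    g = 2 * (x - 1);       // its exact gradient
endfunction

x0 = [0; 0];

// Quasi-Newton method (the default, same as passing "qn"):
[fopt, xopt] = optim(cost, x0);

// Conjugate-gradient variant for larger problems:
[fopt, xopt] = optim(cost, x0, "gc");

// Bound constraints, which fminunc does not support:
[fopt, xopt] = optim(cost, "b", [-1; -1], [0.5; 0.5], x0);
```

In the bounded call, xopt should land on the upper bound, since the
unconstrained minimizer (1, 1) lies outside the box.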

Are you trying to translate an existing Matlab script, or is this
a fresh development?

Best regards,

Michaël Baudin
