<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Dear co-scilabers,<br>
<br>
</p>
<p>The non-linear fitting function datafit() can be very useful.
However, it currently has two major drawbacks that hinder it and,
in practice, discourage its actual use. Beyond fixing these, other
improvements are possible.<br>
</p>
<p>We propose here to upgrade datafit() as described below and in
the <a href="http://bugzilla.scilab.org/15344">bug report 15344</a>.
These proposals are backward-compatible (except, possibly, for the
choice discussed in the 5th item):<br>
</p>
<ol>
<li><b>Data weights should be accepted</b>.<br>
Presently, it is not possible to specify data weights: only the
different ways of assessing the model-to-data distance can be
weighted. Yet, in real life, experimental data almost always come
with respective uncertainties. The inverse of these uncertainties
is a good first estimate of suitable data weights. <br>
This SEP proposes to add a new <b>Wz option</b>, to be provided
just after the data points Z:<br>
<tt>datafit([iprint,] G [,DG], <b>Z [,Wz]</b> [,Wg] [,'b',p_inf,p_sup], p0 [,algo] [,stop])</tt><br>
Wz must be a row of size(Z,2) real numbers. The data weights Wz
are taken into account in the cost function to be minimized
in the following way (Wg being the matrix of gap weights,
named W in the current documentation):<br>
<pre>f = (G(p, Z(:,1))'  * Wg * G(p, Z(:,1)))  * Wz(1)  + ..
    (G(p, Z(:,2))'  * Wg * G(p, Z(:,2)))  * Wz(2)  + ..
    ...
    (G(p, Z(:,nz))' * Wg * G(p, Z(:,nz))) * Wz(nz)
</pre>
If only one gap definition is implemented, this cost simplifies
into the usual weighted least squares:<br>
<br>
<pre>f = G(p, Z(:,1))^2  * Wz(1)  + ..
    G(p, Z(:,2))^2  * Wz(2)  + ..
    ...
    G(p, Z(:,nz))^2 * Wz(nz)
</pre>
In the overhauled datafit() help page, the first example
illustrates the gain in fitting accuracy when data are weighted.<br>
<br>
</li>
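To fix ideas, here is a small numerical sketch of the proposed weighted cost for the single-gap case (in Python/NumPy for illustration only; the straight-line gap function and the data below are made up, not part of the proposal):

```python
import numpy as np

# Hypothetical gap for one data point z = (x, y), with a line model y = a*x + b:
def gap(p, z):
    x, y = z
    return y - (p[0] * x + p[1])

p  = np.array([2.0, 1.0])                # trial parameters (a, b)
Z  = np.array([[0.0, 1.0, 2.0, 3.0],     # row 1: x coordinates
               [1.1, 2.9, 5.2, 6.8]])    # row 2: y measurements
Wz = np.array([1.0, 1.0, 4.0, 1.0])      # data weights, e.g. 1/sigma^2

# f = sum over the data points of  G(p, Z(:,i))^2 * Wz(i)
f = sum(gap(p, Z[:, i])**2 * Wz[i] for i in range(Z.shape[1]))

# The same cost, computed in one vectorized pass over the data points:
residuals = Z[1, :] - (p[0] * Z[0, :] + p[1])
f_vec = np.sum(residuals**2 * Wz)

print(f, f_vec)
```

Both expressions give the same cost; the third point, weighted 4 times more, dominates the sum as expected.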
<li><b>datafit() should be vectorized over the vector of data
points</b>.<br>
Presently, if the gap function G(p,Z) computing the
model-to-data-point distance(s) is vectorized over the data
points Z, i.e. if it can compute the distances to all data
points in one single fast call, datafit() is unable to use this
feature: it calls G() as many times as there are data points,
looping explicitly over the points in the Scilab language. This
clearly makes datafit() (very) slow. We propose here to make
datafit() able</li>
<ol>
<li>to detect whether G(p,Z) is vectorized or not</li>
<li>to call G(p,Z) only once when G(p,Z) is vectorized</li>
<li>to still loop over the Z(:,i) points when G(p,Z) is not
vectorized (backward compatibility)<br>
<br>
The gap function G(p,Z) must then return an (ng x nz) matrix
(instead of (ng x 1)), where nz = size(Z,2) is the number of
data points.<br>
A first test with 200 data points showed that datafit() is
sped up by a factor of ~100 (from &gt;50 s down to 0.5 s).<br>
This kind of acceleration makes the function usable for actual
work.<br>
<br>
</li>
</ol>
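The vectorization contract above can be sketched as follows (illustrative Python/NumPy with a made-up exponential gap function; the names G_one and G_vec are hypothetical): a vectorized G(p,Z) called once must return the same (ng x nz) matrix as the current per-point loop produces column by column.

```python
import numpy as np

def G_one(p, z):
    """Non-vectorized gap: one point z = (x, y) -> (ng,) column, here ng = 1."""
    return np.array([z[1] - p[0] * np.exp(p[1] * z[0])])

def G_vec(p, Z):
    """Vectorized gap: all points in one call -> (ng x nz) matrix."""
    return (Z[1, :] - p[0] * np.exp(p[1] * Z[0, :]))[np.newaxis, :]

p = np.array([1.5, 0.3])
x = np.linspace(0, 2, 200)
Z = np.vstack([x, 1.5 * np.exp(0.3 * x) + 0.01])   # 200 synthetic data points

# Current behaviour: one call per data point (slow in an interpreted loop)
looped = np.column_stack([G_one(p, Z[:, i]) for i in range(Z.shape[1])])

# Proposed behaviour: a single call once G is detected as vectorized
once = G_vec(p, Z)

print(looped.shape, once.shape)   # both (1, 200): same contract, 1 call vs 200
```

The two results are identical; only the number of calls to the gap function changes, which is where the ~100x speed-up comes from.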
<li><b>For the "qn" quasi-Newton algorithm, the termination status
should be returned</b>.<br>
Unlike optim(), which datafit() uses internally, datafit() does
not return any termination status about the convergence of the
minimization. This makes it impossible to qualify the result,
while convergence may be poor when the initial parameters are
too far from the actual ones. As a consequence, all datafit()
results are questionable, even excellent ones.<br>
We propose to return the termination <i>status</i> as given
by optim() for the "qn" algorithm, as a third optional output
argument. When another algorithm is used ("gc" or "nd"), <i>status</i>
will be set to %nan instead (optim() returns no actual value
in these cases).<br>
<b><br>
</b></li>
<li><b>It should be possible to specify the initial parameters p0
and their lower and upper bounds p_inf and p_sup as a matrix,
instead of only as a mandatory column</b>.<br>
Indeed, a matrix may sometimes be more readable and handy. For
instance, if the model is the sum of N normal laws, p as<br>
<pre>[mean1   mean2   ..  meanN
 stDev1  stDev2  ..  stDevN
 y1      y2      ..  yN ]
</pre>
is handier than<br>
<tt>[mean1 stDev1 y1 mean2 stDev2 y2 .. meanN stDevN yN]'</tt><br>
notably in the G() gap function, where a loop over the
columns/laws then becomes possible and welcome.<br>
<br>
</li>
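The convenience of the matrix layout can be sketched like this (illustrative Python/NumPy; the model function and the parameter values are made up): with p as a 3 x N matrix, the sum-of-Gaussians model is a simple loop over columns, and the flattened column layout is just a reshape away.

```python
import numpy as np

def model(p, x):
    """Sum of N normal laws; p is 3 x N: row 0 means, row 1 stDevs, row 2 amplitudes."""
    y = np.zeros_like(x, dtype=float)
    for mean, sd, amp in p.T:            # loop over columns = over laws
        y += amp * np.exp(-((x - mean) / sd)**2 / 2)
    return y

p_matrix = np.array([[0.0, 3.0],         # mean1   mean2
                     [1.0, 0.5],         # stDev1  stDev2
                     [2.0, 1.0]])        # y1      y2

# The same parameters as the currently mandatory column vector:
# [mean1 stDev1 y1 mean2 stDev2 y2]'
p_column = p_matrix.T.reshape(-1)

x = np.linspace(-2, 5, 50)
y_from_matrix = model(p_matrix, x)
y_from_column = model(p_column.reshape(-1, 3).T, x)   # unflatten back to 3 x N
```

Both layouts carry exactly the same information; the matrix form simply spares the user (and the gap function) the index bookkeeping of the flattened column.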
<li><b>The overall least-squares "error" fmin, the value of the
minimized cost function, should be replaced with the average
model-to-data-points distance.<br>
</b>Presently, the returned 2nd output argument "err" = fmin is
the value of the minimized cost function f. This raw,
unnormalized algorithmic output is neither very explanatory nor
practical. It measures the model-to-data distance only in a
quite indirect way:</li>
<ul>
<li>if we double (clone) each data point, err is doubled<br>
</li>
<li>if we double (clone) the gap criteria used, err is doubled,<br>
<br>
although in both cases the average model-to-data distance is
the same.<br>
The true average minimal model-to-data distance dmin actually
reached can be computed as <code class="literal">sqrt(fmin/ng/nz)</code>
or <code class="literal">sqrt(fmin/ng/sum(Wz))</code>, where
ng is the number of gap criteria, nz the number of unweighted
data points, and Wz the vector of data weights. Since it is
normalized against ng and nz or Wz, this output is more
relevant and handier to qualify the performed fitting and to
compare several fittings. To provide it when datafit()
returns, two ways are possible:<br>
<br>
</li>
<li><b>Either dmin replaces err = fmin as the 2nd output argument</b>.
This is what we propose here, although it is not a
backward-compatible evolution. We think that, because datafit()
is presently very slow and quite poor, and its current err not
handy, it has rarely been used so far. The impact of this
incompatibility on existing user code should then be light,
much smaller than the gain we would get from it.<br>
<br>
</li>
<li><b>Or dmin could be added as a new third output</b> (and
<i>status</i> as a fourth output).<br>
<br>
</li>
</ul>
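The normalization argument above can be checked numerically (illustrative Python/NumPy; the minimal-gap values are made up): cloning every data point doubles fmin, but dmin = sqrt(fmin/ng/nz) stays unchanged.

```python
import numpy as np

ng   = 1                                    # number of gap criteria
gaps = np.array([0.1, -0.2, 0.15, 0.05])    # hypothetical minimal gaps G(p*, Z(:,i))
nz   = gaps.size

fmin = np.sum(gaps**2)                      # current 2nd output: raw cost value
dmin = np.sqrt(fmin / ng / nz)              # proposed: average model-to-data distance

# Clone each data point: fmin doubles, dmin does not change
gaps2 = np.concatenate([gaps, gaps])
fmin2 = np.sum(gaps2**2)
dmin2 = np.sqrt(fmin2 / ng / gaps2.size)

# Weighted variant: dmin = sqrt(fmin / ng / sum(Wz))
Wz     = np.array([1.0, 2.0, 1.0, 4.0])
fmin_w = np.sum(gaps**2 * Wz)
dmin_w = np.sqrt(fmin_w / ng / Wz.sum())

print(fmin, fmin2, dmin, dmin2)
```

This is why dmin is directly comparable across fits with different numbers of points, criteria, or weights, while the raw fmin is not.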
<li>Finally, <b>the help page should be overhauled</b> (<a
href="http://bugzilla.scilab.org/7732">bug 7732</a> and more).<br>
<br>
</li>
</ol>
The overhauled help page for this upgrade is available <a
href="https://codereview.scilab.org/cat/19661%2C2%2Cscilab/modules/optimization/help/en_US/nonlinearleastsquares/datafit.pdf%5E0">here</a>.<br>
This upgrade targets Scilab 6.1.0 (~2020?).<br>
<br>
Do you need or already use datafit()?<br>
<br>
We hope to read your comments, notably about the choice for the
5th item.<br>
<br>
Best regards<br>
Samuel Gougeon<br>
<br>
</body>
</html>