[Scilab-users] Linear Least-Squares Fitting

Heinz Nabielek heinznabielek at me.com
Wed Jan 5 15:49:46 CET 2022


Scilab can easily do linear least-squares fitting by "inverting" a rectangular matrix with the backslash operator (a least-squares solve), like so:

np = 100; noise = 0.7*(rand(np,1)-0.5);
x = linspace(0,2,np)';
yexact = x.^2 + x;  		// representing the underlying correlation
ynoise = yexact + noise;  // representing the measured data
M = [x x.^2];		// rectangular design matrix for y = a*x + b*x^2
Fitted_Constants = M\ynoise // least-squares solve, here yielding 0.9665752 and 1.0254576
plot(x,yexact,'k-'); plot(x,ynoise,'r.'); plot(x,M*Fitted_Constants,'g+','Markersize',15);xgrid;
xtitle('Linear Least Squares Fitting','x-values','y-values');
legend("Underlying Correlation", "Original Measured Statistical Data", "Fitted Data by Matrix Inversion", 2);

And it is fast, precise, and efficient. And, again without any support functions, I can add a 95% confidence band around the fit.
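For reference, here is a minimal sketch of such a band, using only the core function cdft for the Student t quantile (the variable names r, dof, s2, se are just illustrative):

r = ynoise - M*Fitted_Constants;	// residuals of the fit
dof = np - size(M,2);			// degrees of freedom
s2 = sum(r.^2)/dof;			// residual variance estimate
se = sqrt(diag(M*(s2*inv(M'*M))*M'));	// pointwise std. error of the fitted curve
t = cdft("T", dof, 0.975, 0.025);	// two-sided 95% t quantile
plot(x, M*Fitted_Constants + t*se, 'g--'); plot(x, M*Fitted_Constants - t*se, 'g--');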

Why do so many Scilab tutorials obfuscate the issue by dragging in all sorts of helper functions?

E.g., why on earth would I want polyfit and horner <http://www.openeering.com/sites/default/files/Polynomial_Interpolation.pdf>? See below.
Heinz


np = 100; noise = 0.7*(rand(np,1)-0.5);
x = linspace(0,2,np)';
yexact = x.^2 + x;
ynoise = yexact + noise;
// degree 1 approximation
p1 = polyfit(x, ynoise, 1);
p1val = horner(p1,x);
scf(10); clf(10);
plot(x,yexact,'k-'); plot(x,ynoise,'b-'); plot(x,p1val,'r-');
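As far as I can tell, polyfit is not even a core Scilab function; the linked tutorial has to define it first. The same degree-1 fit falls out of the backslash operator directly, e.g. (a sketch; A and c are arbitrary names):

A = [ones(np,1) x];	// design matrix for c0 + c1*x
c = A\ynoise;		// least-squares coefficients [c0; c1]
plot(x, A*c, 'g--');	// same curve as horner(p1,x), up to rounding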