[Scilab-users] Ways to speed up simple things in Scilab ?

Antoine Monmayrant antoine.monmayrant at laas.fr
Fri Apr 24 13:22:24 CEST 2015


On 04/24/2015 10:20 AM, Stéphane Mottelet wrote:
> Hello,
>
> Ok, I will try dynamic link since I want the project to stay within the 
> Scilab world, but another alternative is Julia 
> (http://julialang.org/). I made some tests last night and got a 
> speedup of 25; the explanation is JIT compilation. Maybe we will have 
> JIT compilation in Scilab 6?
Hi Stéphane,

JIT is not the only explanation for the speedup.
(I also did some Scilab -> Julia conversion earlier this year to see how 
much we could gain.)
It seems to me that your code is not particularly efficient for Scilab, 
while it might be a bit less inefficient in Julia.
For example:
  - you are indexing a lot, which is not particularly fast in Scilab.
  - you are re-indexing the same elements several times: 
...,v(17),v(104),v(149),...,v(17),v(104),v(149),... -> 
part_v=[v(17);v(104);v(149)]; ...,part_v,...,part_v,... could be faster.
  - you might need to pre-allocate M1_v: 
M1_v=zeros(whatever_size_it_is), so that Scilab won't have to re-allocate 
it while you make it grow by stacking scalar after scalar.
  - more generally, your code seems highly redundant; couldn't you 
save some time by reusing some precalculated vector bits, like 
[v(17);v(104);v(149)]?
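
For instance, a rough sketch combining these ideas (the length and the 
index values below are just placeholders, not your real pattern):

    n = 839;                            // hypothetical length of M1_v
    part_v = [v(17); v(104); v(149)];   // computed once, reused below
    s = -(v(18)+v(63)+v(103));          // common subexpression, computed once
    M1_v = zeros(n, 1);                 // pre-allocated, no incremental growth
    M1_v(1:3) = part_v;
    M1_v(4:5) = s;
    // ... remaining entries filled by indexed assignment ...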

Hope it helps,

Antoine
>
> Best regards,
>
> S.
>
> On 24/04/2015 09:30, aweeks at hidglobal.com wrote:
>> Hello Stephane,
>>
>> We have a Scilab program which performs a numerical integration on 
>> data points in 3 dimensions - it has two nested loops.  When the 
>> number of data points was large this was slow, so we implemented the 
>> calculation function in C and got a speed improvement of about 24 
>> times!
>>
>> We also found three other improvements:
>>
>>         using pointer arithmetic was faster than 'for' loops,
>>         'pow(x, 2)' was faster than x*x,
>>         handling the data as three (N x 1) vectors was faster than 
>> using one (N x 3) matrix.
>>
>> Each of these gave something like a 3-4% improvement - small 
>> compared to x24, but still worth having.
>>
>> If you don't mind tackling the dynamic linking, it's probably worth 
>> the effort if you'll use this program a few times - good luck.
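>>
>> For example, the compile-and-link step could look roughly like this 
>> (an untested sketch: the file and routine names are made up, and the 
>> exact 'call' argument list depends on your C signature - see 'help 
>> ilib_for_link'):
>>
>>         // build and link a C routine from within Scilab
>>         ilib_for_link('my_integral', 'my_integral.c', [], 'c');
>>         exec loader.sce;
>>         // then invoke it through the 'call' interface, e.g.
>>         // y = call('my_integral', x, 1, 'd', 'out', [n,1], 2, 'd');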
>>
>> Adrian.
>>
>> Adrian Weeks
>> Development Engineer, Hardware Engineering EMEA
>> Office: +44 (0)2920 528500 | Desk: +44 (0)2920 528523 | Fax: +44 
>> (0)2920 520178
>> aweeks at hidglobal.com
>> Unit 3, Cae Gwyrdd,
>> Green Meadow Springs,
>> Cardiff, UK,
>> CF15 7AB.
>> www.hidglobal.com
>>
>>
>>
>>
>>
>> From: 	Stéphane Mottelet <stephane.mottelet at utc.fr>
>> To: 	"International users mailing list for Scilab." 
>> <users at lists.scilab.org>
>> Date: 	23/04/2015 22:52
>> Subject: 	[Scilab-users] Ways to speed up simple things in Scilab ?
>> Sent by: 	"users" <users-bounces at lists.scilab.org>
>>
>> Hello,
>>
>> I am currently working on a project where Scilab code is automatically
>> generated, and after many code optimizations, the remaining bottleneck is
>> the time that Scilab spends executing simple code like this (the full
>> script, where the vector has 839 lines, is attached with timings):
>>
>> M1_v=[v(17)
>> v(104)
>> v(149)
>> -(v(18)+v(63)+v(103))
>> -(v(18)+v(63)+v(103))
>> v(17)
>> ...
>> v(104)
>> v(149)
>> ]
>>
>> Such large vectors are then used to build a sparse matrix each
>> time the vector v changes, but with a constant sparsity pattern.
>> Actually, the time spent by Scilab in the statement
>>
>> M1=sparse(M1_ij,M1_v,[n1,n2])
>>
>> is negligible compared to the time spent building M1_v...
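>>
>> For example, since every entry of M1_v is a signed sum of entries of v
>> with a fixed pattern, that pattern could be encoded once in a constant
>> sparse matrix A, giving M1_v at each update as a single product (a
>> sketch with the hypothetical pattern of the first five entries above):
>>
>>         ij = [1 17; 2 104; 3 149; 4 18; 4 63; 4 103; 5 18; 5 63; 5 103];
>>         s  = [1; 1; 1; -1; -1; -1; -1; -1; -1];
>>         A  = sparse(ij, s, [n1, 839]);   // built once
>>         M1_v = full(A*v);                // each time v changes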
>>
>> I have also noticed that if you need to define such a matrix with more
>> than one column, the time elapsed is not linear with respect to the
>> number of columns: typically 4 times slower for 2 columns. In fact the
>> statement
>>
>> v=[1 1
>> ...
>> 1000 1000]
>>
>> is even two times slower than
>>
>> v1=[1
>> ...
>> 1000];
>> v2=[1
>> ...
>> 1000];
>> v=[v1 v2];
>>
>> So my question to users who have experience with dynamic linking of user
>> code: do you think that dynamically linking compiled, generated C code
>> could improve the timings?
>>
>> In advance, thanks for your help !
>>
>> S.
>>
>>
>> [attachment "test.sce" deleted by Adrian Weeks/CWL/EU/ITG] 
>> _______________________________________________
>> users mailing list
>> users at lists.scilab.org
>> http://lists.scilab.org/mailman/listinfo/users
>
>
> -- 
> Département de Génie Informatique
> EA 4297 Transformations Intégrées de la Matière Renouvelable
> Université de Technologie de Compiègne -  CS 60319
> 60203 Compiègne cedex