[Scilab-users] Large variables and execution speeds

Tim Wescott tim at wescottdesign.com
Tue Feb 24 22:38:40 CET 2015


I'm working on an algorithm that involves large data sets, which I'm
currently representing as tlists.  Due to the constraints of the
algorithm, I'm making many calls that are more or less of the form:

my_tlist = some_function(my_tlist);

The intent is to get the same effect that I would get if I were in C or
C++, and wrote:

some_function(& my_structure);

or

my_class.some_function();
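
For concreteness, here's a minimal Scilab sketch of the pattern I mean
(the type name, field name, and sizes are made up):

// Hypothetical tlist holding one large field, standing in for the real data.
big = tlist(["my_type", "samples"], zeros(1, 1e7));

function dout = some_function(din)
    // Update one field of the tlist and hand the whole thing back.
    din.samples(1) = din.samples(1) + 1;
    dout = din;
endfunction

big = some_function(big);   // the round trip I'm worried about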

It appears, from the significant loss of execution speed when I do this,
that Scilab is making a full copy of the function's result back into the
"my_tlist" variable on every call.
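
A rough way to check is to time the round trip for two different data
sizes; if the cost scales with the size of the tlist, the time is going
into copying rather than into the update itself.  Using the sketch above
(timings are only indicative):

small = tlist(["my_type", "samples"], zeros(1, 1e3));
large = tlist(["my_type", "samples"], zeros(1, 1e7));

tic();
for i = 1:100, small = some_function(small); end
t_small = toc();

tic();
for i = 1:100, large = some_function(large); end
t_large = toc();

mprintf("small: %f s   large: %f s\n", t_small, t_large);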

At this writing, the only way that I can see to fix this is to invoke
the function as:

some_function("my_tlist");

and then, wherever I modify data, use execstr, i.e., replace

local_tlist.some_field = stuff;

with

execstr(msprintf("%s.some_field = stuff", local_tlist_name));
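
Spelled out at the top level, where execstr and the variable share a
scope, the workaround looks something like this (names again hypothetical):

my_tlist = tlist(["my_type", "some_field"], 0);

local_tlist_name = "my_tlist";   // the name passed around as a string
stuff = 42;

// Build the assignment as a string and execute it in the current scope.
execstr(msprintf("%s.some_field = stuff", local_tlist_name));

disp(my_tlist.some_field)   // 42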

This seems clunky in the extreme.

Is there another way to do something like this that doesn't force Scilab
to copy large chunks of data needlessly, but allows me to operate on
multiple copies of similar tlists?

Thanks.

-- 

Tim Wescott
www.wescottdesign.com
Control & Communications systems, circuit & software design.
Phone: 503.631.7815
Cell:  503.349.8432



