[Scilab-Dev] xcos blocks showing dynamic parameters

Len Remmerswaal lremmerswaal at revolutioncontrols.com
Thu Nov 29 14:48:39 CET 2012


Thanks for mentioning "displayedLabel": that keyword allowed me to find the right stuff.

For those interested in a longer answer:

If you create a block on a palette using xcosPalAddBlock, you can give a "style" argument. If you want to create anything permanent (as in lasting beyond the current Scilab session), you must supply it. There are three options for "style":
- It can be a string containing the path to an image file, which is then projected onto a standard (empty) block.
- Alternatively, it can be a struct whose field names are exactly the JGraphX style keys listed on the xcosPalAddBlock help page. This struct is converted to a string before being passed on.
- Lastly, you can provide that string yourself: a string of <key>=<value> items separated by semicolons, like this:
	"noLabel=0;displayedLabel=Your text here;align=center;"
Note the final semicolon. Note also that a semicolon inside any value is prohibited.
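
To make the second and third options concrete, here is a small sketch (untested; the key names are taken from the list above) of building the same style as a struct and as the equivalent string:

```scilab
// Struct form: field names are exactly the JGraphX style keys.
style = struct("noLabel", "0", ..
               "displayedLabel", "Your text here", ..
               "align", "center");

// Equivalent string form, as it is passed on internally:
styleStr = "noLabel=0;displayedLabel=Your text here;align=center;";
```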

The style is applied to the familiar basic block, with a rounded rectangle and a nice gradient. 
If you want to write a text label on the block you need at least these styles:

noLabel=0;
This makes your label visible.

verticalLabelPosition=middle;
This puts your label inside the box instead of under it (peculiar default!)

displayedLabel=Your text here;
Or you will just be looking at the name of your defining interface function.
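
Putting the three keys together, a minimal registration might look like this (a sketch: the palette name and block name "MYBLOCK" are placeholders, and the exact argument positions of xcosPalAddBlock should be checked against its help page for your Scilab version):

```scilab
pal = xcosPal("My palette");
// The three keys needed for a visible, in-box, custom label:
style = "noLabel=0;verticalLabelPosition=middle;displayedLabel=My block;";
pal = xcosPalAddBlock(pal, "MYBLOCK", [], [], style);
xcosPalAdd(pal);
```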

The text for displayedLabel has a few nice options:
- It is embedded in <html><body>...</body></html>, so you can put some HTML codes in there.
  One of them is <BR>, if you want a multi-line label ("\n" will not work).
  You can also play with font sizes and colors.
  You can even put in a small table.
  I am not sure how complete the HTML interpretation is.
  And there cannot be a literal semicolon anywhere in your text!
- In the label text you can refer to the values that your interface function stored in the block.graphics.exprs list. The syntax is %n$s where n is an index into exprs.
Like this:
   displayedLabel=Game: %1$s - %2$s<BR>Score:<BR>Home: %5$s<BR>Guests: %6$s;
   where graphics.exprs might be ["BallTown", "BatCity", "0", "0", "2", "1"];
   which would produce:
Game: BallTown - BatCity
Score:
Home: 2
Guests: 1
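
The example above can be assembled in the interface function like this (a sketch; the variable names are illustrative, and entry n of graphics.exprs is referenced as %n$s):

```scilab
// Values stored by the interface function in block.graphics.exprs:
exprs = ["BallTown"; "BatCity"; "0"; "0"; "2"; "1"];

// Style string producing the label shown above. Note: no stray
// semicolons inside the values, and <BR> for line breaks.
style = "noLabel=0;verticalLabelPosition=middle;" + ..
        "displayedLabel=Game: %1$s - %2$s<BR>Score:<BR>Home: %5$s<BR>Guests: %6$s;";
```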

Upon instantiation into a diagram, the block is sized to fit around the label. If you change values later (using the 'define' job of the interface function), it does not automatically resize to fit the new label.

Hope this is useful for anyone.
Cheers,
Len.

-----Original Message-----
Date: Tue, 27 Nov 2012 08:50:59 +0100
From: Clément David <clement.david at scilab-enterprises.com>
To: List dedicated to development questions <dev at lists.scilab.org>
Subject: Re: [Scilab-Dev] xcos blocks showing dynamic parameters
Message-ID: <1354002659.1986.5.camel at paros>
Content-Type: text/plain; charset="UTF-8"

Hello,

In Xcos, we use images or text rendering to display the block icons.
These settings can be modified using CSS-like key-value properties on xcosPalAddBlock. Take a look at the style argument.

Basically, to display the second exprs text on the block, use style="displayedLabel=%2$s". An example is provided in SCI/contrib/xcos_toolbox_skeleton.

On Monday 26 November 2012 at 16:12 +0100, Len Remmerswaal wrote:
> Hi all,
> I used to have scicos blocks that were able to show some parameters on their instantiations. This was how:
> - In the "define" case of the interface function: in the gr_i string to be displayed: reference some variable names.
> - In the "set" case of the interface function: assign new values to these variables and store them in the arg1.graphics.exprs list.
> - In the "plot" case of the interface function: unpack the same variables from arg1.graphics.exprs and call standard_draw.
>  
> Worked like a charm in scicos 4.3. Stopped working in Scilab 5.3.3.
>  
> I do see blocks like CONST_m displaying their parameter and changing them dynamically after assigning new parameter values. What I do not see is how it is done: I do not see, after the dialog is dismissed in scicos_getvalues, where the parameter gets to be displayed on the xcos diagram.
> Can anyone point me in the right direction?
> Thanks,
> Len Remmerswaal.
>  
> _______________________________________________
> dev mailing list
> dev at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/dev

--
Clément DAVID
Development Engineer / Account Manager
-----------------------------------------------------------
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Mobile: +33.6.26.26.51.90
Phone: +33.2.90.22.78.96
http://www.scilab-enterprises.com



------------------------------

Message: 3
Date: Tue, 27 Nov 2012 09:10:27 +0100
From: michael.baudin at contrib.scilab.org
To: <dev at lists.scilab.org>
Subject: Re: [Scilab-Dev] eigs woes
Message-ID: <1ed21a42ab0a3bea9a3898e13a875e09 at contrib.scilab.org>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hi,

I think that you are discussing the code :

         if(~isreal(AMSB))
             Lup = umf_lufact(AMSB);
             [L, U, p, q, R] = umf_luget(Lup);
             R = diag(R);
             P = zeros(nA, nA);
             Q = zeros(nA, nA);
             for i = 1:nA
                 P(i,p(i)) = 1;
                 Q(q(i),i) = 1;
             end
             umf_ludel(Lup);
         else
             [hand, rk] = lufact(AMSB);
             [P, L, U, Q] = luget(hand);
             ludel(hand);
         end

extracted from eigs.

The lufact function is designed to be used with lusolve.
The luget function was designed essentially for debugging purposes,
or to communicate with other algorithms.
In general, it should not be used to compute the solution.

The other problem is the for loop, which should be avoided: with
large sparse matrices, it will fail for performance reasons.
I'm not sure, but this loop should be vectorisable quite easily.
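
For what it's worth, the loop does look vectorisable; here is a sketch (untested) that builds P and Q directly as sparse permutation matrices from the index vectors p and q, instead of filling dense nA-by-nA matrices entry by entry:

```scilab
// P(i, p(i)) = 1 and Q(q(i), i) = 1, without a loop:
P = sparse([(1:nA)' p(:)], ones(nA, 1), [nA, nA]);
Q = sparse([q(:) (1:nA)'], ones(nA, 1), [nA, nA]);
```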

In general, the umf functions are much faster, as shown by B. Pinçon.
But there might be another technical reason here that I do not see.

Best regards,

Michaël

On 2012-11-26 05:53, Guillaume Horel wrote:
> This is a rather long email about my fight with the eigs function in
> scilab. It might be better suited for a bug report, but I wanted to
> try out this list first.
>
> It's a boundary problem for the helmholtz equation:
>  http://bpaste.net/show/60377/ [1]
>
> On scilab-5.4.0, the code fails with the following error message:
>
> eigs: Impossible to invert complex sparse matrix.
> at line 333 of function speigs called by :
>  at line 112 of function eigs called by :
> [D V] = eigs(A, [], M2,'SM');
> at line 54 of exec file called by :
> exec('/home/guillaume/test-eigs.sce', -1)
>
> In the code of the eigs function, it turns out that there is a test
> to check if the factors of the LU decomposition of A-sigma I are
> complex (which is the majority of cases if you start from a complex
> matrix), and the code fails with this error. Is there any reason to
> have such a test in there?
>  I also realized that the code was computing the inverse by actually
> calling inv(L) and inv(U), which completely defeats the purpose of
> doing an LU decomposition. Long story short, the following patch fixes
> the two aforementioned issues: http://bpaste.net/show/60375/ [2]
>
> I still have two questions:
> - I'm still unsure about why the code needs two different LU solvers:
> lufact and umf_lufact. Unless lufact is really faster than umf_lufact
> for real numbers, I think that just using umf_lufact should be enough
> and would further simplify the code.
>  - After applying this patch, Scilab computes my eigenfunctions fine.
> However, the performance is pretty disappointing. On my laptop, Scilab
> takes around 15s to compute 100 eigenvalues, but on the same machine
> Octave takes less than 3s. I checked the Octave eigs function and it's
> all written in C. However, I would think the computation time is
> dominated by the calls to the various ARPACK functions and LU solvers
> rather than all the plumbing, but maybe I'm wrong... Any thoughts?
>
> Thanks for your insights,
> Guillaume
>
>
> Links:
> ------
> [1] http://bpaste.net/show/60377/
> [2] http://bpaste.net/show/60375/
>
> _______________________________________________
> dev mailing list
> dev at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/dev



------------------------------

Message: 4
Date: Tue, 27 Nov 2012 09:32:45 +0100
From: michael.baudin at contrib.scilab.org
To: <dev at lists.scilab.org>
Subject: Re: [Scilab-Dev] Fwd: Optimization
Message-ID: <8ab94e8b073ee2ef4f60622d71f6fed3 at contrib.scilab.org>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hi,

One particular point that you may not be aware of is that
the unconstrained and the constrained algorithms are different.
There is not one single, constrained algorithm which is used either
with bounds or with infinite bounds: these are two different
routines.
This is why setting infinite bounds such as [-inf,inf] may not have any
numerical interest: the unconstrained version should be used in this case.
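
For reference, a sketch of the two call forms being compared (costf and x0 stand for whatever cost function and starting point you use; the "b" constraint argument is what selects the bound-constrained routine):

```scilab
// Unconstrained routine:
[fopt, xopt] = optim(costf, x0, "gc");

// Bound-constrained routine, selected even when the bounds are infinite:
[fopt, xopt] = optim(costf, "b", -%inf*ones(x0), %inf*ones(x0), x0, "gc");
```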

In your message, you state that "all components are strictly in [0,1]",
but I can see that 1.04 > 1, so if the upper bound is set to 1, this
bound will be active.
On the other hand, if the bounds were not active (but they are), the
difference between the unconstrained and the constrained algorithms
should be insignificant.

So, my guess is that the LBFGSB0inf algorithm has a nonzero
gradient because some constraint is active, which prevents the
algorithm from converging to an unconstrained minimum:
it has to respect some bound.

My own experience with the "gc" algorithm is that there are very
few cases in which it outperforms the default "qn" algorithm.
More precisely, the "gc" algorithm is designed to manage large problems,
but I was not able to create a problem where "qn" fails because of a
lack of memory and where "gc" succeeds.
I guess that this is because the implementation in Scilab uses the
maximum available memory to compute the number of vectors in the
L-BFGS algorithm.
In practice, if "qn" fails because it does not have enough memory,
"gc" also fails, either because it requires more memory than Scilab can
give, or because it does not converge at all.

Best regards,

Michaël

PS: A document which might be of some interest to you is "Optimization
in Scilab", where chapter 2 covers nonlinear optimization:

http://forge.scilab.org/index.php/p/docoptimscilab/downloads/



On 2012-11-16 00:34, Jean-Pierre Dussault wrote:
> Reposted with a PDF figure instead of the too-big SVG
>
>  JPD
>
>  -------- Original Message --------
>
>  		SUBJECT:
>  		Optimization
>
>  		DATE:
>  		Thu, 15 Nov 2012 18:27:47 -0500
>
>  		FROM:
>  		Jean-Pierre Dussault <Jean-Pierre.Dussault at Usherbrooke.CA>
>
>  		TO:
>  		dev at lists.scilab.org
>
>  Hi all,
>
>  I am preparing examples for an optimization course for students in
> image science. I use an example from
> 
> http://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/optim_1_gradient_descent/
> [1] to promote the use of better algorithms than the simple gradient
> descent.
>
>  I attach the convergence plot of the norm of the gradient for 5
> variants of the optim command: gc unconstrained, gc with bounds
> [-%inf,%inf], gc with bounds [0,1], gc with bounds [0,%inf] and nd. I
> also include the gradient descent.
>
>  Except for the [0,%inf] variant, the solution has all components
> strictly in [0,1] as displayed here:
>
>> 
>> -->[max(xoptS),max(xoptGC),max(xoptGCB),max(xoptGCBinf),max(xoptGCB0inf),max(xoptND)]
>> ans =
>>
>> 0.9249840 0.9211455 0.9216067 0.9213056 1.0402906 0.9212348
>>
>> 
>> -->[min(xoptS),min(xoptGC),min(xoptGCB),min(xoptGCBinf),min(xoptGCB0inf),min(xoptND)]
>> ans =
>>
>> 0.0671743 0.0718204 0.0678885 0.0714951 0.0772300 0.0714255
>  On the convergence plot, we clearly see that the gradient norm of
> the gc with [0,1] bounds stalls away from zero, while with no bounds
> or infinite bounds it converges to zero. This is even more severe for
> the variant with bounds [0,%inf], which no longer approaches the
> solution, making virtually no progress at all after some 30 function
> evaluations.
>
>  Is it a Scilab bug or a bad example for the gcbd underlying routine?
> The cost function is strongly convex of dimension 65536. Has someone
> experienced a similar behavior?
>
>  This is unfortunate since I wish to convince my students to use
> suitably constrained models instead of enforcing constraints
> afterward.
>
>  Thanks for any suggestion to work around this troublesome situation.
>
>  JPD
>
>
>
> Links:
> ------
> [1]
> 
> http://www.ceremade.dauphine.fr/%7Epeyre/numerical-tour/tours/optim_1_gradient_descent/
>
> _______________________________________________
> dev mailing list
> dev at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/dev


------------------------------

_______________________________________________
dev mailing list
dev at lists.scilab.org
http://lists.scilab.org/mailman/listinfo/dev


End of dev Digest, Vol 4, Issue 14
**********************************




