From clement.david at scilab-enterprises.com Mon Feb 2 09:13:32 2015
From: clement.david at scilab-enterprises.com (=?ISO-8859-1?Q?Cl=E9ment?= David)
Date: Mon, 02 Feb 2015 09:13:32 +0100
Subject: [Scilab-users] [Fwd: [Swig-devel] Announce - swig-3.0.5] SWIG now supports Scilab
Message-ID: <1422864812.2208.5.camel@scilab-enterprises.com>

Hello all,

For people interested in mapping C / C++ functions to Scilab, SWIG can now be used to generate the Scilab gateways. Do not hesitate to test and report bugs!

--
Clément

-------------- next part --------------
An embedded message was scrubbed...
From: William S Fulton
Subject: [Swig-devel] Announce - swig-3.0.5
Date: Sun, 1 Feb 2015 01:06:11 +0000
Size: 5951
URL:

From posti85 at o2.pl Tue Feb 3 19:35:02 2015
From: posti85 at o2.pl (=?UTF-8?Q?Pawe=C5=82_Postek?=)
Date: Tue, 03 Feb 2015 19:35:02 +0100
Subject: [Scilab-users] Help about Scilab
Message-ID: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl>

Hello.

My name is Pawel and I have a big problem. I have code in Matlab which uses the nchoosek function:

for n = 5
    for m = (1:n)
        c = nchoosek(n, m)
    end
end

I need to convert this code to Scilab. I'm trying to do this, but I fail. Could anyone help me?

Thank you

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From serge.steer at inria.fr Wed Feb 4 09:37:58 2015
From: serge.steer at inria.fr (Serge Steer)
Date: Wed, 4 Feb 2015 09:37:58 +0100 (CET)
Subject: [Scilab-users] Help about Scilab
In-Reply-To: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl>
References: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl>
Message-ID: <1803653643.37708417.1423039078280.JavaMail.zimbra@inria.fr>

Please see http://svn.forge.scilab.org/docintrodiscrprobas/en_US/introdiscreteprobas/scripts/nchoosek.sci

Serge Steer

----- Original Message -----
> From: "Paweł Postek"
> To: users at lists.scilab.org
> Sent: Tuesday 3 February 2015 19:35:02
> Subject: [Scilab-users] Help about Scilab

> Hello.
> My name is Pawel and I have a big problem. I have code in Matlab which uses the nchoosek function.
> for n = 5
>     for m = (1:n)
>         c = nchoosek(n, m)
>     end
> end
> I need to convert this code to Scilab.
> I'm trying to do this, but I fail.
> Could anyone help me?
> Thank you
> _______________________________________________
> users mailing list
> users at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Christophe.Dang at sidel.com Wed Feb 4 09:48:57 2015
From: Christophe.Dang at sidel.com (Dang, Christophe)
Date: Wed, 4 Feb 2015 08:48:57 +0000
Subject: [Scilab-users] Help about Scilab
In-Reply-To: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl>
References: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl>
Message-ID:

Hello,

> From: Pawel Postek
> Sent: Tuesday 3 February 2015 19:35
>
> c = nchoosek (n,m)
> I need to convert this code to Scilab.
> I'm trying to do this, but I fail.

The Mathworks page tells me that nchoosek() computes the binomial coefficient.

I suggest you read /Introduction to discrete probabilities in Scilab/, http://www.scilab.org/content/download/248/1706/file/introdiscreteprobas.pdf

Section 2.10, "Computing combinations with Scilab" (p. 24-25), says "There is no Scilab function to compute the binomial number" and gives the following solution:

// combinations --
//   Returns the number of combinations of j objects chosen from n objects.
function c = combinations(n, j)
    c = exp(gammaln(n+1) - gammaln(j+1) - gammaln(n-j+1));
    // If the inputs were integers, also return an integer.
    if ( and(round(n)==n) & and(round(j)==j) ) then
        c = round(c)
    end
endfunction
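For reference, a minimal sketch of Pawel's Matlab loop rewritten in Scilab, assuming the combinations() function above has been defined first (the specfun module mentioned later in this thread provides an equivalent specfun_nchoosek):

// Scilab version of the Matlab nchoosek loop, relying on the combinations() function above
n = 5;
for m = 1:n
    c = combinations(n, m)   // binomial coefficient "n choose m", displayed at each iteration
end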
Best regards

--
Christophe Dang Ngoc Chan
Mechanical calculation engineer

This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error), please notify the sender immediately and destroy this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

From pierre-aime.agnel at scilab-enterprises.com Wed Feb 4 12:07:11 2015
From: pierre-aime.agnel at scilab-enterprises.com (=?UTF-8?B?UGllcnJlLUFpbcOpIEFnbmVs?=)
Date: Wed, 04 Feb 2015 12:07:11 +0100
Subject: [Scilab-users] Help about Scilab
In-Reply-To: <1803653643.37708417.1423039078280.JavaMail.zimbra@inria.fr>
References: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl> <1803653643.37708417.1423039078280.JavaMail.zimbra@inria.fr>
Message-ID: <54D1FD5F.8080803@scilab-enterprises.com>

Hello,

You might be interested in installing the "specfun" ATOMS module (atomsInstall('specfun')).

The implementation given by Serge is inside the module and can be called with

n = 5;
m = 0:n;
c = specfun_nchoosek(n, m)

You will obtain all the binomial coefficients in a row vector.

Once loaded, you can check other helpful functions with help("Specfun Toolbox").

Best,

On 04/02/2015 09:37, Serge Steer wrote:
> Please see
> http://svn.forge.scilab.org/docintrodiscrprobas/en_US/introdiscreteprobas/scripts/nchoosek.sci
> Serge Steer
>
> ------------------------------------------------------------------------
>
> From: "Paweł Postek"
> To: users at lists.scilab.org
> Sent: Tuesday 3 February 2015 19:35:02
> Subject: [Scilab-users] Help about Scilab
>
> Hello.
>
> My name is Pawel and I have a big problem. I have code in Matlab which uses the nchoosek function.
>
> for n = 5
>     for m = (1:n)
>         c = nchoosek(n, m)
>     end
> end
>
> I need to convert this code to Scilab.
> I'm trying to do this, but I fail.
> Could anyone help me?
>
> Thank you
>
> _______________________________________________
> users mailing list
> users at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users
>
> _______________________________________________
> users mailing list
> users at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users

--
Pierre-Aimé Agnel
R&D Projects Manager
Phone: +33.1.80.77.04.67
Mobile: +33.6.82.49.35.23
-----------------------------------------------------------
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sgougeon at free.fr Thu Feb 5 01:17:25 2015
From: sgougeon at free.fr (Samuel Gougeon)
Date: Thu, 05 Feb 2015 01:17:25 +0100
Subject: [Scilab-users] Help about Scilab
In-Reply-To: <54D1FD5F.8080803@scilab-enterprises.com>
References: <2bd432a8.6d85e034.54d114d6.d4f6c@o2.pl> <1803653643.37708417.1423039078280.JavaMail.zimbra@inria.fr> <54D1FD5F.8080803@scilab-enterprises.com>
Message-ID: <54D2B695.7030700@free.fr>

Hello,

On 04/02/2015 12:07, Pierre-Aimé Agnel wrote:
> Hello,
>
> You might be interested in installing the "specfun" ATOMS module (atomsInstall('specfun')).
>
> The implementation given by Serge is inside the module and can be called with
> n = 5;
> m = 0:n;
> c = specfun_nchoosek(n, m)
>
> You will obtain all the binomial coefficients in a row vector.
>
> Once loaded, you can check other helpful functions with help("Specfun Toolbox").
>
> Best,

As stated in the long-discussed bug report from which this nchoosek has been designed, this function should be transferred into Scilab itself. It is a common function available on almost all basic pocket calculators. It makes little sense to have to load an external module to get it.

Best regards
Samuel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From benjaminr1800 at gmail.com Thu Feb 5 00:17:43 2015
From: benjaminr1800 at gmail.com (Benjamin Rentschler)
Date: Wed, 4 Feb 2015 17:17:43 -0600
Subject: [Scilab-users] License agreement question
Message-ID:

To Whom it may Concern,

I was reading through the Scilab licensing agreement for version 5.5.1 and I came across the statement below.

"In this respect, the risks associated with loading, using, modifying and/or developing or reproducing the software by the user are brought to the user's attention, given its Free Software status, which may make it complicated to use, with the result that its use is reserved for developers and experienced professionals having in-depth computer knowledge."

Normally I would not consider myself to be a developer or an experienced computer professional with in-depth computer knowledge. However, since I would like to use Scilab to assist me with projects, I was wondering if I fit under this definition as stated in the agreement.

Thanks for your time,
Benjamin

From Christophe.Dang at sidel.com Fri Feb 6 15:09:04 2015
From: Christophe.Dang at sidel.com (Dang, Christophe)
Date: Fri, 6 Feb 2015 14:09:04 +0000
Subject: [Scilab-users] License agreement question
In-Reply-To:
References:
Message-ID:

Hello,

> From: Benjamin Rentschler
> Sent: Thursday 5 February 2015 00:18
>
> [Scilab licensing agreement for version 5.5.1]
> "the risks associated with loading, using, modifying and/or developing or reproducing the
> [...] its use is reserved for developers and experienced professionals [...]."
>
> I was wondering if I fit under this definition as stated in the agreement.

I consider it a disclaimer which means "the Scilab Consortium is not responsible for the quality of your results; don't sue us if you design a car with the help of Scilab and your car crashes (or worse)".

--
Christophe Dang Ngoc Chan
Mechanical calculation engineer

This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error), please notify the sender immediately and destroy this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

From tim at wescottdesign.com Fri Feb 6 19:19:55 2015
From: tim at wescottdesign.com (Tim Wescott)
Date: Fri, 06 Feb 2015 10:19:55 -0800
Subject: [Scilab-users] License agreement question
In-Reply-To:
References:
Message-ID: <1423246795.2533.26.camel@servo>

On Wed, 2015-02-04 at 17:17 -0600, Benjamin Rentschler wrote:
> To Whom it may Concern,
>
> I was reading through the Scilab licensing agreement for version 5.5.1 and I came across the statement below.
> > "In this respect, the risks associated with loading, using, modifying > and/or developing or reproducing the software by the user are brought to > the user's attention, given its Free Software status, which may make it > complicated to use, with the result that its use is reserved for > developers and experienced professionals having in-depth computer > knowledge." > > Normally I would not consider myself to be a developer or an > experienced computer professional with in depth computer knowledge. > However, since I would like to use Scilab to assist me with projects, > I was wondering if I fit under this definition as stated in the > agreement. It reads to me like a legally defensible way of saying "you're on your own, bub". I'm pretty active on the comp.dsp newsgroup on USENET, and there seems to be a small but significant minority of people -- either students or young practitioners -- who cannot seem to distinguish between true knowledge of DSP and what commands to type into Matlab (Scilab, too, roughly in proportion to its installed base compared to Matlab). Some of these people get seriously bent out of shape when you tell them that they need to apply a pencil to paper, and to guide that pencil with deep thought and consideration, rather than just picking the right Matlab command to cough up an answer. That clause reads to me like an answer to those folks. Any tool that you use should be treated as such, and for anything really important you should do the engineering from at least two directions (this is why development for life-critical systems is slow and expensive). So no matter what the tool is, whether it be Scilab, Maxima, or your brain and a pencil, you shouldn't just trust the tool to cough up the right answer: I think that's what they're trying to say. -- Tim Wescott www.wescottdesign.com Control & Communications systems, circuit & software design. Phone: 503.631.7815 Cell: 503.349.8432 From quantparis at numericable.fr Fri Feb 6 20:38:07 2015 From: quantparis at numericable.fr (quantparis at numericable.fr) Date: Fri, 6 Feb 2015 20:38:07 +0100 (CET) Subject: [Scilab-users] atoms install doesn't work Message-ID: Hello I am trying to install some modules with atomsInstall and it hasn't worked when I type and atomsSystemUpdate I get the same message as for the module (cf. below) work on ubuntu 14.04,scilab 5.6 thank you in advance for help,suggestions pascal -->atomsSystemUpdate() atomsDownload: The following file hasn't been downloaded: ?????? ??- URL?????????? : 'http://atoms.scilab.org/5.6/TOOLBOXES/64/linux.gz' ?????? ??- Local location : '/tmp/SCI_TMP_3832_fAs08t/.atoms/1_TOOLBOXES.gz' WARNING: --2015-02-06 20:34:05--?? http://atoms.scilab.org/5.6/TOOLBOXES/64/linux.gz ???????????????? Resolving atoms.scilab.org (atoms.scilab.org)... 193.51.192.153 ???????????????? Connecting to atoms.scilab.org (atoms.scilab.org)|193.51.192.153|:80... connected. ???????????????? HTTP request sent, awaiting response... 404 Not Found ???????????????? 2015-02-06 20:34:06 ERROR 404: Not Found. Scanning repository http://atoms.scilab.org/5.6 ... Skipped ??!--error 10000 All ATOMS repositories scan failed. at line???????? 198 of function atomsDESCRIPTIONget called by :?? at line?????????? 16 of function atomsSystemUpdate called by :?? atomsSystemUpdate() ?? ?? -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From communication at scilab-enterprises.com Tue Feb 10 10:42:12 2015
From: communication at scilab-enterprises.com (Scilab Communications)
Date: Tue, 10 Feb 2015 10:42:12 +0100
Subject: [Scilab-users] [ScilabTEC 2015] Program Unveiled and Registration Open
Message-ID: <54D9D274.1010504@scilab-enterprises.com>

ScilabTEC 2015
------------------------------------------------------------------------
ScilabTEC 2015 Program Unveiled and Registration Open

*2 keynote speakers:*
- Roberto Di Cosmo, director of IRILL (French research structure dedicated to Free and Open Source Software quality),
- Yohan Livet, software architect at CEA/CESTA (French Atomic Energy and Alternative Energies Commission).

And also high-quality presentations and innovative applications in various fields, with contributions from Inmetro, Evidence Srl, Inria, Noesis Solutions, Technische Universität München, Xilinx, Bavarian State Research Center for Agriculture, KIT, CNES, University of Luxembourg, Embedded Solutions, Silkan, Sanofi, LASTIMI Laboratory, Temasek Polytechnic, Johnson Electric Gate.

Discover the complete program and register on http://www.scilabtec.com/

On the first evening, participants may meet the organizers over dinner, a moment of conviviality and exchange for all.

REGISTER NOW (limited seating)
Early-bird before March 10, 2015

Scilab Enterprises
------------------------------------------------------------------------
Communication Department, Scilab Enterprises | communication at scilab-enterprises.com
143bis rue Yves Le Coz - 78000 Versailles | www.scilab-enterprises.com - www.scilab.org
Unsubscribe scilabtec

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/png
Size: 23692 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/png
Size: 10691 bytes
Desc: not available
URL:

From hmmorae at gmail.com Wed Feb 11 16:48:18 2015
From: hmmorae at gmail.com (Hector Mora)
Date: Wed, 11 Feb 2015 10:48:18 -0500
Subject: [Scilab-users] linux command line
Message-ID:

Hi all:

I would like to run a Linux command line from Scilab. Is it possible?

For example: mv file1.txt file2.txt

Best regards

Hector

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From paul.bignier at scilab-enterprises.com Wed Feb 11 16:54:53 2015
From: paul.bignier at scilab-enterprises.com (Paul Bignier)
Date: Wed, 11 Feb 2015 16:54:53 +0100
Subject: [Scilab-users] linux command line
In-Reply-To:
References:
Message-ID: <54DB7B4D.8080000@scilab-enterprises.com>

Hello Hector,

The File section of the help will give you the Scilab equivalents of the Linux commands you are looking for :)
Otherwise, use the unix function to make your own command line.

Regards,
Paul

On 02/11/2015 04:48 PM, Hector Mora wrote:
> Hi all:
>
> I would like to run a Linux command line from Scilab. Is it possible?
>
> For example: mv file1.txt file2.txt
>
> Best regards
>
> Hector
>
> _______________________________________________
> users mailing list
> users at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users

--
Paul BIGNIER
Development engineer
-----------------------------------------------------------
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.69
http://www.scilab-enterprises.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sgougeon at free.fr Wed Feb 11 17:48:55 2015
From: sgougeon at free.fr (sgougeon at free.fr)
Date: Wed, 11 Feb 2015 17:48:55 +0100 (CET)
Subject: [Scilab-users] linux command line
In-Reply-To:
Message-ID: <1831192031.366541834.1423673335776.JavaMail.root@zimbra75-e12.priv.proxad.net>

Hello,

> I would like to run a Linux command line from Scilab. Is it possible?
>
> For example mv file1.txt file2.txt

Yes, you may use one of the unix..() or host() functions:
http://help.scilab.org/docs/5.5.1/en_US/section_7182261dbbb2bb2293bb9166ba5f1fb3.html

Regards
Samuel
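To make the two suggestions concrete, a small sketch of both approaches (the file names are simply those from Hector's example):

// call the shell directly from Scilab
status = host("mv file1.txt file2.txt");   // host() returns the shell exit status
out = unix_g("ls -l");                     // unix_g() works similarly and returns the command output as text

// or do the same rename with Scilab's portable file functions
// movefile("file1.txt", "file2.txt");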
From bmbouter at gmail.com Thu Feb 12 16:56:21 2015
From: bmbouter at gmail.com (Brian Bouterse)
Date: Thu, 12 Feb 2015 10:56:21 -0500
Subject: [Scilab-users] Using GROCER ms_var parameters for forecasting
Message-ID:

I use GROCER's ms_var function to estimate a single-variable VAR model, and it estimates parameters as expected and described by the manual. I want to train and evaluate my model on different data sets to avoid bias from training and benchmarking on the same data set. How can this be done?

For example, consider data set A (month 1) and data set B (month 2) from a 2-month sample. I would like to train on month 1 and then benchmark on month 2.

I use ms_var to train on data set A. It gives me estimated parameters and filtered regime probabilities. That works well. How can I use the trained parameters to then estimate on month 2 data?

I'm aware of the ms_forecast function, but it seems to only forecast using the results from an estimator like ms_var(). The forecasting will then only be done on the same data as was used for estimating. I want to use the trained parameters to produce estimates for a different data set.

Thanks in advance. I really appreciate being able to use this software.

-Brian

--
Brian Bouterse

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From grocer.toolbox at gmail.com Thu Feb 12 21:44:19 2015
From: grocer.toolbox at gmail.com (Eric Dubois)
Date: Thu, 12 Feb 2015 21:44:19 +0100
Subject: [Scilab-users] Using GROCER ms_var parameters for forecasting
In-Reply-To:
References:
Message-ID:

Dear Brian,

If I have understood correctly, you want:
- to estimate a ms_var model on a subset of your dataset;
- recover the estimated parameters;
- and calculate the filtered state probabilities on the other part of your dataset with these parameters.

This can be done:
- the function MSVAR_Filt calculates, among other things, the filtered probabilities (5th output);
- the function needs, among other things, the parameters of the model; they can be recovered from the output tlist of function ms_var; if you give it the name res (with --> res=ms_var(...)), this is the field 'coeff' in the output tlist (res('coeff') with this example).

But the function MSVAR_Filt also has to be fed with matrices y_hat, x_hat and z_hat that are matrices derived from the matrix of endogenous and exogenous variables (see function ms_var to see how it is done).
If you are not in too much of a hurry, I can write the function that gathers all these operations within a few weeks.

Éric.

2015-02-12 16:56 GMT+01:00 Brian Bouterse :

> I use GROCER's ms_var function to estimate a single-variable VAR model, and it estimates parameters as expected and described by the manual. I want to train and evaluate my model on different data sets to avoid bias from training and benchmarking on the same data set. How can this be done?
>
> For example, consider data set A (month 1) and data set B (month 2) from a 2-month sample. I would like to train on month 1 and then benchmark on month 2.
>
> I use ms_var to train on data set A. It gives me estimated parameters and filtered regime probabilities. That works well. How can I use the trained parameters to then estimate on month 2 data?
>
> I'm aware of the ms_forecast function, but it seems to only forecast using the results from an estimator like ms_var(). The forecasting will then only be done on the same data as was used for estimating. I want to use the trained parameters to produce estimates for a different data set.
>
> Thanks in advance. I really appreciate being able to use this software.
>
> -Brian
>
> --
> Brian Bouterse
>
> _______________________________________________
> users mailing list
> users at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From n.strelkov at gmail.com Sat Feb 14 18:23:47 2015
From: n.strelkov at gmail.com (nikolay)
Date: Sat, 14 Feb 2015 10:23:47 -0700 (MST)
Subject: [Scilab-users] Problem obtaining Dirac delta function in Xcos
Message-ID: <1423934627227-4031722.post@n3.nabble.com>

Hello Scilab users!

I have a problem with a simple Xcos diagram. I'm trying to get the Dirac delta function as the derivative of a step function. The Xcos diagram is in the attachment (dirac_problem.zcos).

I use the following blocks:
CLOCK_c
ENDBLK (final simulation time is 2)
CLINDUMMY_f
STEP_FUNCTION (step time 1.0, initial value 0, final value 1)
DERIV
DIFF_f
scifunc_block_m (with y1=diff(u1) function inside)
CMSCOPE
3 TOWS_c for saving results to workspace (step, dirac1 and dirac3).

If I manually do diff(step) I get the expected Dirac function. If I replace STEP_FUNCTION with RAMP (slope 1, start time 1, initial value 0) I get the expected result - a plot of the Heaviside function (see ramp.zcos).

Can anybody help me? What is the correct way to obtain the Dirac delta function in Xcos?

With best regards,
maintainer and developer of Mathieu functions toolbox for Scilab,
IEEE member, Ph.D.,
Nikolay Strelkov.

--
View this message in context: http://mailinglists.scilab.org/Problem-obtaining-Dirac-delta-function-in-Xcos-tp4031722.html
Sent from the Scilab users - Mailing Lists Archives mailing list archive at Nabble.com.

From quantparis at numericable.fr Sat Feb 14 21:37:19 2015
From: quantparis at numericable.fr (quantparis at numericable.fr)
Date: Sat, 14 Feb 2015 21:37:19 +0100 (CET)
Subject: [Scilab-users] compatibility Atoms scilab 5.6 / Ubuntu14.04
Message-ID:

Hello,

I have recently installed Scilab 5.6 on Ubuntu 14.04 but I get the following message with ATOMS:

No ATOMS module is available. Please, check your Internet connection or make sure that your OS is compatible with ATOMS.

Has anyone run ATOMS from Scilab 5.6 on Ubuntu 14.04? Thank you in advance for the feedback.

Pascal

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arvid at softube.com Sun Feb 15 12:01:46 2015
From: arvid at softube.com (=?utf-8?Q?Arvid_Ros=C3=A9n?=)
Date: Sun, 15 Feb 2015 12:01:46 +0100
Subject: [Scilab-users] Parallel execution on Mac
Message-ID: <2B62F3E8-30D2-485D-A861-93ECB82C9EEF@softube.com>

Anyone out there who is using any successful ways of running heavy Scilab jobs in parallel across several cores on Mac OS?

8 cores, with 1 in use, is a bit annoying when you are sitting there waiting for the processing to finish.

Cheers,
Arvid

From pablo_f_7 at hotmail.com Sun Feb 15 15:37:00 2015
From: pablo_f_7 at hotmail.com (Pablo Fonovich)
Date: Sun, 15 Feb 2015 11:37:00 -0300
Subject: [Scilab-users] compatibility Atoms scilab 5.6 / Ubuntu14.04
In-Reply-To:
References:
Message-ID:

Hi,

I assume you refer to the YaSp version, i.e. the current Scilab 6.0 development build? I think it tries to look in the Scilab 6.0 repository for ATOMS, but that repository is not yet available, so I think you could just use:

atomsRepositoryAdd("http://atoms.scilab.org/5.5")

and use the previous repository. When the stable 6.0 version comes out, I think the 6.0 repository will be available.

The problem with this is that I can't remove the 6.0 repo with atomsRepositoryDel("http://atoms.scilab.org/6.0 official"), and this brings trouble... perhaps you should just wait till this is solved in the new version.

From: quantparis at numericable.fr
To: users at lists.scilab.org
Date: Sat, 14 Feb 2015 21:37:19 +0100
Subject: [Scilab-users] compatibility Atoms scilab 5.6 / Ubuntu14.04

Hello, I have recently installed Scilab 5.6 on Ubuntu 14.04 but I get the following message with ATOMS: No ATOMS module is available. Please, check your Internet connection or make sure that your OS is compatible with ATOMS. Has anyone run ATOMS from Scilab 5.6 on Ubuntu 14.04? Thank you in advance for the feedback. Pascal

_______________________________________________
users mailing list
users at lists.scilab.org
http://lists.scilab.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
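A compact sketch of the workaround Pablo describes above: registering the 5.5 repository and then installing a module. The module name is only an example taken from earlier in this digest, and whether the 5.5 packages work under the development build is an assumption:

atomsRepositoryAdd("http://atoms.scilab.org/5.5");   // register the 5.5 package repository
atomsSystemUpdate();                                 // refresh the list of available modules
atomsInstall("specfun");                             // then install a module, e.g. specfun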
From stephane.mottelet at utc.fr Sun Feb 15 15:33:21 2015
From: stephane.mottelet at utc.fr (=?ISO-8859-1?Q?St=E9phane_Mottelet?=)
Date: Sun, 15 Feb 2015 15:33:21 +0100
Subject: [Scilab-users] Parallel execution on Mac
In-Reply-To: <2B62F3E8-30D2-485D-A861-93ECB82C9EEF@softube.com>
References: <2B62F3E8-30D2-485D-A861-93ECB82C9EEF@softube.com>
Message-ID: <54E0AE31.6070403@utc.fr>

On 15/02/2015 12:01, Arvid Rosén wrote:
> Anyone out there who is using any successful ways of running heavy Scilab jobs in parallel across several cores on Mac OS?
>
> 8 cores, with 1 in use, is a bit annoying when you are sitting there waiting for the processing to finish.
>
> Cheers,
> Arvid
> _______________________________________________
> users mailing list
> users at lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users

Hello,

The problem is that parallel_run is still broken under Mac OS X (see http://bugzilla.scilab.org/show_bug.cgi?id=13158). For some applications, such as Monte-Carlo estimations, this is not a big problem, since you just have to submit more tasks than expected. But for other tasks, such as partitioning a big domain for independent computations (e.g. computing the Mandelbrot set), this prohibits its usage. Under Linux, I have no problem using 40 cores simultaneously.

S.
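For readers who have not used it, a minimal parallel_run sketch of the kind discussed above (it works under Linux; under Mac OS X it is affected by the bug Stéphane mentions). The function f is only an illustrative placeholder for a heavy computation:

// dispatch independent evaluations of f over the available cores
function y = f(x)
    y = sum(x .^ 2);   // placeholder for an expensive computation
endfunction

res = parallel_run(1:40, "f");   // one result per input value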
From arvid at softube.com Sun Feb 15 16:07:54 2015 From: arvid at softube.com (=?utf-8?Q?Arvid_Ros=C3=A9n?=) Date: Sun, 15 Feb 2015 16:07:54 +0100 Subject: [Scilab-users] Parallel execution on Mac In-Reply-To: <54E0AE31.6070403@utc.fr> References: <2B62F3E8-30D2-485D-A861-93ECB82C9EEF@softube.com> <54E0AE31.6070403@utc.fr> Message-ID: <3B2A3EB6-101A-4431-902F-C8C8F475C3F2@softube.com> Yeah, I know about that bug. Does anyone know if it is difficult to fix? I have used a custom solution for parallel processing previously (using fork), but it crashes now in libBLAS. /Arvid > 15 feb 2015 kl. 15:46 skrev St?phane Mottelet : > > Le 15/02/2015 12:01, Arvid Ros?n a ?crit : >> Anyone out there who is using any successful ways of running heavy scilab jobs in parallel across several cores on Mac OS? >> >> 8 cores, with 1 in use is, is a bit annoying when you are sitting there waiting for the processing to finish. >> >> Cheers, >> Arvid >> _______________________________________________ >> users mailing list >> users at lists.scilab.org >> http://lists.scilab.org/mailman/listinfo/users > Hello, > > The problem is that parallel_run is still broken under MacOSX (see http://bugzilla.scilab.org/show_bug.cgi?id=13158). For some applications, such as Monte-Carlo estimations, this is not a big problem, since you just have to submit more tasks that expected. But for other tasks, such as partionning a big domain for independant computations (e.g. computing the Mandelbrot Set), this prohibits its usage. Under Linux, I have no problem using simultaneously 40 cores. > > S. > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users From bmbouter at gmail.com Tue Feb 17 15:03:03 2015 From: bmbouter at gmail.com (Brian Bouterse) Date: Tue, 17 Feb 2015 09:03:03 -0500 Subject: [Scilab-users] Using GROCER ms_var parameters for forecasting In-Reply-To: References: Message-ID: Hi Eric, Thanks for the reply! Yes you understand my goals correctly, but one clarification: It would be better to have the estimated values directly instead of the filtered state probabilities. I usually get these with ms_forecast(r, n). I've been reading through the grocer code to determine how to write the function you suggest. I do need it sooner than a few weeks so I'm attempting to do it. It seems straightforward except for the y_hat, x_hat, and z_hat variables I need to provide to MSVAR_Filt.(). Here are some questions: 1) You say I need to feed MSVAR_Filt() with y_hat, x_hat, and z_hat, but the variables in the function signature for MSVAR_Filt read as y_mat,x_mat,z_mat. Did you mean y_mat or y_hat? 2) y_hat (2nd output) is an output of MSVAR_Filt(). The function comments say that is my estimated y. Is that the direct estimates that I am looking for? 3) I read through ms_var() to see how to derive the y_hat, x_hat, and z_hat variables that are needed, but I don't see any code in ms_var that derive these variables. Can you more specifically point out where the code is that shows the derivation of these matrices? Separate from those questions I am wondering what kind of bias is introduced if I use the filtered probabilities from ms_var? Could I use those instead of attempting to predict with data set A and evaluate with data set B. 
The reason I like the two data set methodology is that the training data (A) is separated from the evaluation data (B) so there can't be any bias in terms of measuring how the trained data generalizes when benchmarked on evaluation data because the training model never saw data set (B). Chapter 23 says the filtered probabilities only use data up until that point in time, but it uses estimates that were built from all information that is available. It seems biased to evaluate the residuals using filtered probabilities (or smoothed probabilities) because training and evaluating error on the same data set seems wrong. What do you think the right way is to use these tools to avoid bias when measuring error of model performance? Thanks for any information. Also is there any possibility for us to chat on IRC? I'm 'bmbouter' in #scilab on freenode if you want to chat there. It would probably be faster than e-mail. Thanks! Brian On Thu, Feb 12, 2015 at 3:44 PM, Eric Dubois wrote: > Dear Brian. > > If I have well understood, you want: > - to estimate a ms_var model on a subset of your dataset; > - recover the estimated parameters; > - and calculate the filtered state probabilities on the other part of your > dataset with these parameters. > > This can be done: > - the function MSVAR_Filt calculates among other the filetered > probabilities (5th output); > - the function needs among other things the parameters of the model; they > can be recovered from the output tlist of function ms_var; if give it the > name res (with --> res=ms_var(...)): this is the field 'coeff' in the > output tlist (res('coeff') with this example); > > But the function MSVAR_Filt also has to be fed with matrices y_hat, x_hat > and z_hat that are matrices derived from the matrix of endogenous and > exogenous variables (see function ms_var to see how it is done). > > If you are not too in a hurry, I can write the function that gathers all > these operations within a few weeks. > > ?ric. > > 2015-02-12 16:56 GMT+01:00 Brian Bouterse : > >> I use GROCER's ms_var function to estimate a single variable VAR model, >> and it estimates parameters as expected and described by the manual. I want >> to train and evaluate my model on different data sets to avoid bias from >> training and benchmarking on the same data set. How can this be done? >> >> For example consider data set A (month 1) and data set B (month 2) from a >> 2 month sample. I would like to train on month 1 and then benchmark on >> month 2. >> >> I use ms_var to train on data set A. It gives me estimated parameters and >> filtered regime probabilities. That works well. How can I use the trained >> parameters to then estimate on month 2 data? >> >> I'm aware of the ms_forecast function, but it seems to only forecast >> using the results from an estimator like ms_var(). The forecasting will >> then only be done on the same data as was used for estimating. I want to >> use the trained parameters to product estimates for a different data set. >> >> Thanks in advance. I really appreciate being able to use this software. >> >> -Brian >> >> -- >> Brian Bouterse >> >> _______________________________________________ >> users mailing list >> users at lists.scilab.org >> http://lists.scilab.org/mailman/listinfo/users >> >> > > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users > > -- Brian Bouterse -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From grocer.toolbox at gmail.com Tue Feb 17 21:50:34 2015
From: grocer.toolbox at gmail.com (Eric Dubois)
Date: Tue, 17 Feb 2015 21:50:34 +0100
Subject: [Scilab-users] Using GROCER ms_var parameters for forecasting
In-Reply-To:
References:
Message-ID:

Dear Brian,

1) Sorry, I indeed made a typo and meant to speak about y_mat, x_mat and z_mat.

2) I do not know exactly what you want, but you can calculate what you want from the parameters and all the other inputs.

3) You will find attached a function run_ms_var that performs, I hope, what you need: this function takes a results tlist from a ms_var execution and a vector of endogenous variables to feed the VAR (your benchmark data).

I have checked that if you give as endogenous variables exactly the same variables as the ones used for estimation, you recover the same yhat, filtered probs, etc.

To use the function, you have to save it in a folder, say c:/newms, and run in Scilab:
--> getd('c:/newms')

To check what I mentioned above, run:
--> load(GROCERDIR+'\data\us_revu.dat')
--> bounds('1967m4','2004m2')
--> nb_states=2
--> switch_var=2 // variances are switching
--> var_opt=3 // heteroskedastic var-cov matrix
--> r=ms_var('cte',3,'100*(log(us_revu)-lagts(2,log(us_revu)))',nb_states,switch_var,var_opt,'prt=initial;final','transf=stud')
--> [y_hat,resid,PR,PR_STT,PR_STL]=run_ms_var(r,'100*(log(us_revu)-lagts(2,log(us_revu)))')
--> PR_STT-r('filtered probs')

The function is rather rough (no header, no options, ...) and can be improved, but I hope it answers your needs.

Éric.

2015-02-17 15:03 GMT+01:00 Brian Bouterse :

> Hi Eric,
>
> Thanks for the reply! Yes you understand my goals correctly, but one clarification: It would be better to have the estimated values directly instead of the filtered state probabilities. I usually get these with ms_forecast(r, n).
>
> I've been reading through the grocer code to determine how to write the function you suggest. I do need it sooner than a few weeks so I'm attempting to do it. It seems straightforward except for the y_hat, x_hat, and z_hat variables I need to provide to MSVAR_Filt(). Here are some questions:
>
> 1) You say I need to feed MSVAR_Filt() with y_hat, x_hat, and z_hat, but the variables in the function signature for MSVAR_Filt read as y_mat, x_mat, z_mat. Did you mean y_mat or y_hat?
>
> 2) y_hat (2nd output) is an output of MSVAR_Filt(). The function comments say that is my estimated y. Is that the direct estimates that I am looking for?
>
> 3) I read through ms_var() to see how to derive the y_hat, x_hat, and z_hat variables that are needed, but I don't see any code in ms_var that derives these variables. Can you more specifically point out where the code is that shows the derivation of these matrices?
>
> Separate from those questions I am wondering what kind of bias is introduced if I use the filtered probabilities from ms_var? Could I use those instead of attempting to predict with data set A and evaluate with data set B.
It seems biased to evaluate the residuals > using filtered probabilities (or smoothed probabilities) because training > and evaluating error on the same data set seems wrong. What do you think > the right way is to use these tools to avoid bias when measuring error of > model performance? > > Thanks for any information. Also is there any possibility for us to chat > on IRC? I'm 'bmbouter' in #scilab on freenode if you want to chat there. It > would probably be faster than e-mail. > > Thanks! > Brian > > > On Thu, Feb 12, 2015 at 3:44 PM, Eric Dubois > wrote: > >> Dear Brian. >> >> If I have well understood, you want: >> - to estimate a ms_var model on a subset of your dataset; >> - recover the estimated parameters; >> - and calculate the filtered state probabilities on the other part of >> your dataset with these parameters. >> >> This can be done: >> - the function MSVAR_Filt calculates among other the filetered >> probabilities (5th output); >> - the function needs among other things the parameters of the model; they >> can be recovered from the output tlist of function ms_var; if give it the >> name res (with --> res=ms_var(...)): this is the field 'coeff' in the >> output tlist (res('coeff') with this example); >> >> But the function MSVAR_Filt also has to be fed with matrices y_hat, x_hat >> and z_hat that are matrices derived from the matrix of endogenous and >> exogenous variables (see function ms_var to see how it is done). >> >> If you are not too in a hurry, I can write the function that gathers all >> these operations within a few weeks. >> >> ?ric. >> >> 2015-02-12 16:56 GMT+01:00 Brian Bouterse : >> >>> I use GROCER's ms_var function to estimate a single variable VAR model, >>> and it estimates parameters as expected and described by the manual. I want >>> to train and evaluate my model on different data sets to avoid bias from >>> training and benchmarking on the same data set. How can this be done? >>> >>> For example consider data set A (month 1) and data set B (month 2) from >>> a 2 month sample. I would like to train on month 1 and then benchmark on >>> month 2. >>> >>> I use ms_var to train on data set A. It gives me estimated parameters >>> and filtered regime probabilities. That works well. How can I use the >>> trained parameters to then estimate on month 2 data? >>> >>> I'm aware of the ms_forecast function, but it seems to only forecast >>> using the results from an estimator like ms_var(). The forecasting will >>> then only be done on the same data as was used for estimating. I want to >>> use the trained parameters to product estimates for a different data set. >>> >>> Thanks in advance. I really appreciate being able to use this software. >>> >>> -Brian >>> >>> -- >>> Brian Bouterse >>> >>> _______________________________________________ >>> users mailing list >>> users at lists.scilab.org >>> http://lists.scilab.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> users mailing list >> users at lists.scilab.org >> http://lists.scilab.org/mailman/listinfo/users >> >> > > > -- > Brian Bouterse > > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: run_ms_var.sci Type: application/octet-stream Size: 2600 bytes Desc: not available URL: From julie.paul at scilab-enterprises.com Thu Feb 19 09:08:03 2015 From: julie.paul at scilab-enterprises.com (Julie PAUL) Date: Thu, 19 Feb 2015 09:08:03 +0100 Subject: [Scilab-users] [Free Webinar] Solar energy system optimization with Scilab & Optimus Message-ID: <13f473a1c5db80b0b5dce8c447abd472@scilab-enterprises.com> Dear Scilab users, Noesis Solutions & Scilab Enterprises invite you to join a webinar on March 5, to discover how easily Optimus integrates Scilab?into a design optimization process. In this free webinar, you will see how process integration and multiobjective optimization helped increase the efficiency and minimizing the cost of a solar energy system. Join the Webinar on Thursday March 5 at: - 10:00 AM CET / 04:00 AM EST - 06:00 PM CET / 12:00 PM EST Presenting during this webinar: - Taylor Newill - Application Engineer, Noesis Solutions - Claude Gomez - CEO, Scilab Enterprises Free webinar, registration mandatory on http://pages.noesissolutions.com/2015-03-05-optimus-scilab-enews/?source=sef -- Julie PAUL Scilab Enterprises Communications & Public Relations Director 143bis rue Yves Le Coz - 78000 Versailles (FR) http://www.scilab-enterprises.com - http://www.scilab.org From grivet at cnrs-orleans.fr Thu Feb 19 12:19:32 2015 From: grivet at cnrs-orleans.fr (grivet) Date: Thu, 19 Feb 2015 12:19:32 +0100 Subject: [Scilab-users] write permission In-Reply-To: References: <54C3E4FB.6010903@gmail.com> Message-ID: <54E5C6C4.9010602@cnrs-orleans.fr> Hello, During the last year, I have been creating animations with Scilab (5.4.1 and lately 5.5.0 under Win7). Suddenly yesterday, my program failed with the error message xs2gif(gcf(),nomf); !--error 999 xs2gif : Impossible de cr?er le fichier d'exportation, permission refus?e. I checked that I could write into the relevant directory using notepad++, LibreOffice and other software. I switched back to Scilab 5.4.1 and met the same problem. I tried programs that used to work and they failed with the same error message. Any idea on the cause and cure of this problem ? Thanks in advance JP Grivet From bmbouter at gmail.com Thu Feb 19 14:28:53 2015 From: bmbouter at gmail.com (Brian Bouterse) Date: Thu, 19 Feb 2015 08:28:53 -0500 Subject: [Scilab-users] Using GROCER ms_var parameters for forecasting In-Reply-To: References: Message-ID: Hi Eric, Thank you so much for the function. The verification step you demonstrate are convincing that the implementation produces the correct filtered probability result on the benchmark data. I've been able to reproduce your demo results, and also apply it to my own data set. This is great! There is one more thing that I'm not sure how to do for the single variable case. How can I take the results I have from run_ms_var() and use them with ms_forecast() to produce a single variable filtered estimate? The results I have are [y_hat,resid,PR,PR_STT,PR_STL]. I imagine this could be done using the following pseudocode: for each time step in PR_STT: select the regime with the highest filtered probability for this time step (ie: say regime N). This is like a maximum likelihood selection. select the autoregressive parameters for regime N from the original training step forecast the next time step using the autoregressive parameters using regime N This seems very similar to what ms_forecast() can do, but I'm not sure how to call ms_forecast given only the existence of parameters [y_hat,resid,PR,PR_STT,PR_STL]. 
Is this possible? Perhaps one of the variables [y_hat,resid,PR,PR_STT,PR_STL] already contains what I am looking for, but I want to be sure that it is based on the filtered probabilities and not considering data that comes later in the data set than the point of prediction. Does that make sense? In other words I want to predict the specific value at time t, and only consider data on the interval [0, t-1]. Thanks again for everything you've done including writing this, helping me, responding so quickly, etc. This is really great. -Brian On Tue, Feb 17, 2015 at 3:50 PM, Eric Dubois wrote: > Dear Brian. > > 1) sorry, I made indeed a typo and wanted to speak about y_mat, x_mat and > z_mat. > > 2) I do not know exactly what you want, but you can calculate what you > want from the parameters and all other inputs > > 3) you will find attached a function run_ms_var that performs, I hope, > what you need: this function takes a results tlist from a ms_var execution > and a vector of endogenous variables to feed the VAR (your benchmark data). > > I have checked that if you give as endogenous variables exactly the same > variables as the one used for estimation, you recover the same yhat, > filtered probs, etc. > > To use the function, you have to save it in a folder, say c:/newms, and > run into Scilab > --> getd('c:/newms) > > To check what I mentionned above, run: > --> load(GROCERDIR+'\data\us_revu.dat') > --> bounds('1967m4','2004m2') > --> nb_states=2 > --> switch_var=2 // variances are switching > --> var_opt=3 // heteroskedastik var-cov matrix > > --> r=ms_var('cte',3,'100*(log(us_revu)-lagts(2,log(us_revu)))',nb_states,switch_var,var_opt,'prt=initial;final','transf=stud') > > --> [y_hat,resid,PR,PR_STT,PR_STL]=run_ms_var(r,'100*(log(us_revu)-lagts(2,log(us_revu)))' > --> PR_STT-r('filtered probs') > > The function is rather rough (no header, no options,...) and can be > improved, but I hope it answers your needs. > > ?ric. > > > > > 2015-02-17 15:03 GMT+01:00 Brian Bouterse : > >> Hi Eric, >> >> Thanks for the reply! Yes you understand my goals correctly, but one >> clarification: It would be better to have the estimated values directly >> instead of the filtered state probabilities. I usually get these with >> ms_forecast(r, n). >> >> I've been reading through the grocer code to determine how to write the >> function you suggest. I do need it sooner than a few weeks so I'm >> attempting to do it. It seems straightforward except for the y_hat, x_hat, >> and z_hat variables I need to provide to MSVAR_Filt.(). Here are some >> questions: >> >> 1) You say I need to feed MSVAR_Filt() with y_hat, x_hat, and z_hat, but >> the variables in the function signature for MSVAR_Filt read >> as y_mat,x_mat,z_mat. Did you mean y_mat or y_hat? >> >> 2) y_hat (2nd output) is an output of MSVAR_Filt(). The function comments >> say that is my estimated y. Is that the direct estimates that I am looking >> for? >> >> 3) I read through ms_var() to see how to derive the y_hat, x_hat, and >> z_hat variables that are needed, but I don't see any code in ms_var that >> derive these variables. Can you more specifically point out where the code >> is that shows the derivation of these matrices? >> >> Separate from those questions I am wondering what kind of bias is >> introduced if I use the filtered probabilities from ms_var? Could I use >> those instead of attempting to predict with data set A and evaluate with >> data set B. 
The reason I like the two data set methodology is that the >> training data (A) is separated from the evaluation data (B) so there can't >> be any bias in terms of measuring how the trained data generalizes when >> benchmarked on evaluation data because the training model never saw data >> set (B). Chapter 23 says the filtered probabilities only use data up until >> that point in time, but it uses estimates that were built from all >> information that is available. It seems biased to evaluate the residuals >> using filtered probabilities (or smoothed probabilities) because training >> and evaluating error on the same data set seems wrong. What do you think >> the right way is to use these tools to avoid bias when measuring error of >> model performance? >> >> Thanks for any information. Also is there any possibility for us to chat >> on IRC? I'm 'bmbouter' in #scilab on freenode if you want to chat there. It >> would probably be faster than e-mail. >> >> Thanks! >> Brian >> >> >> On Thu, Feb 12, 2015 at 3:44 PM, Eric Dubois >> wrote: >> >>> Dear Brian. >>> >>> If I have well understood, you want: >>> - to estimate a ms_var model on a subset of your dataset; >>> - recover the estimated parameters; >>> - and calculate the filtered state probabilities on the other part of >>> your dataset with these parameters. >>> >>> This can be done: >>> - the function MSVAR_Filt calculates among other the filetered >>> probabilities (5th output); >>> - the function needs among other things the parameters of the model; >>> they can be recovered from the output tlist of function ms_var; if give it >>> the name res (with --> res=ms_var(...)): this is the field 'coeff' in the >>> output tlist (res('coeff') with this example); >>> >>> But the function MSVAR_Filt also has to be fed with matrices y_hat, >>> x_hat and z_hat that are matrices derived from the matrix of endogenous and >>> exogenous variables (see function ms_var to see how it is done). >>> >>> If you are not too in a hurry, I can write the function that gathers all >>> these operations within a few weeks. >>> >>> ?ric. >>> >>> 2015-02-12 16:56 GMT+01:00 Brian Bouterse : >>> >>>> I use GROCER's ms_var function to estimate a single variable VAR model, >>>> and it estimates parameters as expected and described by the manual. I want >>>> to train and evaluate my model on different data sets to avoid bias from >>>> training and benchmarking on the same data set. How can this be done? >>>> >>>> For example consider data set A (month 1) and data set B (month 2) from >>>> a 2 month sample. I would like to train on month 1 and then benchmark on >>>> month 2. >>>> >>>> I use ms_var to train on data set A. It gives me estimated parameters >>>> and filtered regime probabilities. That works well. How can I use the >>>> trained parameters to then estimate on month 2 data? >>>> >>>> I'm aware of the ms_forecast function, but it seems to only forecast >>>> using the results from an estimator like ms_var(). The forecasting will >>>> then only be done on the same data as was used for estimating. I want to >>>> use the trained parameters to product estimates for a different data set. >>>> >>>> Thanks in advance. I really appreciate being able to use this software. 
>>>> >>>> -Brian >>>> >>>> -- >>>> Brian Bouterse >>>> >>>> _______________________________________________ >>>> users mailing list >>>> users at lists.scilab.org >>>> http://lists.scilab.org/mailman/listinfo/users >>>> >>>> >>> >>> _______________________________________________ >>> users mailing list >>> users at lists.scilab.org >>> http://lists.scilab.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Brian Bouterse >> >> _______________________________________________ >> users mailing list >> users at lists.scilab.org >> http://lists.scilab.org/mailman/listinfo/users >> >> > > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users > > -- Brian Bouterse -------------- next part -------------- An HTML attachment was scrubbed... URL: From grocer.toolbox at gmail.com Thu Feb 19 21:43:50 2015 From: grocer.toolbox at gmail.com (Eric Dubois) Date: Thu, 19 Feb 2015 21:43:50 +0100 Subject: [Scilab-users] Using GROCER ms_var parameters for forecasting In-Reply-To: References: Message-ID: Dear Brian You cannot perform forecasts with the results fo the function I sent you, because these results are under a matrix form while ms_forecast needs a results tlist (typed list). What is needed is therefore a results tlist with all needed fields to make forecasts. You will find enclosed a new ms_var_run function that makes that. What I have done is replacing the results that are new in the results tlist estimated, while keeping all invariant results (suach as estimated parameters, t-stats,...): I think I have done it properly, but I cannot insure you that it is the case. Starting for the previous example, replace: --> [y_hat,resid,PR,PR_STT,PR_STL]=run_ms_var(r,'100*(log( us_revu)-lagts(2,log(us_revu)))' with: -->newr=run_ms_var(r,'100*(log(us_revu)-lagts(2,log(us_revu)))' and then make a forecast with: --> rf=ms_forecast(newr,'2004m12') Again, the function is rough and should be improved somehow. ?ric. 2015-02-19 14:28 GMT+01:00 Brian Bouterse : > Hi Eric, > > Thank you so much for the function. The verification step you demonstrate > are convincing that the implementation produces the correct filtered > probability result on the benchmark data. I've been able to reproduce your > demo results, and also apply it to my own data set. This is great! > > There is one more thing that I'm not sure how to do for the single > variable case. How can I take the results I have from run_ms_var() and use > them with ms_forecast() to produce a single variable filtered estimate? The > results I have are [y_hat,resid,PR,PR_STT,PR_STL]. I imagine this could > be done using the following pseudocode: > > for each time step in PR_STT: > select the regime with the highest filtered probability for this time > step (ie: say regime N). This is like a maximum likelihood selection. > select the autoregressive parameters for regime N from the original > training step > forecast the next time step using the autoregressive parameters using > regime N > > This seems very similar to what ms_forecast() can do, but I'm not sure how > to call ms_forecast given only the existence of parameters > [y_hat,resid,PR,PR_STT,PR_STL]. Is this possible? > > Perhaps one of the variables [y_hat,resid,PR,PR_STT,PR_STL] already > contains what I am looking for, but I want to be sure that it is based on > the filtered probabilities and not considering data that comes later in the > data set than the point of prediction. Does that make sense? 
In other words > I want to predict the specific value at time t, and only consider data on > the interval [0, t-1]. > > Thanks again for everything you've done including writing this, helping > me, responding so quickly, etc. This is really great. > > -Brian > > > > On Tue, Feb 17, 2015 at 3:50 PM, Eric Dubois > wrote: > >> Dear Brian. >> >> 1) sorry, I made indeed a typo and wanted to speak about y_mat, x_mat and >> z_mat. >> >> 2) I do not know exactly what you want, but you can calculate what you >> want from the parameters and all other inputs >> >> 3) you will find attached a function run_ms_var that performs, I hope, >> what you need: this function takes a results tlist from a ms_var execution >> and a vector of endogenous variables to feed the VAR (your benchmark data). >> >> I have checked that if you give as endogenous variables exactly the same >> variables as the one used for estimation, you recover the same yhat, >> filtered probs, etc. >> >> To use the function, you have to save it in a folder, say c:/newms, and >> run into Scilab >> --> getd('c:/newms) >> >> To check what I mentionned above, run: >> --> load(GROCERDIR+'\data\us_revu.dat') >> --> bounds('1967m4','2004m2') >> --> nb_states=2 >> --> switch_var=2 // variances are switching >> --> var_opt=3 // heteroskedastik var-cov matrix >> >> --> r=ms_var('cte',3,'100*(log(us_revu)-lagts(2,log(us_revu)))',nb_states,switch_var,var_opt,'prt=initial;final','transf=stud') >> >> --> [y_hat,resid,PR,PR_STT,PR_STL]=run_ms_var(r,'100*(log(us_revu)-lagts(2,log(us_revu)))' >> --> PR_STT-r('filtered probs') >> >> The function is rather rough (no header, no options,...) and can be >> improved, but I hope it answers your needs. >> >> ?ric. >> >> >> >> >> 2015-02-17 15:03 GMT+01:00 Brian Bouterse : >> >>> Hi Eric, >>> >>> Thanks for the reply! Yes you understand my goals correctly, but one >>> clarification: It would be better to have the estimated values directly >>> instead of the filtered state probabilities. I usually get these with >>> ms_forecast(r, n). >>> >>> I've been reading through the grocer code to determine how to write the >>> function you suggest. I do need it sooner than a few weeks so I'm >>> attempting to do it. It seems straightforward except for the y_hat, x_hat, >>> and z_hat variables I need to provide to MSVAR_Filt.(). Here are some >>> questions: >>> >>> 1) You say I need to feed MSVAR_Filt() with y_hat, x_hat, and z_hat, but >>> the variables in the function signature for MSVAR_Filt read >>> as y_mat,x_mat,z_mat. Did you mean y_mat or y_hat? >>> >>> 2) y_hat (2nd output) is an output of MSVAR_Filt(). The function >>> comments say that is my estimated y. Is that the direct estimates that I am >>> looking for? >>> >>> 3) I read through ms_var() to see how to derive the y_hat, x_hat, and >>> z_hat variables that are needed, but I don't see any code in ms_var that >>> derive these variables. Can you more specifically point out where the code >>> is that shows the derivation of these matrices? >>> >>> Separate from those questions I am wondering what kind of bias is >>> introduced if I use the filtered probabilities from ms_var? Could I use >>> those instead of attempting to predict with data set A and evaluate with >>> data set B. 
The reason I like the two data set methodology is that the >>> training data (A) is separated from the evaluation data (B) so there can't >>> be any bias in terms of measuring how the trained data generalizes when >>> benchmarked on evaluation data because the training model never saw data >>> set (B). Chapter 23 says the filtered probabilities only use data up until >>> that point in time, but it uses estimates that were built from all >>> information that is available. It seems biased to evaluate the residuals >>> using filtered probabilities (or smoothed probabilities) because training >>> and evaluating error on the same data set seems wrong. What do you think >>> the right way is to use these tools to avoid bias when measuring error of >>> model performance? >>> >>> Thanks for any information. Also is there any possibility for us to chat >>> on IRC? I'm 'bmbouter' in #scilab on freenode if you want to chat there. It >>> would probably be faster than e-mail. >>> >>> Thanks! >>> Brian >>> >>> >>> On Thu, Feb 12, 2015 at 3:44 PM, Eric Dubois >>> wrote: >>> >>>> Dear Brian. >>>> >>>> If I have understood correctly, you want: >>>> - to estimate a ms_var model on a subset of your dataset; >>>> - recover the estimated parameters; >>>> - and calculate the filtered state probabilities on the other part of >>>> your dataset with these parameters. >>>> >>>> This can be done: >>>> - the function MSVAR_Filt calculates, among other things, the filtered >>>> probabilities (5th output); >>>> - the function needs among other things the parameters of the model; >>>> they can be recovered from the output tlist of function ms_var; if you give it >>>> the name res (with --> res=ms_var(...)): this is the field 'coeff' in the >>>> output tlist (res('coeff') with this example); >>>> >>>> But the function MSVAR_Filt also has to be fed with matrices y_hat, >>>> x_hat and z_hat that are matrices derived from the matrix of endogenous and >>>> exogenous variables (see function ms_var to see how it is done). >>>> >>>> If you are not in too much of a hurry, I can write the function that gathers >>>> all these operations within a few weeks. >>>> >>>> Éric. >>>> >>>> 2015-02-12 16:56 GMT+01:00 Brian Bouterse : >>>> >>>>> I use GROCER's ms_var function to estimate a single variable VAR >>>>> model, and it estimates parameters as expected and described by the >>>>> manual. I want to train and evaluate my model on different data sets to >>>>> avoid bias from training and benchmarking on the same data set. How can >>>>> this be done? >>>>> >>>>> For example consider data set A (month 1) and data set B (month 2) >>>>> from a 2 month sample. I would like to train on month 1 and then benchmark >>>>> on month 2. >>>>> >>>>> I use ms_var to train on data set A. It gives me estimated parameters >>>>> and filtered regime probabilities. That works well. How can I use the >>>>> trained parameters to then estimate on month 2 data? >>>>> >>>>> I'm aware of the ms_forecast function, but it seems to only forecast >>>>> using the results from an estimator like ms_var(). The forecasting will >>>>> then only be done on the same data as was used for estimating. I want to >>>>> use the trained parameters to produce estimates for a different data set. >>>>> >>>>> Thanks in advance. I really appreciate being able to use this software. >>>>> >>>>> -Brian >>>>> >>>>> -- >>>>> Brian Bouterse >>>>> >>>>> _______________________________________________ >>>>> users mailing list >>>>> users at lists.scilab.org >>>>> http://lists.scilab.org/mailman/listinfo/users >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> users mailing list >>>> users at lists.scilab.org >>>> http://lists.scilab.org/mailman/listinfo/users >>>> >>>> >>> >>> >>> -- >>> Brian Bouterse >>> >>> _______________________________________________ >>> users mailing list >>> users at lists.scilab.org >>> http://lists.scilab.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> users mailing list >> users at lists.scilab.org >> http://lists.scilab.org/mailman/listinfo/users >> >> > > > -- > Brian Bouterse > > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: run_ms_var.sci Type: application/octet-stream Size: 3498 bytes Desc: not available URL:
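A note on the regime-selection pseudocode discussed in the thread above: the "pick the most likely regime at each date" step needs nothing beyond core Scilab, assuming PR_STT is the filtered-probability matrix returned by run_ms_var, with one row per date and one column per regime (that orientation is an assumption -- check it against your own GROCER output before relying on it):

// row-wise maximum of the filtered probabilities and the column index of the regime reaching it
[p_max, regime] = max(PR_STT, "c");
// regime(t) is then the most likely regime at date t; the regime-specific parameters
// would still have to be read from the ms_var results tlist (field 'coeff') to build a
// one-step-ahead forecast, which is essentially what ms_forecast does from a results tlist.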
From hmmorae at gmail.com Fri Feb 20 15:57:36 2015 From: hmmorae at gmail.com (Hector Mora) Date: Fri, 20 Feb 2015 09:57:36 -0500 Subject: [Scilab-users] linux command line In-Reply-To: <1831192031.366541834.1423673335776.JavaMail.root@zimbra75-e12.priv.proxad.net> References: <1831192031.366541834.1423673335776.JavaMail.root@zimbra75-e12.priv.proxad.net> Message-ID:

Thanks a lot. It works.

2015-02-11 11:48 GMT-05:00 : > Hello, > > >I would like to give, in Scilab, a linux command line. It's possible? > > > >For example mv file1.txt file2.txt > > yes, you may use one of the unix..() or host() functions: > > http://help.scilab.org/docs/5.5.1/en_US/section_7182261dbbb2bb2293bb9166ba5f1fb3.html > > Regards > Samuel > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL:
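A minimal sketch of the unix*()/host() answer quoted above (the shell syntax is for Linux/macOS and the file names are only placeholders):

stat = host("mv file1.txt file2.txt");   // hands the command to the system shell; stat is 0 if no error occurred
out  = unix_g("ls *.txt");               // unix_g() also captures the command's standard output as a column of strings
ok   = movefile("file1.txt", "file2.txt"); // for this particular task, Scilab also provides a portable built-in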
From huubvanniekerk at yahoo.com Mon Feb 23 20:34:46 2015 From: huubvanniekerk at yahoo.com (hvn) Date: Mon, 23 Feb 2015 19:34:46 +0000 (UTC) Subject: [Scilab-users] plotting 2 robots in same graphic Message-ID:

Hi, I'm using v 5.5.1 with the Robotics Toolbox and want to display 2 manipulators in the same graphic. Following the Help and the Toolbox demo, I write this code:

//define links
L1 = Link('d',0,'a',1,'alpha',%pi/2);
L2 = Link('d',0,'a',1,'alpha',%pi/4);
L3 = Link('d',0,'a',1,'alpha',%pi/3);
L4 = Link('d',0,'a',1,'alpha',%pi/5);
// create lists
La = list(L1, L2);
Lb = list(L3, L4);
//couple and create robots
bot1 = SerialLink(La, 'name', 'bot1');
//bot2 = SerialLink(Lb, 'name', 'bot2','base',transl(-1, 1, 0));
//define end effector positions
pos1 = [0.1 0.2];
pos2 = [0.2 0.8];
// draw robots
plot_robot(bot1, pos1);
bot2 = SerialLink(Lb, 'name', 'bot2','base',transl(-1, 1, 0));
plot_robot(bot2, pos2)

ending up with 1 manipulator in 1 graphic. What do I do wrong ? Thanks

From tim at wescottdesign.com Mon Feb 23 21:06:13 2015 From: tim at wescottdesign.com (Tim Wescott) Date: Mon, 23 Feb 2015 12:06:13 -0800 Subject: [Scilab-users] plotting 2 robots in same graphic In-Reply-To: References: Message-ID: <1424721973.2129.168.camel@servo>

On Mon, 2015-02-23 at 19:34 +0000, hvn wrote: > Hi, > > I'm using v 5.5.1 with the Robotics Toolbox and want to display 2 > manipulators in the same graphic. Following the Help and the Toolbox > demo, I write this code: > > //define links > L1 = Link('d',0,'a',1,'alpha',%pi/2); > L2 = Link('d',0,'a',1,'alpha',%pi/4); > L3 = Link('d',0,'a',1,'alpha',%pi/3); > L4 = Link('d',0,'a',1,'alpha',%pi/5); > > // create lists > La = list(L1, L2); > Lb = list(L3, L4); > > //couple and create robots > bot1 = SerialLink(La, 'name', 'bot1'); > //bot2 = SerialLink(Lb, 'name', 'bot2','base',transl(-1, 1, 0)); > > //define end effector positions > pos1 = [0.1 0.2]; > pos2 = [0.2 0.8]; > > // draw robots > plot_robot(bot1, pos1); > bot2 = SerialLink(Lb, 'name', 'bot2','base',transl(-1, 1, 0)); > plot_robot(bot2, pos2) > > ending up with 1 manipulator in 1 graphic. What do I do wrong ?

I have no direct experience with that toolbox, but if you're ending up with a drawing of only the second bot's manipulator each time, "plot_robot" probably clears the figure before plotting. A quick tour of the plot_robot code may tell you if that's happening. It's probably in a file called plot_robot.sci, if the authors of the toolbox have adhered to the naming conventions.

If there's a way of plucking the contents out of one graphic and inserting them into another, that may be a work-around. You'd have two plots, but at least one would have what you want. I don't know if that is possible -- but I'd ask the question here and see what response I got.

-- Tim Wescott www.wescottdesign.com Control & Communications systems, circuit & software design. Phone: 503.631.7815 Cell: 503.349.8432

From tim at wescottdesign.com Tue Feb 24 22:38:40 2015 From: tim at wescottdesign.com (Tim Wescott) Date: Tue, 24 Feb 2015 13:38:40 -0800 Subject: [Scilab-users] Large variables and execution speeds Message-ID: <1424813920.2129.194.camel@servo>

I have an algorithm that I'm working on that involves having large data sets, which I'm currently representing as tlists. Due to the constraints of the algorithm, I'm doing many calls that are more or less of the form:

my_tlist = some_function(my_tlist);

The intent is to get the same effect that I would get if I were in C or C++, and wrote:

some_function(& my_structure);

or

my_class.some_function();

It appears, from the significant loss of execution speed when I do this, that Scilab is copying the results of the function into the "my_tlist" variable byte by byte.

At this writing, the only way that I can see to fix this is to invoke the function as:

some_function("my_tlist");

and then wherever I modify data have to use an exec function, i.e., replace

local_tlist.some_field = stuff;

with

exec(msprintf("%s = stuff", local_tlist_name));

This seems clunky in the extreme.

Is there another way to do something like this that doesn't force Scilab to copy large chunks of data needlessly, but allows me to operate on multiple copies of similar tlists?

Thanks.

-- Tim Wescott www.wescottdesign.com Control & Communications systems, circuit & software design.
Phone: 503.631.7815 Cell: 503.349.8432

From clement.david at scilab-enterprises.com Wed Feb 25 09:16:31 2015 From: clement.david at scilab-enterprises.com (=?ISO-8859-1?Q?Cl=E9ment?= David) Date: Wed, 25 Feb 2015 09:16:31 +0100 Subject: [Scilab-users] Large variables and execution speeds In-Reply-To: <1424813920.2129.194.camel@servo> References: <1424813920.2129.194.camel@servo> Message-ID: <1424852191.2265.9.camel@scilab-enterprises.com>

Hello Tim,

Yes, the copies are there as Scilab does not allow passing data by reference, only by copy.

To avoid extra copies when using a tlist, you have to avoid resizing the data, which is really costly. When writing:

local_tlist.some_field = stuff;

There is no performance penalty if 'some_field' and 'stuff' have the same datatype and size. Scilab will be able to simply reuse the allocated space.

However, using a tlist as both rhs and lhs forces a copy which is not needed; using named arguments lets the interpreter avoid some copies.

For a complete bench, see the attached file.

On Tuesday 24 February 2015 at 13:38 -0800, Tim Wescott wrote: > I have an algorithm that I'm working on that involves having large data > sets, which I'm currently representing as tlists. Due to the > constraints of the algorithm, I'm doing many calls that are more or less > of the form: > > my_tlist = some_function(my_tlist); > > The intent is to get the same effect that I would get if I were in C or > C++, and wrote: > > some_function(& my_structure); > > or > > my_class.some_function(); > > It appears, from the significant loss of execution speed when I do this, > that Scilab is copying the results of the function into the "my_tlist" > variable byte by byte. > > At this writing, the only way that I can see to fix this is to invoke > the function as: > > some_function("my_tlist"); > > and then wherever I modify data have use an exec function, i.e., replace > > local_tlist.some_field = stuff; > > with > > exec(msprintf("%s = stuff", local_tlist_name)); > > This seems clunky in the extreme. > > Is there another way to do something like this that doesn't force Scilab > to copy large chunks of data needlessly, but allows me to operate on > multiple copies of similar tlists? > > Thanks. > -- Clément

-------------- next part -------------- A non-text attachment was scrubbed... Name: reduced.sce Type: application/x-scilab-sce Size: 768 bytes Desc: not available URL:
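The attached benchmark (reduced.sce) was scrubbed by the list archiver; the following is only a rough, stand-alone sketch of the behaviour described in the reply above -- the tlist type and field names are invented and the timings are indicative only, they depend on the machine and Scilab version:

N = 1e5;
big = tlist(["mydata", "buf"], zeros(N, 1));

// 1) Updating a field with a value of the same type and size: the allocated
//    space can be reused, so this stays cheap.
tic();
for k = 1:50
    big.buf = rand(N, 1);
end
t_inplace = toc();

// 2) Growing a field forces a reallocation on every pass: costly.
grow = tlist(["mydata", "buf"], []);
tic();
for k = 1:50
    grow.buf = [grow.buf; rand(1000, 1)];
end
t_resize = toc();

// 3) The my_tlist = some_function(my_tlist) pattern copies the whole
//    structure on the way out of the function.
function s = touch(s)
    s.buf(1) = 0;
endfunction
tic();
for k = 1:50
    big = touch(big);
end
t_roundtrip = toc();

mprintf("in-place: %.3f s, resize: %.3f s, round trip: %.3f s\n", t_inplace, t_resize, t_roundtrip);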
From js.stoezel at gmail.com Wed Feb 25 15:33:48 2015 From: js.stoezel at gmail.com (Jean-Sebastien Stoezel) Date: Wed, 25 Feb 2015 07:33:48 -0700 (MST) Subject: [Scilab-users] Save hypermatrix in matlab format In-Reply-To: <1405598368070-4030930.post@n3.nabble.com> References: <1405431883405-4030916.post@n3.nabble.com> <53C6313A.8020400@laas.fr> <1405518015497-4030925.post@n3.nabble.com> <53C78AC1.8060904@laas.fr> <1405598368070-4030930.post@n3.nabble.com> Message-ID: <1424874828155-4031745.post@n3.nabble.com>

Hi: I'd like to revive this thread as I am having the same problem as originally reported. I need to modify a mat file that was potentially generated by Matlab. When I load it using loadmatfile, several variables and objects are created, including hypermatrices. I get the same error as originally reported when I try to save these objects (including the hypermatrices) back into the mat file. Is there a way to load a mat file without generating hypermatrices? Is there a way to convert hypermatrices into objects that can be saved into a mat file?

Regards, JS

-- View this message in context: http://mailinglists.scilab.org/Save-hypermatrix-in-matlab-format-tp4030916p4031745.html Sent from the Scilab users - Mailing Lists Archives mailing list archive at Nabble.com.

From tim at wescottdesign.com Wed Feb 25 20:20:37 2015 From: tim at wescottdesign.com (Tim Wescott) Date: Wed, 25 Feb 2015 11:20:37 -0800 Subject: [Scilab-users] Large variables and execution speeds In-Reply-To: <1424852191.2265.9.camel@scilab-enterprises.com> References: <1424813920.2129.194.camel@servo> <1424852191.2265.9.camel@scilab-enterprises.com> Message-ID: <1424892037.2129.207.camel@servo>

I'm specifically hoping to avoid just exchanging pieces of the top-level object, because it's pretty abstract -- I need to count on it doing something generally similar, but possibly entirely different in the details.

I realized, however, that I'm storing some pretty big arrays (several arrays of 1,000,000 words each is big, yes?) in there where I only need to store buffers -- I'm going to rearrange my code, which I think and hope should dramatically speed things up.

On Wed, 2015-02-25 at 09:16 +0100, Clément David wrote: > Hello Tim, > > Yes, the copies are there as Scilab does not allow to pass data by > reference but only by copy. > > To avoid extra copy when using tlist, you have to avoid data resize > which is really costly. When writing : > > local_tlist.some_field = stuff; > > There is no performance penalty if 'some_field' and 'stuff' have the > same datatype and size. Scilab will be able to simply reuse the > allocated space. > > However using a tlist rhs/lhs force a copy which is not needed, using > named arguments let the interpreter avoid some copies. > > For a complete bench, see the attached file. > > On Tuesday 24 February 2015 at 13:38 -0800, Tim Wescott wrote: > > I have an algorithm that I'm working on that involves having large data > > sets, which I'm currently representing as tlists. Due to the > > constraints of the algorithm, I'm doing many calls that are more or less > > of the form: > > > > my_tlist = some_function(my_tlist); > > > > The intent is to get the same effect that I would get if I were in C or > > C++, and wrote: > > > > some_function(& my_structure); > > > > or > > > > my_class.some_function(); > > > > It appears, from the significant loss of execution speed when I do this, > > that Scilab is copying the results of the function into the "my_tlist" > > variable byte by byte. > > > > At this writing, the only way that I can see to fix this is to invoke > > the function as: > > > > some_function("my_tlist"); > > > > and then wherever I modify data have use an exec function, i.e., replace > > > > local_tlist.some_field = stuff; > > > > with > > > > exec(msprintf("%s = stuff", local_tlist_name)); > > > > This seems clunky in the extreme. > > > > Is there another way to do something like this that doesn't force Scilab > > to copy large chunks of data needlessly, but allows me to operate on > > multiple copies of similar tlists? > > > > Thanks. > > > > -- > Clément

-- Tim Wescott www.wescottdesign.com Control & Communications systems, circuit & software design. Phone: 503.631.7815 Cell: 503.349.8432

From joodlink at hotmail.com Fri Feb 27 07:53:34 2015 From: joodlink at hotmail.com (Joo Cat) Date: Fri, 27 Feb 2015 14:53:34 +0800 Subject: [Scilab-users] Handling a very long Java method name Message-ID:

I have a Java class that has a very long method name. When using the method, Scilab gives a warning and truncates the method name to fit 24 characters.
As a result, I will get an error saying that it is an invalid field. Is there a way to overcome this? Thanks. --> A = MyClass.new() --> B = A.thisIsAVeryVeryLongMethodName(5) Warning : The identifier : thisIsAVeryVeryLongMethodName has been truncated to: thisIsAVeryVeryLongMetho. !--error 999 %_EObj_e: An error occurred: Exception when calling Java method : Invalid field ThisIsAVeryVeryLongMetho at org.scilab.modules.external_objects_java.ScilabJavaObject.extract(Unknown Source) Invalid field ThisIsAVeryVeryLongMetho at org.scilab.modules.external_objects_java.ScilabJavaObject.extract(Unknown Source) -------------- next part -------------- An HTML attachment was scrubbed... URL: From joodlink at hotmail.com Fri Feb 27 08:05:01 2015 From: joodlink at hotmail.com (Joo Cat) Date: Fri, 27 Feb 2015 15:05:01 +0800 Subject: [Scilab-users] Handling a very long Java method name In-Reply-To: References: Message-ID: Found a workaround. Using jinvoke allows you to use the method! A = MyClass.new() B = jinvoke(A,"thisIsAVeryVeryLongMethodName",5) From: joodlink at hotmail.com To: users at lists.scilab.org Date: Fri, 27 Feb 2015 14:53:34 +0800 Subject: [Scilab-users] Handling a very long Java method name I have a Java class that has a very long method name. When using the method, Scilab will give a warning and truncates the method to fit 24 characters. As a result, I will get an error saying that it is an invalid field. Is there a way to overcome this? Thanks. --> A = MyClass.new() --> B = A.thisIsAVeryVeryLongMethodName(5) Warning : The identifier : thisIsAVeryVeryLongMethodName has been truncated to: thisIsAVeryVeryLongMetho. !--error 999 %_EObj_e: An error occurred: Exception when calling Java method : Invalid field ThisIsAVeryVeryLongMetho at org.scilab.modules.external_objects_java.ScilabJavaObject.extract(Unknown Source) Invalid field ThisIsAVeryVeryLongMetho at org.scilab.modules.external_objects_java.ScilabJavaObject.extract(Unknown Source) _______________________________________________ users mailing list users at lists.scilab.org http://lists.scilab.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent.couvert at scilab-enterprises.com Fri Feb 27 09:25:42 2015 From: vincent.couvert at scilab-enterprises.com (Vincent COUVERT) Date: Fri, 27 Feb 2015 09:25:42 +0100 Subject: [Scilab-users] Colour information for Scilab plots In-Reply-To: <54855EF3.2080704@laas.fr> References: <1417797033651-4031516.post@n3.nabble.com> <54855EF3.2080704@laas.fr> Message-ID: <54F02A06.7020502@scilab-enterprises.com> Hello, Bug #12788 (http://bugzilla.scilab.org/show_bug.cgi?id=12788) has been fixed in Scilab by a JoGL update to version 2.2.4. The fix is available in current nightly-build versions: http://www.scilab.org/en/development/nightly_builds Regards. On 12/08/2014 09:18 AM, Antoine Monmayrant wrote: > On 12/05/2014 05:30 PM, Elweas wrote: >> From which file(s) do Scilab fetch information on how to generate >> colours in >> a Win 7 environment? >> >> When I got my computer running again after its crash, I found that >> Scilab is >> no longer capable of generating the full set of colours in plots >> (blue and >> green are missing or very faint). >> I have removed and re-installed Scilab (completely emptying and >> removing the >> install directory) but that has not helped. Of my programs, it is only >> Scilab that has this problem, at least that I have found so far. >> >> Where on a Windows 7 32-bit system should I check for a damaged file? 
> It seems to me that you have a problem with the driver of your video card. > We had a similar issue here (plots were in shades of red/pink) and it > was due to the video driver. > Here is the bug: http://bugzilla.scilab.org/show_bug.cgi?id=12788 > From what I remember it was not limited to scilab and also affected > some of the applications that use OpenGL. > > Hope it helps, > > Antoine > >> >> >> >> -- >> View this message in context: >> http://mailinglists.scilab.org/Colour-information-for-Scilab-plots-tp4031516.html >> Sent from the Scilab users - Mailing Lists Archives mailing list >> archive at Nabble.com. >> _______________________________________________ >> users mailing list >> users at lists.scilab.org >> http://lists.scilab.org/mailman/listinfo/users >> > > > _______________________________________________ > users mailing list > users at lists.scilab.org > http://lists.scilab.org/mailman/listinfo/users

From and.nardone at gmail.com Fri Feb 27 17:25:21 2015 From: and.nardone at gmail.com (Andrea) Date: Fri, 27 Feb 2015 16:25:21 +0000 (UTC) Subject: [Scilab-users] Scilab NB building & compiling References: <000b01ce8b0d$662bb820$32832860$@carrico@free.fr> <24273.0401609608$1375097538@news.gmane.org> Message-ID:

Paul Carrico writes: > > > Dear all > > After some difficulties, I passed the first step i.e. building scilab > NB: I failed building scilab including GUI because of flexbook issue while I've been using the latest one > > On the next step i.e. the compiling, it failed due to javac issues (see attachment): is there a specific release for the Nightly build release? > > Paul > > _______________________________________________ > users mailing list > users at ... > http://lists.scilab.org/mailman/listinfo/users >

I hope you have solved this by now. In case you haven't, I gave up and just ran the configure script with the option --disable-build-help. That is also mentioned in the documentation: http://wiki.scilab.org/Description%20of%20configure%20options If you find another solution, can you please share? Thanks!

From clemgill at club-internet.fr Sat Feb 28 08:33:13 2015 From: clemgill at club-internet.fr (Clemgill) Date: Sat, 28 Feb 2015 00:33:13 -0700 (MST) Subject: [Scilab-users] Modnum & Xcos In-Reply-To: <1380696260.1703.5.camel@paros> References: <1380273388.2918.2.camel@pf-X58-USB3> <1380616167.5839.4.camel@paros> <1380656911.2865.15.camel@pf-X58-USB3> <1380696260.1703.5.camel@paros> Message-ID: <1425108793933-4031752.post@n3.nabble.com>

Hi all, Any news on the availability of MODNUM converted to XCOS? I am interested in the PLL palette. Thx much, Gilles.

-- View this message in context: http://mailinglists.scilab.org/Scilab-users-Modnum-Xcos-tp4027491p4031752.html Sent from the Scilab users - Mailing Lists Archives mailing list archive at Nabble.com.