<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#330000">
Well, did you generate the Excel files from another process? In that
case, you may directly export your data as text.<br>
<br>
About read_csv being very slow: you may try csv_read instead, which
can be installed from ATOMS:
<a class="moz-txt-link-freetext" href="http://atoms.scilab.org/toolboxes/csv_readwrite">http://atoms.scilab.org/toolboxes/csv_readwrite</a><br>
It's supposed to be much faster.<br>
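For instance, a minimal sketch (I'm assuming the toolbox exposes the
reader as csvRead, as in its documentation; adjust the name if your
version calls it csv_read):<br>
<pre>
// install the csv_readwrite toolbox once from ATOMS, then restart Scilab
atomsInstall("csv_readwrite");

// read a numeric comma-separated file into a matrix;
// this reader is much faster than the builtin read_csv
M = csvRead("file.csv");
</pre>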
<br>
Also, if you do generate the Excel files yourself, consider
exporting numerical values only. If well formatted, numerical text
files may be read very quickly with fscanfMat.<br>
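For example (just a sketch, assuming a plain whitespace-separated
numeric file named data.txt):<br>
<pre>
// read a well-formatted numeric text file straight into a matrix
M = fscanfMat("data.txt");
</pre>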
<br>
Finally, if the data acquisition takes a long time and you are going
to test several Scilab programs on it, I suggest you import the data
into Scilab once, then save it from Scilab in a Scilab-friendly
format (with save("mydata.sav",data1,data2)), and then build your
test programs from a load("mydata.sav"). This way, you won't have to
re-read the whole data set each time.<br>
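Something like this (Scilab 5 syntax; the file and variable names are
just placeholders):<br>
<pre>
// one-time import: read the text exports, however slowly
data1 = read_csv("file1.txt", ascii(9));  // ascii(9) = tab separator
data2 = read_csv("file2.txt", ascii(9));

// save everything in Scilab's own binary format
save("mydata.sav", data1, data2);

// then every test script starts from the fast binary load,
// which restores data1 and data2 by name
clear;
load("mydata.sav");
</pre>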
Oh, one last thing: string management was much faster in Scilab 4
(which does not support UTF-8), so this initial "acquire once and
export to *.sav" step could be done with Scilab 4 (note that the
installer is less than 20 MB and the Scilab 4 installation takes very
little disk space).<br>
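And about the "ad hoc function" for the date strings I mention below:
a minimal sketch, assuming the MM/DD/YYYY HH:MM:SS layout from your
sample (parse_stamp is just a name I made up):<br>
<pre>
// convert a string like "11/21/2010 19:43:30" to a Scilab date number
function d = parse_stamp(s)
    v = msscanf(s, "%d/%d/%d %d:%d:%d");  // [month day year hour min sec]
    d = datenum(v(3), v(1), v(2), v(4), v(5), v(6));
endfunction
</pre>
Once the stamps are numeric, cropping to a specific time window is a
simple comparison on that column.<br>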
<br>
<br>
On 17/11/2011 18:10, Petter Wingren wrote:
<blockquote
cite="mid:CAOFcBYwF6r=SJnYVQzSvViQ7gC6sWZZ=g7Lh3p1iFd2vCQWdaw@mail.gmail.com"
type="cite">
<pre wrap="">Was hoping not to have to do it that way, as I have a huge amount of
files and every little step I can avoid saves time.
Also read_csv seems to be extremely slow..
I tried reading a 30000 cells sheet, which took 460 seconds
Writing it as .sci and loading it again (after clearing) took 0.2 seconds.
Guess I could make a script to take care of that and reformat all
files during the night.
On Thu, Nov 17, 2011 at 2:30 PM, Adrien Vogt-Schilb
<a class="moz-txt-link-rfc2396E" href="mailto:vogt@centre-cired.fr"><vogt@centre-cired.fr></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">hi
i'd avoid using xls_open, which i found not very reliable with large files.
i'd try to save the excel sheet as a tsv file (text separated by tabs), then
read it from scilab with read_csv, then sparse myself the date strings with
an ad hoc function
let me know if you find dificulties
On 17/11/2011 12:41, Petter Wingren wrote:
I am trying to read an Excel file that looks somewhat like this (only
a lot bigger):
Download Time 19:54:08
Download Date 11-21-2010
--------------------------------------------------
11/21/2010 19:43:30 0
11/21/2010 19:43:40 0
11/21/2010 19:43:50 0
11/21/2010 19:44:00 0
11/21/2010 19:44:10 0
11/21/2010 19:44:20 0
11/21/2010 19:44:30 518
11/21/2010 19:44:40 1139
11/21/2010 19:44:50 1035
11/21/2010 19:45:00 501
11/21/2010 19:45:10 449
11/21/2010 19:45:20 901
11/21/2010 19:45:30 545
11/21/2010 19:45:40 113
11/21/2010 19:45:50 1
11/21/2010 19:46:00 37
11/21/2010 19:46:10 17
11/21/2010 19:46:20 71
After I've read it I want to crop it according to a specific time and
keep the values in the second column.
However, when I do
[fd,SST,Sheetnames,Sheetpos] = xls_open('file.xls')
[Value,TextInd] = xls_read(fd,Sheetpos)
Value only contains *** in the first column, and TextInd is mostly zeroes.
Any suggestions on how to get those timestamps?
--
Adrien Vogt-Schilb (Cired)
Tel: (+33) 1 43 94 73 77
</pre>
</blockquote>
</blockquote>
<br>
<br>
<div class="moz-signature">-- <br>
Adrien Vogt-Schilb (Cired) <br>
Tel: (+33) 1 43 94 73 77</div>
</body>
</html>