I have a similar issue with large wav files.

Is there any way to read in a large file, for example in buffers, process the buffer(s) as if the data were continuous, and write out the results so that stacksize is not exceeded? For example, suppose I want to simply filter a large (>100 MB) file with a complex FIR filter 401 points long and save the results in another file.

One issue that I don't understand yet deals with data format. My convolution routine wants the filter kernel as a row vector and the data as a column vector. However, wav files seem to read in as row vectors, after which I transpose them. A continuous read-filter-write process would render my present method unworkable. Can a loadwave incoming file be transposed as it is read?

A related issue: so far all my tests have been done with known-length files, using length(file). But if we don't bring the entire file in before we begin filtering, can we know the length?
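To make the question concrete, below is a rough, untested sketch of the kind of loop I have in mind. I am assuming that wavread() accepts a 'size' query and an [n1 n2] sample range (which would also answer the length question), and I have put Scilab's filter() in place of my own convolution routine, since carrying its state argument zi from one buffer to the next looks like the way to make separate buffers behave as one continuous stream. The file names and kernel are made up. Please correct me if any of this is off:

// Untested sketch: block-wise FIR filtering of a large mono wav file
infile = 'big.wav';
B      = ones(1, 401) / 401;          // placeholder for my 401-point FIR kernel
blk    = 65536;                       // samples per buffer

sz = wavread(infile, 'size');         // [samples channels], without loading the data?
n  = sz(1);
zi = zeros(1, 400);                   // filter state: length(B)-1 zeros

fdout = mopen('filtered.raw', 'wb');  // raw output file
for n1 = 1:blk:n
    n2 = min(n1 + blk - 1, n);
    x  = wavread(infile, [n1 n2]);    // read one buffer
    x  = x(:);                        // force the column vector my routine wants
    [y, zi] = filter(B, 1, x, zi);    // zi makes the buffers continuous
    mput(y, 'd', fdout);              // append the results as doubles
end
mclose(fdout);

Writing to a raw file avoids keeping the results in memory, but it does leave the question of producing a proper wav header at the end.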
<div bgcolor="#ffffff" text="#000000">
Hello,

----- Original Message -----
From: Hsu (David)
Date: 05/06/2010 08:20:
<blockquote cite="mid:95ECD1500CCA774592DB2A1705797E6B0117CB9E8E84@HECTOR.network.wisc.edu" type="cite">
<style title="owaParaStyle"><!--P {
MARGIN-TOP: 0px; MARGIN-BOTTOM: 0px
}
--></style>
<div dir="ltr"><font color="#000000" face="Tahoma" size="2">I am just
learning SciLab. I need to read enormous EEG files
(electroencephalograms), for example, 20 channels sampled at 32,000 Hz
for days on end. This data may be saved as a matrix with 20 columns
but a huge number of rows, so big that I run up against size limits if
I try to load the whole thing as a matrix. This data is usually in
binary format. I tried using stacksize('max') to maximize these size
limits but am still running into size limits.</font></div>
A simple calculation:

20 channels x 32000 measurements/s/channel x 1 byte/measurement x 3600 x 24 s/day ~ 55 Gbytes/day
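That is, at the Scilab prompt:

--> 20 * 32000 * 3600 * 24 / 1e9    // Gbytes/day at 1 byte per measurement
 ans  =
   55.296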
If you want to load the whole data as an int8() or uint8() matrix (the minimal data format), you need at least ~60 Gbytes of RAM. Do you have that much (assuming Scilab or any other software could handle it)? It looks unrealistic to load and handle these data as a whole in one matrix.
You will likely need to use the more detailed binary commands available after mopen(): mseek(), mtell(), mget()... to read the data piece by piece.
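For instance, something along the following lines. This is only an untested sketch: the file name, the sample type (16-bit integers) and the channel interleaving are guesses that you will have to adapt to the actual format of your EEG files:

// Untested sketch: processing a huge binary EEG file chunk by chunk
nch = 20;
blk = 32000;                         // one second of data per chunk
fd  = mopen('eeg.bin', 'rb');

mseek(0, fd, 'end');                 // jump to the end to get the file size
nbytes = mtell(fd);
nsamp  = nbytes / (2 * nch);         // int16 = 2 bytes per measurement
mseek(0, fd, 'set');                 // and back to the beginning

for k = 1:ceil(nsamp / blk)
    n = min(blk, nsamp - (k-1)*blk); // the last chunk may be shorter
    x = mget(n * nch, 's', fd);      // 's' reads 16-bit signed integers
    x = matrix(x, nch, n)';          // n rows x 20 columns, as you described
    // ... process x here, keeping only the results ...
end
mclose(fd);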
Samuel
<span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-size: medium; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><div>Gary Nelson</div><div><a href="mailto:gnelson@quantasonics.com">gnelson@quantasonics.com</a></div><div><br></div></span><br class="Apple-interchange-newline">