I am trying to process the data from an instrument that I sell. The data is saved as CSV files, one file per day. At normal sampling rates, these files are about 30 MB each and contain about 10 samples per second * 60 sec/minute * 60 min/hour * 24 hours per day = 864,000 carriage-return-delimited records. I can get the files copied from the instrument's SD card onto a computer with no problem.
Each record contains 8 comma-delimited fields (let's say "A,B,C,D,E,F,G,H") and I need to convert that into longitudinal arrays, let's say A(864000), B(864000), and so forth.
Originally, I used TextInputStream.ReadLine to read the file line by line, then Split() on each resulting line, appending the split results, field by field, to the longitudinal arrays. This was painfully slow.
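For reference, this is roughly what my original (slow) approach looks like; `file` is a FolderItem for one day-file, and I'm only showing two of the eight arrays:

```xojo
Dim a() As Double, b() As Double ' ... one longitudinal array per field
Dim tis As TextInputStream = TextInputStream.Open(file)
While Not tis.EOF
  ' Read one record and break it into its 8 comma-delimited fields
  Dim fields() As String = Split(tis.ReadLine, ",")
  If fields.Ubound = 7 Then ' skip malformed records
    a.Append(Val(fields(0)))
    b.Append(Val(fields(1)))
    ' ... and similarly for the remaining six fields
  End If
Wend
tis.Close
```

So it's the per-line ReadLine/Split/Append loop, repeated ~864,000 times per file, that I'm trying to speed up.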
Now, I am wondering if I can use MemoryBlocks to improve the speed. I am pretty certain that I can use TextInputStream.ReadAll to put an entire day-file into a MemoryBlock. BUT, the next question is how to separate that into lines (records). Can I use a second MemoryBlock as the result of a split operation that takes the first MemoryBlock as an argument? If so, how do I "configure" that second MemoryBlock so that it behaves as a string array rather than just a string? I realize that "configure" is not a very good term here, but the question remains: how do I get a MemoryBlock to behave as an array of strings, or is that not possible?
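To make the question concrete, this is the first step as I imagine it (assuming a MemoryBlock can be assigned directly from the String that ReadAll returns), and the part I don't know how to express:

```xojo
Dim tis As TextInputStream = TextInputStream.Open(file)
Dim mb As MemoryBlock = tis.ReadAll ' String-to-MemoryBlock assignment
tis.Close
' mb now holds the whole day-file as raw bytes.
' What I can't find is a MemoryBlock equivalent of
'   Dim lines() As String = Split(wholeFile, EndOfLine)
' i.e. some second MemoryBlock that acts like an array of the records.
```

If there is no such thing, I'd be glad to hear what the idiomatic fast path is instead.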
I should add, here, that this is my first attempt ever at using MemoryBlock, so I may not have all of the concepts fully under control!
I might add that a typical use of this device may result in up to 50 day-files, so processing time adds up fast. I also recognize that string arrays may not be the most efficient approach in the long term, and that a database warrants consideration.
Oregon Research Electronics