Question

I have terrain data containing elevations, as shown below.

Each file consists of 1,440,000 rows:

.
.
.
183
192
127
.
.
.

How can I access specific rows directly from the file without wastefully loading the whole file into memory (on Android)?

Solution

I believe your best option is to convert your text file to a SQLite database.
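A minimal sketch of what that could look like, assuming a single table keyed by row index (the table and column names here are made up for illustration):

```sql
-- Hypothetical schema: one row per elevation sample, keyed by its row index.
CREATE TABLE elevation (
    row_index INTEGER PRIMARY KEY,  -- line number in the original file
    value     INTEGER NOT NULL      -- elevation value from that line
);

-- Fetch a specific row directly; the primary key index makes this a
-- logarithmic lookup instead of a scan through 1,440,000 rows.
SELECT value FROM elevation WHERE row_index = 123456;
```

You would do the text-to-database conversion once (e.g. at install or first launch), after which every lookup is a cheap indexed query.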

OTHER TIPS

You would probably want to use a BufferedInputStream: http://developer.android.com/reference/java/io/BufferedInputStream.html

I think you can use java.nio.channels.FileChannel.read(ByteBuffer dst, long position).

Here position is the byte offset in the file at which to start reading, and the number of bytes read is bounded by the buffer's remaining capacity.
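A sketch of such a positional read (the readAt helper, file name, and offsets are made up for illustration):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PositionalRead {
    // Reads `length` bytes starting at byte offset `position`,
    // without reading anything else from the file.
    static byte[] readAt(Path file, long position, int length) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(length);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (buf.hasRemaining()) {
                // Positional read: does not move the channel's own position.
                int n = ch.read(buf, position + buf.position());
                if (n < 0) break; // hit end of file
            }
        }
        byte[] out = new byte[buf.position()];
        buf.flip();
        buf.get(out);
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("elev", ".txt");
        Files.write(tmp, "183\n192\n127\n".getBytes(StandardCharsets.UTF_8));
        // Bytes 4..6 hold the second line ("192").
        System.out.println(new String(readAt(tmp, 4, 3), StandardCharsets.UTF_8)); // prints 192
        Files.delete(tmp);
    }
}
```

Note that this only helps directly if you already know the byte offset of the row you want, which leads to the fixed-length-record ideas below.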

If you can change the file to a binary format, you can seek directly to the position you want and read the value you need. If not, you will probably have to read it line by line and return the line you want (assuming you cannot calculate the byte position, since lines can be of different lengths).
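A sketch of the binary-format idea, assuming each elevation is stored as a 4-byte big-endian int, so sample i lives at byte offset i * 4 (the file name and helper are hypothetical):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class BinaryElevations {
    // Assumes one 4-byte big-endian int per sample, written in row order,
    // so the sample at `index` lives at byte offset index * 4.
    static int readSample(String path, long index) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(index * 4L); // jump straight to the record
            return raf.readInt();
        }
    }

    public static void main(String[] args) throws IOException {
        String path = "elev.bin"; // hypothetical converted file
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            for (int v : new int[] {183, 192, 127}) raf.writeInt(v);
        }
        System.out.println(readSample(path, 1)); // prints 192
    }
}
```

With 1,440,000 rows of 4 bytes each the file is about 5.5 MB, and any row is reachable with a single seek plus a 4-byte read.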

After playing around a bit too long, I got this (it's untested, though):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.HashSet;

File f = new File("yourfile.txt");
HashMap<Integer, String> result = readLines(f, 1, 5, 255);
String line5 = result.get(5); // or null if the file had no line 5

private static HashMap<Integer, String> readLines(File f, int... lines) {
    HashMap<Integer, String> result = new HashMap<Integer, String>();
    HashSet<Integer> linesSet = new HashSet<Integer>();
    for (int line : lines) {
        linesSet.add(Integer.valueOf(line));
    }
    BufferedReader br = null;
    try {
        br = new BufferedReader(new InputStreamReader(new FileInputStream(f), "UTF-8"));
        int line = 1; // line numbers start at 1
        String currentLine;
        while ((currentLine = br.readLine()) != null) {
            Integer i = Integer.valueOf(line);
            if (linesSet.contains(i)) {
                result.put(i, currentLine);
                if (result.size() == linesSet.size()) {
                    break; // found every requested line; stop reading early
                }
            }
            line++;
        }
    } catch (IOException e) {
        // file missing, bad encoding, or read error
        // (FileNotFoundException and UnsupportedEncodingException are both IOExceptions)
    } finally {
        if (br != null) {
            try {
                br.close();
            } catch (IOException e) {
                // ignore
            }
        }
    }
    return result;
}

If the records are fixed length, you can calculate the byte position of the desired record and seek directly to it.

If the records are variable length but include sequential identifying information (such as a record number), then for a very large file it might be worthwhile to guess a starting position based on the average record length, seek to a point slightly before it, and read forward to find the desired line (backing up a bit if you are already past it).
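A rough sketch of that guess-and-scan idea, assuming each line starts with its sequential record number followed by a space. All names, parameters, and the simple rescan-from-zero fallback are illustrative; a real implementation might back up in smaller steps instead:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.RandomAccessFile;

public class GuessSeek {
    // Finds the line whose leading field equals `target`, guessing its byte
    // position from `avgLen` (assumed average line length in bytes) and
    // backing off by `margin` bytes to allow for the estimate being short.
    static String find(String path, long target, int avgLen, int margin) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long guess = Math.max(0, target * avgLen - margin);
            if (guess >= raf.length()) guess = 0; // estimate past EOF; scan from start
            raf.seek(guess);
            if (guess > 0) raf.readLine(); // probably landed mid-line; discard it
            boolean restarted = (guess == 0);
            while (true) {
                String line = raf.readLine();
                if (line == null) {
                    if (restarted) return null; // scanned the whole file; not found
                    restarted = true;
                    raf.seek(0); // ran off the end; rescan from the beginning
                    continue;
                }
                long rec = Long.parseLong(line.split(" ", 2)[0]);
                if (rec == target) return line;
                if (rec > target && !restarted) {
                    restarted = true;
                    raf.seek(0); // overshot the estimate; fall back to a full scan
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a small demo file with variable-length, numbered records.
        try (PrintWriter pw = new PrintWriter("guess_demo.txt")) {
            for (int i = 0; i < 1000; i++) pw.println(i + " v");
        }
        System.out.println(find("guess_demo.txt", 500, 6, 64)); // prints 500 v
    }
}
```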

If there is no way to identify a record other than by counting from the beginning, you will have to do that. Ideally you would do so in a way that avoids churning out temporary objects during the scan and then leaving the garbage collector to clean them up.
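A sketch of such an allocation-light scan (the class and method names are made up; it assumes the lines are plain ASCII). Unlike readLine()-based loops, it counts newline bytes directly and only accumulates characters for the one line it actually wants:

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ScanForLine {
    // Returns line `target` (1-based) by streaming bytes and counting
    // newlines; no per-line String objects are created for the lines
    // that are merely skipped over.
    static String lineAt(String path, int target) throws IOException {
        try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(path))) {
            int line = 1;
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = in.read()) != -1) {
                if (b == '\n') {
                    if (line == target) return sb.toString();
                    line++;
                } else if (line == target && b != '\r') {
                    sb.append((char) b); // only the wanted line is accumulated
                }
            }
            // Handle a final line with no trailing newline.
            return (line == target && sb.length() > 0) ? sb.toString() : null;
        }
    }
}
```

For example, lineAt("terrain.txt", 720000) would return the middle row of a 1,440,000-row file while allocating only a single StringBuilder and the returned String.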

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow