Question

I'm implementing a cross-tabulation library in Python as a programming exercise for my new job, and I've got an implementation of the requirements that works but is inelegant and redundant. I'd like a better model for it, something that allows a nice, clean movement of data between the base model, stored as tabular data in flat files, and all of the statistical analysis results that might be asked of it.

Right now, I have a progression from a set of tuples for each row in the table, to a histogram counting the frequencies of the tuples of interest, to a serializer that, somewhat clumsily, compiles the output into a set of table cells for display. However, I end up having to go back up to the table or to the histogram more often than I want to, because there's never enough information in place.

So, any ideas?

Edit: Here's an example of some data, and what I want to be able to build from it. Note that "." denotes a bit of 'missing' data that is only conditionally counted.

1   .   1
1   0   3
1   0   3
1   2   3
2   .   1
2   0   .
2   2   2
2   2   4
2   2   .

If I were looking at the correlation between columns 0 and 2 above, this is the table I'd have:

    . 1 2 3 4
1   0 1 0 3 0
2   2 1 1 0 1

In addition, I'd want to be able to calculate ratios such as frequency/total, frequency/subtotal, &c.


Solution

You could use an in-memory SQLite database as the data structure, and define the desired operations as SQL queries.

import sqlite3

c = sqlite3.connect(':memory:')  # in-memory database; nothing touches disk
c.execute('CREATE TABLE data (a, b, c)')

# None plays the part of the '.' missing-data marker
c.executemany('INSERT INTO data VALUES (?, ?, ?)', [
    (1, None,    1),
    (1,    0,    3),
    (1,    0,    3),
    (1,    2,    3),
    (2, None,    1),
    (2,    0, None),
    (2,    2,    2),
    (2,    2,    4),
    (2,    2, None),
])

# queries
# ...
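
For example, the elided queries might look something like this (a sketch of my own, not the original answer's code; SQLite's GROUP BY treats NULLs as equal, so the '.' cells group together on their own):

# cross-tabulate column a against column c
for a, c_val, freq in c.execute(
        'SELECT a, c, COUNT(*) FROM data GROUP BY a, c ORDER BY a, c'):
    print(a, c_val, freq)

# frequency / row subtotal, via a correlated subquery
for row in c.execute('''
    SELECT a, c,
           COUNT(*) * 1.0 / (SELECT COUNT(*) FROM data AS d WHERE d.a = data.a)
    FROM data
    GROUP BY a, c'''):
    print(row)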

OTHER TIPS

S W has posted a good basic recipe for this on activestate.com.

The essence seems to be...

  1. Define xsort=[] and ysort=[] as lists of your axis values. Populate them by iterating through your data, or in some other way.
  2. Define rs={} as a dict of dicts holding your tabulated data, built by iterating through your data and incrementing rs[yvalue][xvalue], creating missing keys as needed.

Then, for example, the total for row y would be sum(rs[y].get(x, 0) for x in xsort); .get covers cells that were never created. A fuller version is sketched below.
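
Here is a minimal sketch of that recipe (my own code, not S W's), using the question's sample data with None standing in for the '.' marker:

# cross-tabulate column 0 against column 2 of the sample data
data = [
    (1, None, 1), (1, 0, 3), (1, 0, 3), (1, 2, 3),
    (2, None, 1), (2, 0, None), (2, 2, 2), (2, 2, 4), (2, 2, None),
]

rs = {}
xvals, yvals = set(), set()
for row in data:
    y, x = row[0], row[2]
    xvals.add(x)
    yvals.add(y)
    counts = rs.setdefault(y, {})      # create missing keys as needed
    counts[x] = counts.get(x, 0) + 1

# sort with None (the '.' marker) first; None and int don't compare directly
key = lambda v: (v is not None, v)
xsort, ysort = sorted(xvals, key=key), sorted(yvals, key=key)

for y in ysort:
    print(y, [rs[y].get(x, 0) for x in xsort],
          'row total:', sum(rs[y].get(x, 0) for x in xsort))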

Since this is an early Python programming exercise, they probably want you to see which built-in mechanisms are appropriate for an initial version of the problem. The dictionary seems a good candidate. The first column value from your tab-separated file can be the key into a dictionary; the entry found by that key can itself be a dictionary, keyed by the second column value. Each entry of the subdictionary is a count, initialized to 1 when a pair is first encountered and incremented thereafter.
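
One way to spell that with the standard library (an assumption on my part, not the poster's code) is collections.defaultdict with Counter, which makes the missing-key bookkeeping disappear; the filename here is a placeholder:

from collections import defaultdict, Counter

# counts[first_column][third_column] -> frequency; missing keys are
# created on first use, and the '.' marker simply becomes the key '.'
counts = defaultdict(Counter)
with open('data.tsv') as f:            # placeholder filename
    for line in f:
        fields = line.split()
        counts[fields[0]][fields[2]] += 1

grand_total = sum(sum(inner.values()) for inner in counts.values())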

Why not store it using HTML tables? It might not be the best option, but you could then very easily view the results in a browser.
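
A throwaway sketch of that idea (reusing the rs/xsort/ysort names from the recipe above):

def to_html(rs, xsort, ysort):
    # render the counts dict as a plain HTML table
    header = ''.join('<th>{}</th>'.format(x) for x in xsort)
    body = ''.join(
        '<tr><th>{}</th>{}</tr>'.format(
            y, ''.join('<td>{}</td>'.format(rs[y].get(x, 0)) for x in xsort))
        for y in ysort)
    return '<table><tr><th></th>{}</tr>{}</table>'.format(header, body)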

Edit:

I just re-read the question and you're asking for a data model, not a storage model. To answer that question...

It all depends on how you're going to report on the data. For example, if you're going to be doing a lot of pivoting or aggregation, it might make more sense to store it in column-major order; that way you can just sum a column to get counts.
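
A small sketch of what column-major storage buys you (same sample data, None for '.'):

rows = [(1, None, 1), (1, 0, 3), (1, 0, 3), (1, 2, 3),
        (2, None, 1), (2, 0, None), (2, 2, 2), (2, 2, 4), (2, 2, None)]
columns = list(zip(*rows))             # transpose to column-major

# per-column aggregates are now single passes over one list
non_missing_in_b = sum(v is not None for v in columns[1])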

It'll help a lot if you explain what kind of information you're trying to extract.
