Question

This is very similar to this question: Finding minimum, maximum and average values for nested lists?

The important difference, and the root of this question, is that I would like to find the min, max, and average within a list (nested within a list) for each unique value in the name column (each person's name).

For example, each line basically looks like this (with fictitious names):

epochtime, name, score, level, extralives 

e.g.

    1234455, suzy, 120, 3, 0
    1234457, billy, 123, 1, 2
    1234459, billy, 124, 2, 4
    1234459, suzy, 224, 5, 4
    1234460, suzy, 301, 7, 1
    1234461, billy, 201, 3, 1

And these are placed in a list by time:

    if 1234400 < epochtime < 1234500:
        timechunk1.append(line)

There's a list of lists with each of the time chunks:

    listoflists = [timechunk1, timechunk2, ...]

This may or may not be overkill / extraneous for this question.

How do I find the min, max, and average values for each field (score, level, extralives), for each unique name (there are many more names than just billy and suzy, so it'd be much better not to have to list them individually), in each list (timechunk1, timechunk2, ...)?

Solution

pandas example:

>>> import pandas as pd
>>> df = pd.read_csv("grouped.csv", sep=r",\s*", engine="python")
>>> df
   epochtime   name  score  level  extralives
0    1234455   suzy    120      3           0
1    1234457  billy    123      1           2
2    1234459  billy    124      2           4
3    1234459   suzy    224      5           4
4    1234460   suzy    301      7           1
5    1234461  billy    201      3           1
>>> g = df.groupby("name").describe()
>>> g
                  epochtime       score  level  extralives
name                                                      
billy count        3.000000    3.000000    3.0    3.000000
      mean   1234459.000000  149.333333    2.0    2.333333
      std          2.000000   44.747439    1.0    1.527525
      min    1234457.000000  123.000000    1.0    1.000000
      25%    1234458.000000  123.500000    1.5    1.500000
      50%    1234459.000000  124.000000    2.0    2.000000
      75%    1234460.000000  162.500000    2.5    3.000000
      max    1234461.000000  201.000000    3.0    4.000000
suzy  count        3.000000    3.000000    3.0    3.000000
      mean   1234458.000000  215.000000    5.0    1.666667
      std          2.645751   90.835015    2.0    2.081666
      min    1234455.000000  120.000000    3.0    0.000000
      25%    1234457.000000  172.000000    4.0    0.500000
      50%    1234459.000000  224.000000    5.0    1.000000
      75%    1234459.500000  262.500000    6.0    2.500000
      max    1234460.000000  301.000000    7.0    4.000000

Or simply:

>>> df.groupby("name").mean()
       epochtime       score  level  extralives
name                                           
billy    1234459  149.333333      2    2.333333
suzy     1234458  215.000000      5    1.666667
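Since the question asks specifically for min, max, and average, you can also request exactly those three aggregations per column with the standard `agg` method (a small sketch on the same `df`; the values below follow from the sample rows):

>>> df.groupby("name")[["score", "level", "extralives"]].agg(["min", "max", "mean"])
      score                  level          extralives
        min  max        mean   min max mean        min max      mean
name
billy   123  201  149.333333     1   3  2.0          1   4  2.333333
suzy    120  301  215.000000     3   7  5.0          0   4  1.666667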

And then:

>>> g.ix[("billy","mean")]
epochtime     1234459.000000
score             149.333333
level               2.000000
extralives          2.333333
Name: (billy, mean), dtype: float64
>>> g.ix[("billy","mean")]["score"]
149.33333333333334
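Equivalently, `.loc` accepts the row and column labels in a single call, which avoids the chained lookup:

>>> g.loc[("billy", "mean"), "score"]
149.33333333333334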
>>> g["score"]
name        
billy  count      3.000000
       mean     149.333333
       std       44.747439
       min      123.000000
       25%      123.500000
       50%      124.000000
       75%      162.500000
       max      201.000000
suzy   count      3.000000
       mean     215.000000
       std       90.835015
       min      120.000000
       25%      172.000000
       50%      224.000000
       75%      262.500000
       max      301.000000
Name: score, dtype: float64

Et cetera. If you're thinking in R/SQL ways but want to use Python, then definitely give pandas a try.

Note that you can also do multi-column groupbys:

>>> df.groupby(["epochtime", "name"]).mean()
                 score  level  extralives
epochtime name                           
1234455   suzy     120      3           0
1234457   billy    123      1           2
1234459   billy    124      2           4
          suzy     224      5           4
1234460   suzy     301      7           1
1234461   billy    201      3           1
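If your data starts out as the `listoflists` structure from the question rather than a CSV file, you can build the DataFrame directly and keep the timechunk index as an extra grouping key. A sketch, assuming each row is already a parsed list of values:

>>> frames = []
>>> for i, chunk in enumerate(listoflists):
...     frame = pd.DataFrame(chunk, columns=["epochtime", "name", "score", "level", "extralives"])
...     frame["chunk"] = i   # remember which timechunk each row came from
...     frames.append(frame)
...
>>> df = pd.concat(frames, ignore_index=True)
>>> df.groupby(["chunk", "name"]).mean()

Grouping by ["chunk", "name"] then yields the per-timechunk, per-name statistics the question asks for.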

OTHER TIPS

You'd have to collect per-name, per-field lists.

Using collections.defaultdict with a factory to create nested lists:

from collections import defaultdict

columns = ('score', 'level', 'extralives')

def per_user_data():
    return {k: [] for k in columns}

stats_per_timechunk = []

for timechunk in listoflists:
    # group data per user, per column (user -> {c1: [], c2: [], c3: []})
    data = defaultdict(per_user_data)    
    for userdata in timechunk:
        per_user = data[userdata[1]]
        for column, value in zip(columns, userdata[2:]):
            per_user[column].append(value)

    # collect min, max and average stats per user, per column 
    # (user -> {c1: {min: 0, max: 0, avg: 0}, ..})
    stats = {}

    for user, per_user in data.items():
        stats[user] = {column: {
                'min': min(per_user[column]),
                'max': max(per_user[column]),
                'avg': sum(per_user[column]) / float(len(per_user[column])),
            } for column in columns}

    stats_per_timechunk.append(stats)
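As a minimal usage sketch, you could feed it the question's sample rows as a single timechunk (this assumes each line has already been parsed into a list with the numeric fields converted from strings, and that `listoflists` is defined before the loop above runs):

from pprint import pprint

# one timechunk holding the six sample rows
listoflists = [[
    [1234455, 'suzy', 120, 3, 0],
    [1234457, 'billy', 123, 1, 2],
    [1234459, 'billy', 124, 2, 4],
    [1234459, 'suzy', 224, 5, 4],
    [1234460, 'suzy', 301, 7, 1],
    [1234461, 'billy', 201, 3, 1],
]]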

Dumping your example input data into one timechunk gives me:

>>> pprint(stats_per_timechunk)
[{'billy': {'extralives': {'avg': 2.3333333333333335, 'max': 4, 'min': 1},
            'level': {'avg': 2.0, 'max': 3, 'min': 1},
            'score': {'avg': 149.33333333333334, 'max': 201, 'min': 123}},
  'suzy': {'extralives': {'avg': 1.6666666666666667, 'max': 4, 'min': 0},
           'level': {'avg': 5.0, 'max': 7, 'min': 3},
           'score': {'avg': 215.0, 'max': 301, 'min': 120}}}]

Perhaps you should consider using a different data structure instead of all these lists, or use something like pandas to help you analyze the data more efficiently.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow