While with enough stacking tricks you might be able to do this all in one go, I don't think it'd be worth it. You have a pivot operation and a bunch of groupby operations. So do them separately -- which is easy -- and then combine the results.
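The question's input frame isn't reproduced here, so for concreteness here is a hypothetical one -- the names and values are invented, but chosen to be consistent with the outputs below:

```python
import pandas as pd

# Hypothetical input data (invented for illustration): one row per person,
# with a location, a household address, a score, and a class.
df = pd.DataFrame({
    "Location": ["DC", "NY", "NY", "NY", "SF", "SF", "SF", "TX", "TX", "TX"],
    "Address":  ["d1", "n1", "n1", "n2", "s1", "s1", "s2", "t1", "t2", "t3"],
    "Score":    ["3", "3", "3", "5", "3", "5", "5", "3", "3", "5"],
    "Class":    ["M", "H", "H", "L", "H", "L", "L", "H", "L", "L"],
})
```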
Step #1 is to make Score a float column; it's better to get the types right before you start processing.
>>> df["Score"] = df["Score"].astype(float)
Then we'll make a new frame with the groupby-like columns. We could do this by passing .agg
a dictionary but we'd have to rename the columns afterwards anyway, so there's not much point.
>>> gg = df.groupby("Location")
>>> summ = pd.DataFrame({"Pop": gg.Location.count(),
... "HH": gg.Address.nunique(),
... "L4": gg.Score.apply(lambda x: (x < 4).sum())})
>>> summ
          Pop  HH  L4
Location
DC          1   1   1
NY          3   2   2
SF          3   2   1
TX          3   3   2
[4 rows x 3 columns]
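On pandas 0.25+ there is a nicer route than passing `.agg` a dictionary: named aggregation, where each keyword sets a result column name directly, so no renaming step is needed. A sketch, using invented sample data consistent with the counts in this answer (with Score already float):

```python
import pandas as pd

# Invented sample data, consistent with the outputs in the answer.
df = pd.DataFrame({
    "Location": ["DC", "NY", "NY", "NY", "SF", "SF", "SF", "TX", "TX", "TX"],
    "Address":  ["d1", "n1", "n1", "n2", "s1", "s1", "s2", "t1", "t2", "t3"],
    "Score":    [3.0, 3.0, 3.0, 5.0, 3.0, 5.0, 5.0, 3.0, 3.0, 5.0],
    "Class":    ["M", "H", "H", "L", "H", "L", "L", "H", "L", "L"],
})

# Named aggregation: keyword = (column, aggfunc). The result columns come
# out in keyword order, already named Pop, HH, L4.
summ = df.groupby("Location").agg(
    Pop=("Location", "count"),
    HH=("Address", "nunique"),
    L4=("Score", lambda s: (s < 4).sum()),
)
```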
Then we can pivot:
>>> class_info = df.pivot_table(index="Location", columns="Class", aggfunc='size', fill_value=0)
>>> class_info
Class H L M
Location
DC 0 0 1
NY 2 1 0
SF 1 2 0
TX 1 2 0
[4 rows x 3 columns]
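An equivalent way to get this count table is `pd.crosstab`, which tabulates pair counts directly and fills absent combinations with 0. Sketched here on invented data matching the class counts above:

```python
import pandas as pd

# Invented sample data, consistent with the class counts in the answer.
df = pd.DataFrame({
    "Location": ["DC", "NY", "NY", "NY", "SF", "SF", "SF", "TX", "TX", "TX"],
    "Class":    ["M", "H", "H", "L", "H", "L", "L", "H", "L", "L"],
})

# crosstab counts (Location, Class) pairs; pairs that never occur are 0.
class_info = pd.crosstab(df["Location"], df["Class"])
```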
and combine:
>>> new_df = pd.concat([summ, class_info], axis=1)
>>> new_df
          Pop  HH  L4  H  L  M
Location
DC          1   1   1  0  0  1
NY          3   2   2  2  1  0
SF          3   2   1  1  2  0
TX          3   3   2  1  2  0
[4 rows x 6 columns]
You can reorder this as you like.
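For instance, indexing with a list of column names returns them in that order. A sketch of the full pipeline on invented data consistent with the tables above:

```python
import pandas as pd

# Invented sample data, consistent with the outputs in the answer.
df = pd.DataFrame({
    "Location": ["DC", "NY", "NY", "NY", "SF", "SF", "SF", "TX", "TX", "TX"],
    "Address":  ["d1", "n1", "n1", "n2", "s1", "s1", "s2", "t1", "t2", "t3"],
    "Score":    [3.0, 3.0, 3.0, 5.0, 3.0, 5.0, 5.0, 3.0, 3.0, 5.0],
    "Class":    ["M", "H", "H", "L", "H", "L", "L", "H", "L", "L"],
})

gg = df.groupby("Location")
summ = pd.DataFrame({"Pop": gg.Location.count(),
                     "HH": gg.Address.nunique(),
                     "L4": gg.Score.apply(lambda x: (x < 4).sum())})
class_info = df.pivot_table(index="Location", columns="Class",
                            aggfunc="size", fill_value=0)
new_df = pd.concat([summ, class_info], axis=1)

# Selecting columns by a list puts them in exactly that order.
new_df = new_df[["Pop", "HH", "L4", "H", "L", "M"]]
```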