If you absolutely can't download the files and have to keep pulling them from a remote source (which isn't very neighbourly), you can do the whole thing in memory:
>>> import bz2
>>> import io
>>> import urllib.request
>>>
>>> import pandas as pd
>>>
>>> url = "https://github.com/hawkw/traverse/blob/master/data/-Users-hawk_2014-02-06.csv.bz2?raw=true"
>>> raw_data = urllib.request.urlopen(url).read()
>>> data = bz2.decompress(raw_data)
>>> df = pd.read_csv(io.BytesIO(data))
>>> df.head()
                                                path  st_mode    st_ino  \
0  /Users/hawk/Library/minecraft/bin/minecraft/mo...    33261  59612469
1  /Users/hawk/Library/Application Support/Google...    16832  91818463
2  /Users/hawk/Library/Caches/Metadata/Safari/His...    33188  95398522
3  /Users/hawk/Documents/Minecraft1.6.4/assets/so...    33188  90620503
4  /Users/hawk/Library/Caches/Metadata/Safari/His...    33188  96129272

     st_dev  st_nlink  st_uid  st_gid  st_size    st_atime    st_mtime  \
0  16777219         1     501      20     2626  1370201925  1366983504
1  16777219         3     501      20      102  1391697388  1384638452
2  16777219         1     501      20    36758  1389032348  1389032363
3  16777219         1     501      20    12129  1387000073  1384114141
4  16777219         1     501      20      170  1390545751  1390545751

     st_ctime
0  1368736019
1  1384638452
2  1389032363
3  1384114141
4  1390545751

[5 rows x 11 columns]
As before, if the file isn't UTF-8, pass the appropriate `encoding` parameter to `read_csv`.
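As an aside, you can skip the manual `bz2.decompress` step entirely and let pandas handle the decompression via `read_csv`'s `compression` parameter. A minimal self-contained sketch (using a tiny in-memory CSV in place of the real download, with made-up column values for illustration):

import bz2
import io

import pandas as pd

# Stand-in for the downloaded bytes: a small CSV, bz2-compressed in memory.
csv_bytes = b"path,st_size\n/tmp/a,2626\n/tmp/b,102\n"
compressed = bz2.compress(csv_bytes)

# read_csv decompresses for us when told the compression scheme
# (it can also infer it from a *.bz2 filename, but not from a raw buffer).
df = pd.read_csv(io.BytesIO(compressed), compression="bz2")
print(df.shape)  # (2, 2)

With a plain `.csv.bz2` URL (rather than one hiding behind a `?raw=true` query string), recent pandas versions can even fetch and decompress in a single `pd.read_csv(url)` call, since compression inference is based on the filename extension.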