Question

Using pandas, I have exported a dataframe whose cells contain tuples of strings to a CSV file. The resulting file has the following structure:

index,colA
1,"('a','b')"
2,"('c','d')"

Now I want to read it back using read_csv. However, whatever I try, pandas interprets the values as strings rather than tuples. For instance:

In []: import pandas as pd
       df = pd.read_csv('test',index_col='index',dtype={'colA':tuple})
       df.loc[1,'colA']
Out[]: "('a','b')"

Is there a way of telling pandas to do the right thing? Preferably without heavy post-processing of the dataframe: the actual table has 5000 rows and 2500 columns.


Solution

Storing tuples in a column isn't usually a good idea; a lot of the advantages of using Series and DataFrames are lost. That said, you could use the converters argument of read_csv to post-process each string with ast.literal_eval:

>>> import ast
>>> df = pd.read_csv("sillytup.csv", converters={"colA": ast.literal_eval})
>>> df
   index    colA
0      1  (a, b)
1      2  (c, d)

[2 rows x 2 columns]
>>> df.colA.iloc[0]
('a', 'b')
>>> type(df.colA.iloc[0])
<type 'tuple'>
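
Since your real table has a few thousand tuple-valued columns, you don't have to type the converters mapping out by hand; it can be built from the header row first. A minimal sketch, assuming every column except index holds tuple strings and reusing the file name from the example above:

import ast
import pandas as pd

# Read only the header row to discover the column names.
cols = pd.read_csv("sillytup.csv", nrows=0).columns

# Map ast.literal_eval over every data column (everything but the index).
converters = {c: ast.literal_eval for c in cols if c != "index"}

df = pd.read_csv("sillytup.csv", index_col="index", converters=converters)
print(type(df.loc[1, "colA"]))  # <class 'tuple'>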

But I'd probably change things at source to avoid storing tuples in the first place.
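
One way to change things at source is to split each tuple into ordinary scalar columns before writing the CSV, so nothing needs to be parsed on the way back in. A sketch, assuming two-element tuples and the column name colA from the question:

import pandas as pd

df = pd.DataFrame({"colA": [("a", "b"), ("c", "d")]}, index=[1, 2])

# Expand the tuples into plain scalar columns before exporting.
flat = pd.DataFrame(df["colA"].tolist(), index=df.index,
                    columns=["colA_0", "colA_1"])
flat.to_csv("flat.csv", index_label="index")

# Reading back needs no converters; each cell is already a scalar.
restored = pd.read_csv("flat.csv", index_col="index")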
