For low-cardinality categorical features, a one-hot encoding feature expansion is often appropriate. Have a look at:
- http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features
- http://scikit-learn.org/stable/modules/feature_extraction.html#loading-features-from-dicts
For high-cardinality categorical features, you can keep the integer encoding for ExtraTreesClassifier. Even though the algorithm will treat them as regular continuous variables, in practice this does not seem to hurt predictive accuracy much.
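A sketch of that setup, using `OrdinalEncoder` to produce the integer codes on synthetic data (the feature and label values here are illustrative assumptions, not from the original question):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.RandomState(0)
# One high-cardinality categorical column (~up to 1000 distinct string values).
categories = rng.randint(0, 1000, size=(200, 1)).astype(str)
y = rng.randint(0, 2, size=200)

# OrdinalEncoder maps each category to a single float-valued integer code.
X = OrdinalEncoder().fit_transform(categories)

# The trees will split on these codes as if they were continuous values.
clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
print(X.dtype)  # float64
```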
Edit: in any case, scikit-learn expects a homogeneous floating-point encoding for all input features. The object dtype is never a valid input type.
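Concretely, integer category codes should be cast to a homogeneous float array before being passed to an estimator (a small illustrative snippet, not from the original answer):

```python
import numpy as np

# Integer category codes; cast explicitly to float64 so the array matches
# the homogeneous floating-point input scikit-learn expects.
X_int = np.array([[0, 1], [1, 2]])
X_float = X_int.astype(np.float64)
print(X_float.dtype)  # float64
```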