I'm trying to use the sklearn_pandas module to extend the work I do in pandas and dip a toe into machine learning, but I'm struggling with an error I don't really understand how to fix.

I'm working through the following dataset from Kaggle.

It's essentially an unadorned table (1000 rows, 40 features) of floating point values.

import pandas as pd
from sklearn import neighbors
from sklearn_pandas import DataFrameMapper, cross_val_score
path_train ="../kaggle/scikitlearn/train.csv"
path_labels ="../kaggle/scikitlearn/trainLabels.csv"
path_test = "../kaggle/scikitlearn/test.csv"

train = pd.read_csv(path_train, header=None)
labels = pd.read_csv(path_labels, header=None)
test = pd.read_csv(path_test, header=None)
mapper_train = DataFrameMapper([(list(train.columns),neighbors.KNeighborsClassifier(n_neighbors=3))])
mapper_train

Output:

DataFrameMapper(features=[([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
       n_neighbors=3, p=2, weights='uniform'))])

So far, so good. But then I try the fit:

mapper_train.fit_transform(train, labels)

Output:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-e3897d6db1b5> in <module>()
----> 1 mapper_train.fit_transform(train, labels)

//anaconda/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y,     **fit_params)
    409         else:
    410             # fit method of arity 2 (supervised transformation)
--> 411             return self.fit(X, y, **fit_params).transform(X)
    412 
    413 

//anaconda/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y)
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120 

TypeError: fit() takes exactly 3 arguments (2 given)

What am I doing wrong? Although the data in this case are all of the same type, I'm planning to build a workflow for a mix of categorical, nominal, and floating point features, and sklearn_pandas seemed like the logical fit.

Any help?

Solution

Here is an example of how to get pandas and sklearn to play nicely together.

Say you have two columns that are both strings and you want to vectorize them - but you don't know which vectorization parameters will result in the best downstream performance.
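For concreteness, the rest of this answer assumes a small toy DataFrame along these lines (the column names col_name1 and col_name2 and the data itself are purely hypothetical placeholders):

import pandas as pd

# hypothetical toy data: two free-text columns plus matching labels
df = pd.DataFrame({
    'col_name1': ['red ripe apple', 'green apple', 'tart green apple',
                  'yellow banana', 'ripe banana', 'soft yellow banana'],
    'col_name2': ['sweet and crisp', 'firm and tart', 'small and crunchy',
                  'soft and sweet', 'very soft', 'mushy and sweet'],
})
y = [0, 0, 0, 1, 1, 1]  # one label per row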

Create the vectorizer:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
to_vect = Pipeline([('vect', CountVectorizer(min_df=1, max_df=.9, ngram_range=(1, 2), max_features=1000)),
                    ('tfidf', TfidfTransformer())])
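As a quick sanity check (just a sketch, assuming the toy df above), the vectorizer pipeline can be fit on a single column of strings - which is also roughly how DataFrameMapper feeds a column selected by name (a plain string) rather than inside a list, i.e. as a 1-D array of values:

# fit/transform one text column; the result is a sparse tf-idf matrix
tfidf_matrix = to_vect.fit_transform(df['col_name1'])
print(tfidf_matrix.shape)  # (number of rows, number of uni-/bi-gram features)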

Create the DataFrameMapper object:

full_mapper = DataFrameMapper([
        ('col_name1', to_vect),
        ('col_name2',to_vect)
    ])

Here is the full pipeline:

from sklearn.linear_model import SGDClassifier
full_pipeline = Pipeline([('mapper', full_mapper), ('clf', SGDClassifier(n_iter=15, warm_start=True))])

Define the parameters you want the grid search to consider:

from copy import deepcopy
full_params = {'clf__alpha': [1e-2, 1e-3, 1e-4],
               'clf__loss': ['modified_huber', 'hinge'],
               'clf__penalty': ['l2', 'l1'],
               'mapper__features': [[('col_name1', deepcopy(to_vect)),
                                     ('col_name2', deepcopy(to_vect))],
                                    [('col_name1', deepcopy(to_vect).set_params(vect__analyzer='char_wb')),
                                     ('col_name2', deepcopy(to_vect))]]}

That's it! Note, though, that mapper__features is a single item in this dictionary, so use a for loop or itertools.product to generate a flat list of all the to_vect options you want to consider (a sketch of one way to do that follows); strictly speaking that is a separate task outside the scope of the question.
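For completeness, here is a minimal sketch of one way such a flat list could be generated, assuming the to_vect and full_params objects defined above; the particular parameter values swept over are just placeholders:

from copy import deepcopy
from itertools import product

# hypothetical sweep: every (analyzer, ngram_range) combination becomes one
# candidate value of 'mapper__features'
analyzers = ['word', 'char_wb']
ngram_ranges = [(1, 1), (1, 2)]

feature_options = []
for analyzer, ngrams in product(analyzers, ngram_ranges):
    vectorizer = deepcopy(to_vect).set_params(vect__analyzer=analyzer,
                                              vect__ngram_range=ngrams)
    feature_options.append([('col_name1', deepcopy(vectorizer)),
                            ('col_name2', deepcopy(vectorizer))])

full_params['mapper__features'] = feature_options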

Go on and create the grid search that finds the best classifier for your pipeline:

from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions
gs_clf = GridSearchCV(full_pipeline, full_params, n_jobs=-1)
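From here gs_clf behaves like any other estimator. A hypothetical fit on the toy df and y from above would look like this (with data that tiny you may also need to pass a smaller cv to GridSearchCV; on real data the defaults are fine):

# fit the grid search directly on the DataFrame; the mapper picks out the columns
gs_clf.fit(df, y)
print(gs_clf.best_params_)
print(gs_clf.best_score_)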

Other tips

I never used sklearn_pandas, but from reading their source code, it looks like this is a bug on their side. If you look for the function that raises the exception, you can notice that they are discarding the y argument (it does not even survive until the docstring), and the inner fit function expects one more argument, which is probably y:

def fit(self, X, y=None):
    '''
    Fit a transformation from the pipeline

    X       the data to fit
    '''
    for columns, transformer in self.features:
        if transformer is not None:
            transformer.fit(self._get_col_subset(X, columns))
    return self

I suggest you report the bug in their bug tracker.
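Until that is fixed, here is a minimal sketch of a workaround for the question's use case - plain sklearn, keeping the supervised estimator outside the mapper; it assumes the train, labels and test frames from the question:

from sklearn import neighbors

# labels was read with header=None, so its single column is named 0
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
knn.fit(train.values, labels[0].values)
predictions = knn.predict(test.values)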

Update:

You can test this if you run the code from IPython. To summarize: if you use the %pdb on magic before running the problematic call, the exception gets caught by the Python debugger, so you can play around a bit and see that calling the fit function with the label values y[0] actually works - see the last line at the ipdb> prompt. (The CSV files are the ones downloaded from Kaggle, except for the largest one, which is only a portion of the real file.)

In [1]: import pandas as pd

In [2]: from sklearn import neighbors

In [3]: from sklearn_pandas import DataFrameMapper, cross_val_score

In [4]: path_train ="train.csv"

In [5]: path_labels ="trainLabels.csv"

In [6]: path_test = "test.csv"

In [7]: train = pd.read_csv(path_train, header=None)

In [8]: labels = pd.read_csv(path_labels, header=None)

In [9]: test = pd.read_csv(path_test, header=None)

In [10]: mapper_train = DataFrameMapper([(list(train.columns),neighbors.KNeighborsClassifier(n_neighbors=3))])

In [13]: %pdb on

In [14]: mapper_train.fit_transform(train, labels)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-e3897d6db1b5> in <module>()
----> 1 mapper_train.fit_transform(train, labels)

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y, **fit_params)
    409         else:
    410             # fit method of arity 2 (supervised transformation)
--> 411             return self.fit(X, y, **fit_params).transform(X)
    412
    413

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y)
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120

TypeError: fit() takes exactly 3 arguments (2 given)
> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn_pandas/__init__.py(118)fit()
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self

ipdb> l
    113
    114         X       the data to fit
    115         '''
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120
    121
    122     def transform(self, X):
    123         '''
ipdb> transformer.fit(self._get_col_subset(X, columns), y[0])
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           n_neighbors=3, p=2, weights='uniform')