Question

I'm working on a machine learning algorithm for the dataset available here.

There are 26 columns of data. Most of them look pointless. How can I effectively and quickly determine which features are interesting, i.e. which features tell me one way or another whether a given URL is ephemeral or evergreen (the dependent variable in the dataset, 'label', the 26th column)? Are there intelligent, programmatic scikit-learn ways of doing this, or is it simply a case of graphing each feature against the dependent variable and seeing what has an effect?

Surely there's a better way than this!

Can anyone help? :)

Edit: Here is some code for a classifier that I have found. How can I print out the weights given to each feature here?

  import numpy as np
  import matplotlib.pyplot as plt
  from sklearn import metrics, preprocessing, cross_validation
  from sklearn.feature_extraction.text import TfidfVectorizer
  import sklearn.linear_model as lm
  import pandas as p

  loadData = lambda f: np.genfromtxt(open(f, 'r'), delimiter=' ')

  print "loading data.."
  # column 2 holds the raw page text; the last column of train.tsv is the label
  traindata = list(np.array(p.read_table('train.tsv'))[:, 2])
  testdata = list(np.array(p.read_table('test.tsv'))[:, 2])
  y = np.array(p.read_table('train.tsv'))[:, -1]

  # word and bigram TF-IDF features extracted from the raw text
  tfv = TfidfVectorizer(min_df=3, max_features=None, strip_accents='unicode',
                        analyzer='word', token_pattern=r'\w{1,}', ngram_range=(1, 2),
                        use_idf=1, smooth_idf=1, sublinear_tf=1)

  rd = lm.LogisticRegression(penalty='l2', dual=True, tol=0.0001,
                             C=1, fit_intercept=True, intercept_scaling=1.0,
                             class_weight=None, random_state=None)

  X_all = traindata + testdata
  lentrain = len(traindata)

  print "fitting pipeline"
  tfv.fit(X_all)              # learn the vocabulary and IDF weights on train + test text
  print "transforming data"
  X_all = tfv.transform(X_all)

  X = X_all[:lentrain]        # split back into the train and test matrices
  X_test = X_all[lentrain:]

  print "20 Fold CV Score: ", np.mean(cross_validation.cross_val_score(rd, X, y, cv=20, scoring='roc_auc'))

  print "training on full data"
  rd.fit(X, y)
  pred = rd.predict_proba(X_test)[:, 1]
  testfile = p.read_csv('test.tsv', sep="\t", na_values=['?'], index_col=1)
  pred_df = p.DataFrame(pred, index=testfile.index, columns=['label'])
  pred_df.to_csv('benchmark.csv')
  print "submission file created.."

Solution

Many fitted scikit-learn estimators expose an attribute feature_importances_ (linear models call it coef_ instead) containing some kind of feature weights. The higher a feature's weight (for coefficients, its absolute value), the more that feature contributes to the final prediction, which can be read as the feature being more predictive.(*)

These attributes hold NumPy arrays: feature_importances_ has shape (n_features,), while coef_ has shape (n_features,) for regression, (1, n_features) for binary linear classifiers, and (n_classes, n_features) for multiclass linear classifiers.
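
For the script in the question, a minimal sketch of how the weights could be printed (assuming the tfv and rd objects above have already been fitted; the variable names and the cutoff of 20 terms are illustrative, not part of the original code):

  # Rank TF-IDF terms by the magnitude of their logistic-regression coefficients.
  # Run this after rd.fit(X, y) from the question's script.
  weights = rd.coef_.ravel()                       # one weight per TF-IDF column
  index_to_term = dict((idx, term) for term, idx in tfv.vocabulary_.items())

  top = np.argsort(np.abs(weights))[::-1][:20]     # indices of the 20 largest |weight|
  for idx in top:
      print("%-30s %+.4f" % (index_to_term[idx], weights[idx]))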

See the document classification example for how to use these attributes.
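
That example also shows a programmatic way of ranking features with a univariate test rather than eyeballing plots. A hedged sketch of the same idea on this dataset's TF-IDF matrix (SelectKBest, chi2 and the k=1000 cutoff are my choices here, not something stated in the original answer):

  # Univariate chi-squared ranking of the TF-IDF columns against the 'label' target.
  # Assumes the sparse matrix X, the labels y and the fitted tfv from the question;
  # chi2 needs non-negative features, which TF-IDF satisfies.
  from sklearn.feature_selection import SelectKBest, chi2

  selector = SelectKBest(chi2, k=1000)             # keep the 1000 best-scoring terms
  X_reduced = selector.fit_transform(X, y.astype(int))   # assumes 0/1 labels

  index_to_term = dict((idx, term) for term, idx in tfv.vocabulary_.items())
  for idx in selector.get_support(indices=True)[:20]:    # show a few selected terms
      print("%-30s chi2=%.2f" % (index_to_term[idx], selector.scores_[idx]))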

(*) All the usual caveats about overfitting apply: in a bad model, the wrong features may end up with the highest weights.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow