Question

I have a dataset and I have used Support Vector Regression, so I needed the StandardScaler module from sklearn.preprocessing for feature scaling. After training my model, the predictions it produced were still feature-scaled. That's why I used inverse_transform from StandardScaler(), but I am getting an error saying:

NotFittedError: This StandardScaler instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.

I have tried several solutions, but I keep getting the same error. What can I do now?

My dataset

Here is my code :

import numpy as np
import pandas as pd
import seaborn as sbn
import matplotlib.pyplot as plt
df = pd.read_csv('Position_Salaries.csv')
x = df.iloc[:,1:2].values
y = df.iloc[:,2:].values
from sklearn.preprocessing import StandardScaler
x = StandardScaler().fit_transform(x)
y = StandardScaler().fit_transform(y)
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
regressor.fit(x,y)
y_pred = regressor.predict(StandardScaler().fit_transform(np.array([[6.5]])))
y_pred = StandardScaler().inverse_transform(y_pred)



Solution

You are calling inverse_transform on a brand-new StandardScaler that was never fitted. Save the scaler(s) you fit on the training data and reuse them for every later transform:

sc_x = StandardScaler()
sc_y = StandardScaler()
x = sc_x.fit_transform(x)
y = sc_y.fit_transform(y)
y_pred = regressor.predict(sc_x.transform(np.array([[6.5]])))
y_pred = sc_y.inverse_transform(y_pred.reshape(-1, 1))

Make sure that the number of features is the same in both cases; otherwise you will get other errors.

A Working example

from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data 

sc = StandardScaler()
sc.fit(X)
x = sc.transform(X)
# On new data there is only one sample, but the feature count is still four
sc.transform(np.array([[6.5, 1.5, 2.5, 6.5]]))
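Applying the same idea to the asker's SVR pipeline, a minimal end-to-end sketch looks like the following. Since Position_Salaries.csv is not available here, it uses a small synthetic level-vs-salary dataset as a stand-in; the key point is that sc_x and sc_y are fitted once and reused for both the query transform and the inverse transform of the prediction.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the Position_Salaries data (position level vs. salary)
x = np.arange(1, 11).reshape(-1, 1).astype(float)
y = (x ** 2 * 5000 + 40000).reshape(-1, 1)

# Fit one scaler per variable and keep both objects for later use
sc_x = StandardScaler()
sc_y = StandardScaler()
x_scaled = sc_x.fit_transform(x)
y_scaled = sc_y.fit_transform(y)

regressor = SVR(kernel='rbf')
regressor.fit(x_scaled, y_scaled.ravel())  # SVR expects a 1-D target

# Scale the query with the fitted sc_x, then undo the y-scaling with sc_y
y_pred_scaled = regressor.predict(sc_x.transform(np.array([[6.5]])))
y_pred = sc_y.inverse_transform(y_pred_scaled.reshape(-1, 1))
print(y_pred)
```

predict returns a 1-D array, so it is reshaped to a column before inverse_transform, which expects 2-D input.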
Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange