How to use sklearn fit_transform with pandas and return dataframe instead of numpy array?

Python, Numpy, Pandas, Scikit-Learn

Python Problem Overview


I want to apply scaling (using StandardScaler() from sklearn.preprocessing) to a pandas DataFrame. The following code returns a numpy array, so I lose all the column names and indices. This is not what I want.

features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features = autoscaler.fit_transform(features)

A "solution" I found online is:

features = features.apply(lambda x: autoscaler.fit_transform(x))

It appears to work, but it raises a DeprecationWarning:

> /usr/lib/python3.5/site-packages/sklearn/preprocessing/data.py:583: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.

I therefore tried:

features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))

But this gives:

Traceback (most recent call last):
  File "./analyse.py", line 91, in <module>
    features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 3972, in apply
    return self._apply_standard(f, axis, reduce=reduce)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 4081, in _apply_standard
    result = self._constructor(data=results, index=index)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 226, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 363, in _init_dict
    dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5163, in _arrays_to_mgr
    arrays = _homogenize(arrays, index, dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5477, in _homogenize
    raise_cast_failure=False)
  File "/usr/lib/python3.5/site-packages/pandas/core/series.py", line 2885, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional

How do I apply scaling to the pandas DataFrame while leaving the DataFrame intact, and without copying the data if possible?

Python Solutions


Solution 1 - Python

You could convert the DataFrame to a numpy array using values. Example on a random dataset:

Edit: I changed as_matrix() to values (it doesn't change the result), per the last sentence of the as_matrix() docs: "Generally, it is recommended to use '.values'."

import pandas as pd
import numpy as np #for the random integer example
df = pd.DataFrame(np.random.randint(0, 100, size=(10, 4)),
                  index=range(10, 20),
                  columns=['col1', 'col2', 'col3', 'col4'],
                  dtype='float64')

Note that the indices run from 10 to 19:

In [14]: df.head(3)
Out[14]:
    col1  col2  col3  col4
10     3    38    86    65
11    98     3    66    68
12    88    46    35    68

Now fit_transform the DataFrame to get the scaled_features array:

from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(df.values)

In [15]: scaled_features[:3,:] #lost the indices
Out[15]:
array([[-1.89007341,  0.05636005,  1.74514417,  0.46669562],
       [ 1.26558518, -1.35264122,  0.82178747,  0.59282958],
       [ 0.93341059,  0.37841748, -0.60941542,  0.59282958]])
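As a quick sanity check, each column of the scaled array should now have mean roughly 0 and standard deviation roughly 1:

print(scaled_features.mean(axis=0))  # roughly [0, 0, 0, 0]
print(scaled_features.std(axis=0))   # roughly [1, 1, 1, 1]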

Assign the scaled data to a DataFrame (note: use the index and columns keyword arguments to keep your original indices and column names):

scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

In [17]:  scaled_features_df.head(3)
Out[17]:
        col1      col2      col3      col4
10 -1.890073  0.056360  1.745144  0.466696
11  1.265585 -1.352641  0.821787  0.592830
12  0.933411  0.378417 -0.609415  0.592830
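If you later need the unscaled values back, it helps to keep a reference to the fitted scaler, since its inverse_transform undoes the scaling. A minimal sketch (the variable names are my own):

scaler = StandardScaler()
scaled_features_df = pd.DataFrame(scaler.fit_transform(df.values),
                                  index=df.index, columns=df.columns)

# invert the scaling to recover the original values
restored_df = pd.DataFrame(scaler.inverse_transform(scaled_features_df.values),
                           index=df.index, columns=df.columns)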

Edit 2:

I came across the sklearn-pandas package. It focuses on making scikit-learn easier to use with pandas. sklearn-pandas is especially useful when you need to apply more than one type of transformation to different column subsets of the DataFrame, which is a more common scenario. It's documented, but this is how you'd achieve the transformation we just performed:

from sklearn_pandas import DataFrameMapper

mapper = DataFrameMapper([(df.columns, StandardScaler())])
scaled_features = mapper.fit_transform(df.copy())
scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
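If memory serves, newer versions of sklearn-pandas can return a DataFrame directly through the df_out=True flag, skipping the manual reconstruction; mapping each column separately keeps the original column names. A sketch, so check the docs of your installed version:

from sklearn_pandas import DataFrameMapper

# one ([column], scaler) pair per column so df_out=True can keep the names
mapper = DataFrameMapper([([col], StandardScaler()) for col in df.columns], df_out=True)
scaled_features_df = mapper.fit_transform(df.copy())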

Solution 2 - Python

import pandas as pd    
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('your file here')
ss = StandardScaler()
df_scaled = pd.DataFrame(ss.fit_transform(df), columns=df.columns)

df_scaled will be the 'same' dataframe, only now holding the scaled values. Note that constructing it this way resets the index to 0..n-1; pass index=df.index as well if you need to keep the original index.
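On scikit-learn 1.2 or later you can skip the DataFrame reconstruction entirely: transformers have a set_output API that makes fit_transform return a pandas DataFrame with the input's index and columns preserved. A minimal sketch, assuming a recent scikit-learn:

from sklearn.preprocessing import StandardScaler

ss = StandardScaler().set_output(transform="pandas")
df_scaled = ss.fit_transform(df)  # already a DataFrame, index and columns intact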

Solution 3 - Python

Reassigning back through df.values preserves both the index and the columns, since the DataFrame object itself is untouched. Be aware that this writes in place, and it only behaves as expected when the frame holds a single homogeneous (e.g. all-float) dtype; with mixed dtypes, df.values is a fresh copy and the assignment won't stick.

df.values[:] = StandardScaler().fit_transform(df)

Solution 4 - Python

features = ["col1", "col2", "col3", "col4"]
autoscaler = StandardScaler()
df[features] = autoscaler.fit_transform(df[features])
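Note that this overwrites the selected columns of df in place. If you want to keep the unscaled originals, scale a copy instead, a minimal sketch:

scaled_df = df.copy()
scaled_df[features] = autoscaler.fit_transform(scaled_df[features])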

Solution 5 - Python

This worked for me with MinMaxScaler for getting the array values back into the original dataframe. It should work with StandardScaler as well.

data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

where data_scaled is the new dataframe, scaled_features is the array after normalization, and df is the original dataframe whose index and columns we want back.

Solution 6 - Python

This is what I did:

X.Column1 = StandardScaler().fit_transform(X.Column1.values.reshape(-1, 1))

Solution 7 - Python

Works for me:

from sklearn.preprocessing import StandardScaler

cols = list(train_df_x_num.columns)
scaler = StandardScaler()
train_df_x_num[cols] = scaler.fit_transform(train_df_x_num[cols])

Solution 8 - Python

You can mix multiple data types in scikit-learn using Neuraxle:

Option 1: discard the row names and column names

from sklearn.preprocessing import StandardScaler
from neuraxle.pipeline import Pipeline
from neuraxle.base import NonFittableMixin, BaseStep

class PandasToNumpy(NonFittableMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs):
        # drop the pandas metadata and hand a plain numpy array to the next step
        return data_inputs.values

pipeline = Pipeline([
    PandasToNumpy(),
    StandardScaler(),
])

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
pipeline, scaled_features = pipeline.fit_transform(features)

Option 2: to keep the original column names and row names

You could even do this with a wrapper as such:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from neuraxle.base import MetaStepMixin, BaseStep

class PandasValuesChangerOf(MetaStepMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        new_data_inputs = self.wrapped.transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return new_data_inputs

    def fit_transform(self, data_inputs, expected_outputs): 
        self.wrapped, new_data_inputs = self.wrapped.fit_transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return self, new_data_inputs

    def _merge(self, data_inputs, new_data_inputs):
        # rebuild a DataFrame around the transformed values, restoring
        # the original index and column names
        new_data_inputs = pd.DataFrame(
            new_data_inputs,
            index=data_inputs.index,
            columns=data_inputs.columns
        )
        return new_data_inputs

df_scaler = PandasValuesChangerOf(StandardScaler())

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
df_scaler, scaled_features = df_scaler.fit_transform(features)

Solution 9 - Python

You can try this code; it will give you a DataFrame with the indices preserved:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston  # boston housing dataset (removed in scikit-learn 1.2)

dt = load_boston().data
col = load_boston().feature_names

# Make a dataframe
df = pd.DataFrame(data=dt, columns=col)

# define a method to scale data, looping thru the columns, and passing a scaler
def scale_data(data, columns, scaler):
    for col in columns:
        data[col] = scaler.fit_transform(data[col].values.reshape(-1, 1))
    return data

# specify a scaler, and call the method on boston data
scaler = StandardScaler()
df_scaled = scale_data(df, col, scaler)

# view first 10 rows of the scaled dataframe
df_scaled[0:10]
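Note that load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so on recent versions the same recipe needs a dataset that still ships, e.g. fetch_california_housing. A sketch reusing the scale_data helper above:

from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
df = pd.DataFrame(data=housing.data, columns=housing.feature_names)
df_scaled = scale_data(df, housing.feature_names, StandardScaler())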

Solution 10 - Python

You could directly assign a numpy array to a DataFrame by using slicing. Note that features below is a copy of the selected columns, so this scales the copy rather than df itself, and pandas may emit a SettingWithCopyWarning.

from sklearn.preprocessing import StandardScaler
features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features[:] = autoscaler.fit_transform(features.values)
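If you want df itself updated rather than the features copy, assign through df.loc instead, a minimal sketch:

df.loc[:, ["col1", "col2", "col3", "col4"]] = autoscaler.fit_transform(features.values)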

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
--- | --- | ---
Question | Louic | View Question on Stackoverflow
Solution 1 - Python | Kevin | View Answer on Stackoverflow
Solution 2 - Python | Joe | View Answer on Stackoverflow
Solution 3 - Python | Jim | View Answer on Stackoverflow
Solution 4 - Python | zzHQzz | View Answer on Stackoverflow
Solution 5 - Python | user15590289 | View Answer on Stackoverflow
Solution 6 - Python | Fredrik | View Answer on Stackoverflow
Solution 7 - Python | Avtandil Chakhnashvili | View Answer on Stackoverflow
Solution 8 - Python | Guillaume Chevalier | View Answer on Stackoverflow
Solution 9 - Python | Hassan K | View Answer on Stackoverflow
Solution 10 - Python | abysslover | View Answer on Stackoverflow