sklearn Logistic Regression "ValueError: Found array with dim 3. Estimator expected <= 2."

Python, Scikit-Learn, Logistic Regression

Python Problem Overview


I am attempting to solve Problem 6 in this notebook. The task is to train a simple model on this data using 50, 100, 1000, and 5000 training samples with the LogisticRegression model from sklearn.linear_model.

lr = LogisticRegression()
lr.fit(train_dataset,train_labels)

This is the code I am trying to run, and it gives me the error:

> ValueError: Found array with dim 3. Estimator expected <= 2.

Any idea?

UPDATE 1: Updated the link to the Jupyter Notebook.

Python Solutions


Solution 1 - Python

scikit-learn expects 2D NumPy arrays for the training dataset passed to fit. The dataset you are passing in is a 3D array, so you need to reshape it into a 2D array:

nsamples, nx, ny = train_dataset.shape
d2_train_dataset = train_dataset.reshape((nsamples,nx*ny))
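
For context, here is a minimal end-to-end sketch of the fix (the array shapes and the max_iter value are placeholders, not taken from the notebook):

import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical data: 1000 samples of 28x28 images with integer labels
train_dataset = np.random.rand(1000, 28, 28)
train_labels = np.random.randint(0, 10, size=1000)

# flatten each 28x28 image into a single 784-element feature vector
nsamples, nx, ny = train_dataset.shape
d2_train_dataset = train_dataset.reshape((nsamples, nx * ny))

lr = LogisticRegression(max_iter=1000)
lr.fit(d2_train_dataset, train_labels)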

Solution 2 - Python

When stacking LSTM, GRU, or TCN layers, return_sequences in the last recurrent layer before the Dense layer must be set to False. Leaving it set to True is one of the conditions that produces this error message.
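
For illustration, a minimal Keras sketch of this setup (layer sizes and the input shape are placeholders):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, features = 10, 4  # placeholder input shape

model = Sequential([
    # intermediate recurrent layer: return_sequences=True keeps the output 3D
    LSTM(32, return_sequences=True, input_shape=(timesteps, features)),
    # last recurrent layer before Dense: return_sequences=False makes the output 2D
    LSTM(16, return_sequences=False),
    Dense(1),
])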

Solution 3 - Python

If anyone stumbles onto this question while using an LSTM or any other RNN for two or more time series, this might be a solution.

If, however, you want the error between two different predicted values, for example when you are predicting two completely different time series, you can do the following:

import numpy as np
from sklearn.metrics import mean_squared_error
# Any sklearn function that takes 2D data only
# 3D data
real = np.array([
    [
        [1,60],
        [2,70],
        [3,80]
    ],
    [
        [2,70],
        [3,80],
        [4,90]
    ]
]) 

pred = np.array([
    [
        [1.1,62.1],
        [2.1,72.1],
        [3.1,82.1]
    ],
    [
        [2.1,72.1],
        [3.1,82.1],
        [4.1,92.1]
    ]
])

# Error/Some Metric on Feature 1:
print(mean_squared_error(real[:,:,0], pred[:,:,0]))  # ≈ 0.01

# Error/Some Metric on Feature 2:
print(mean_squared_error(real[:,:,1], pred[:,:,1]))  # ≈ 4.41

See the NumPy indexing documentation for additional information.

Solution 4 - Python

You probably have return_sequences=True set on the last LSTM layer in your model. Change it to False so that the layer returns only its final output instead of the full sequence meant for further LSTM layers.

Solution 5 - Python

I had a similar error while solving an image classification problem. We have a 3D matrix: the first dimension is the total number of images (it can be replaced by -1), the second dimension is the product of the height and width of the picture, and the third dimension is 3, since an RGB image has three channels (red, green, blue). If we don't want to lose the colour information, we use x_train.reshape(-1, nx*ny*3). If the colour can be neglected, thereby reducing the size of the matrix, we reduce to a single channel and use x_train.reshape(-1, nx*ny*1).
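
A small NumPy sketch of the two reshapes described above (the image count and dimensions are made up):

import numpy as np

# hypothetical batch matching the description above: 1000 RGB images,
# 28x28 pixels, stored as (n_images, nx*ny, 3)
x_train = np.random.rand(1000, 28 * 28, 3)

# keep the colour information: fold the channel axis into the feature axis
x_rgb = x_train.reshape(-1, 28 * 28 * 3)                  # shape (1000, 2352)

# neglect colour: reduce to one channel (e.g. by averaging) before flattening
x_gray = x_train.mean(axis=-1).reshape(-1, 28 * 28 * 1)   # shape (1000, 784)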

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | edwin | View Question on Stackoverflow
Solution 1 - Python | Kristian K. | View Answer on Stackoverflow
Solution 2 - Python | Attila | View Answer on Stackoverflow
Solution 3 - Python | Xavier | View Answer on Stackoverflow
Solution 4 - Python | Phillip Otey | View Answer on Stackoverflow
Solution 5 - Python | Iryna Trygub | View Answer on Stackoverflow