ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)

Python, NumPy

Python Problem Overview


I have a list say, temp_list with following properties :

len(temp_list) = 9260  
temp_list[0].shape = (224,224,3)  

Now, when I am converting into numpy array,

x = np.array(temp_list)  

I am getting the error :

ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)  

Can someone help me here?

Python Solutions


Solution 1 - Python

At least one item in your list is either not three-dimensional, or its second or third dimension does not match that of the other elements. If only the first dimension does not match, the arrays are still collected, but as individual objects: no attempt is made to reconcile them into a new (four-dimensional) array. Some examples are below:

That is, the offending element's shape is not (?, 224, 3) (with ? being any non-negative integer), or its ndim is not 3. That is what is giving you the error.

You'll need to fix that to be able to turn your list into a four- (or three-) dimensional array. Without context, it is impossible to say whether you want to drop a dimension from the 3-D items or add one to the 2-D items (in the first case), or change the second or third dimension (in the second case).
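A quick way to locate the offending element(s) is to compare every item's shape against the first one (a minimal sketch; the ragged list below is a stand-in for your own temp_list):

```python
import numpy as np

# Hypothetical ragged list standing in for temp_list:
# two RGB-shaped arrays and one grayscale-shaped offender.
temp_list = [np.zeros((224, 224, 3)), np.zeros((224, 224)), np.zeros((224, 224, 3))]

# Report every index whose shape differs from the first element's shape.
expected = temp_list[0].shape
offenders = [(i, a.shape) for i, a in enumerate(temp_list) if a.shape != expected]
print(offenders)  # -> [(1, (224, 224))]
```

Once you know the indices, you can decide whether to drop, resize, or convert those items.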


Here's an example of the error:

>>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,224))]
>>> np.array(a)
ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)

or, different type of input, but the same error:

>>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,224,13))]
>>> np.array(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)

Alternatively, similar but with a different error message:

>>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,100,3))]
>>> np.array(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: could not broadcast input array from shape (224,224,3) into shape (224)

But the following will work, albeit with different results than (presumably) intended:

>>> a = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((10,224,3))]
>>> np.array(a)
# long output omitted
>>> newa = np.array(a)
>>> newa.shape
(3,)  # oops
>>> newa.dtype
dtype('O')
>>> newa[0].shape
(224, 224, 3)
>>> newa[1].shape
(224, 224, 3)
>>> newa[2].shape
(10, 224, 3)
>>> 

Solution 2 - Python

Indeed, @Evert's answer is perfectly correct. In addition, I'd like to add one more reason you could encounter such an error.

>>> np.array([np.zeros((20,200)),np.zeros((20,200)),np.zeros((20,200))])

This will be perfectly fine. However, this leads to an error:

>>> np.array([np.zeros((20,200)),np.zeros((20,200)),np.zeros((20,201))])

ValueError: could not broadcast input array from shape (20,200) into shape (20)

The numpy arrays within the list must also be the same size.
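As a side note, np.stack requires identical shapes up front and fails with a more explicit message, which can make the mismatch easier to spot than the broadcasting error (a small sketch):

```python
import numpy as np

# Same mismatched list as above: last array has one extra column.
a = [np.zeros((20, 200)), np.zeros((20, 200)), np.zeros((20, 201))]

try:
    np.stack(a)  # requires every array to have exactly the same shape
except ValueError as e:
    print(e)  # message explicitly mentions the shape requirement
```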

Solution 3 - Python

You can convert a numpy.ndarray to object dtype using astype(object).

This will work:

>>> a = [np.zeros((224,224,3)).astype(object), np.zeros((224,224,3)).astype(object), np.zeros((224,224,13)).astype(object)]
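Note that on newer NumPy versions (1.24+), calling np.array on ragged input can raise even with object-dtype elements. If a ragged object array is genuinely what you want, preallocating it with dtype=object avoids any broadcasting attempt entirely (a sketch):

```python
import numpy as np

# Ragged list: the last array has a different third dimension.
a = [np.zeros((224, 224, 3)), np.zeros((224, 224, 3)), np.zeros((224, 224, 13))]

# Preallocate an object array and fill it element by element;
# NumPy never tries to broadcast the items into one block.
x = np.empty(len(a), dtype=object)
for i, arr in enumerate(a):
    x[i] = arr

print(x.shape)     # -> (3,)
print(x[2].shape)  # -> (224, 224, 13)
```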

Solution 4 - Python

I was facing the same problem because some of the images in my data set are grayscale, so I solved my problem by doing this:

    from PIL import Image
    img = Image.open('my_image.jpg').convert('RGB')
    # a line from my program
    # (note: Image.ANTIALIAS was renamed to Image.LANCZOS in newer Pillow versions)
    positive_images_array = np.array([np.array(Image.open(img).convert('RGB').resize((150, 150), Image.ANTIALIAS)) for img in images_in_yes_directory])

Solution 5 - Python

@aravk33's answer is absolutely correct.

I was going through the same problem. I had a data set of 2450 images. I just could not figure out why I was facing this issue.

Check the dimensions of all the images in your training data.

Add the following snippet while appending your image into your list:

if image.shape==(1,512,512):
    trainx.append(image)
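Extending that check to a whole dataset, you can filter by shape and count what gets dropped (a sketch with made-up data; trainx and the expected shape are taken from the snippet above):

```python
import numpy as np

expected_shape = (1, 512, 512)
# Hypothetical mixed dataset: two conforming images, one grayscale outlier.
images = [np.zeros((1, 512, 512)), np.zeros((512, 512)), np.zeros((1, 512, 512))]

trainx = [image for image in images if image.shape == expected_shape]
dropped = len(images) - len(trainx)
print(len(trainx), dropped)  # -> 2 1

x = np.array(trainx)  # now safe: all shapes agree
print(x.shape)        # -> (2, 1, 512, 512)
```

Counting the dropped items is worth doing: if many images fail the check, converting them (as in the other answers) may be better than discarding them.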

Solution 6 - Python

This method does not require modifying the dtype or ravelling your numpy array.

The core idea is:

1. Initialize with one extra row.
2. Convert the list (which now has one more row) to an array.
3. Delete the extra row from the resulting array.

For example:

>>> a = [np.zeros((10,224)), np.zeros((10,))]
>>> np.array(a)
# this will raise error,
ValueError: could not broadcast input array from shape (10,224) into shape (10)

# but below method works
>>> a = [np.zeros((11,224)), np.zeros((10,))]
>>> b = np.array(a)
>>> b[0] = np.delete(b[0],0,0)
>>> print(b.shape,b[0].shape,b[1].shape)
# print result:(2,) (10,224) (10,)

Indeed, it is not necessary to add exactly one extra row: as long as you avoid the shape clash described in @aravk33's and @user707650's answers and delete the extra items later, it will be fine.

Solution 7 - Python

SOLVED - I got the same error from X_test = np.array(X_test): ValueError: could not broadcast input array from shape (50,50,3) into shape (50,50). I printed every image's shape and got output like this:

~

1708 : (50, 50, 3)

1709 : (50, 50)

1710 : (50, 50)

1711 : (50, 50, 3)

1712 : (50, 50, 3)

1713 : (50, 50, 3)

~

which means the data mixed 2-D and 3-D arrays after reading two different image folders and shuffling them

img: the (50, 50) entries are grayscale images and the (50, 50, 3) entries are color images

Adding cv2.IMREAD_GRAYSCALE when reading the images solved the problem

Summary: the image data I wanted to convert into a np array contained images of different dimensions

-> checked the image data

-> found out that there were both 2-D (grayscale) and 3-D (color) images

-> converted the 3-D (color) images to grayscale (2-D)

-> problem solved
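With cv2 the fix is to load everything as grayscale up front, e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE). If the images are already in memory as arrays, the same normalization can be sketched in plain NumPy (the weights below are the standard ITU-R 601 luminance coefficients; the images list is a stand-in for your data):

```python
import numpy as np

# Hypothetical mixed batch: some grayscale (2-D), some color (3-D) images.
images = [np.ones((50, 50)), np.ones((50, 50, 3)), np.ones((50, 50))]

def to_gray(img):
    # Collapse a 3-D RGB image to 2-D with the usual luminance weights;
    # leave images that are already 2-D untouched.
    if img.ndim == 3:
        return img @ np.array([0.299, 0.587, 0.114])
    return img

x = np.array([to_gray(img) for img in images])
print(x.shape)  # -> (3, 50, 50)
```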

Solution 8 - Python

In my case the problem was in my data set: I needed to pre-process my data before further processing, because the images in my data set were in mixed formats (RGB and grayscale), so the dimensions mismatched. I simply followed Mudasir Habib's answer.

from PIL import Image
img = Image.open('my_image.jpg').convert('RGB')

Solution 9 - Python

I totally agree with @mudassir's answer. If you have augmented your dataset, it is highly likely that you get this error. Most augmentation pipelines can apply a grayscale effect, which produces two-dimensional arrays, whereas the original (RGB) pictures are three-dimensional. I was using a Roboflow dataset that was already augmented and had a similar issue. I removed the grayscaling step and it still gave the error; however, once I removed grayscale, hue, and saturation, it worked like a charm. I would suggest you try that too.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        | Original Author    | Original Content on Stackoverflow
Question            | neel               | View Question on Stackoverflow
Solution 1 - Python | user707650         | View Answer on Stackoverflow
Solution 2 - Python | Jagesh Maharjan    | View Answer on Stackoverflow
Solution 3 - Python | Yinjie Gao         | View Answer on Stackoverflow
Solution 4 - Python | Mudasir Habib      | View Answer on Stackoverflow
Solution 5 - Python | Naman Bansal       | View Answer on Stackoverflow
Solution 6 - Python | Wang Wei           | View Answer on Stackoverflow
Solution 7 - Python | Zarina Abdibaitova | View Answer on Stackoverflow
Solution 8 - Python | codingPhobia       | View Answer on Stackoverflow
Solution 9 - Python | Hassan Shahzad     | View Answer on Stackoverflow