Pytorch reshape tensor dimension

Python, Numpy, Deep Learning, Pytorch, Tensor

Python Problem Overview


For example, I have a 1D vector with dimension (5). I would like to reshape it into a 2D matrix of shape (1, 5).

Here is how I do it with numpy:

>>> import numpy as np
>>> a = np.array([1,2,3,4,5])
>>> a.shape
(5,)
>>> a = np.reshape(a, (1,5))
>>> a.shape
(1, 5)
>>> a
array([[1, 2, 3, 4, 5]])
>>> 

But how can I do that with a PyTorch Tensor (and Variable)? I don't want to switch back to numpy and then to a Torch Variable again, because that would lose the backpropagation information.

Here is what I have in PyTorch:

>>> import torch
>>> from torch.autograd import Variable
>>> a = torch.Tensor([1,2,3,4,5])
>>> a

 1
 2
 3
 4
 5
[torch.FloatTensor of size 5]

>>> a.size()
(5L,)
>>> a_var = Variable(a)
>>> a_var.size()
(5L,)
.....do some calculation in forward function
>>> a_var.size()
(5L,)

Now I want its size to be (1, 5). How can I resize or reshape the dimensions of a PyTorch tensor in a Variable without losing the grad information? (I will feed it into another model before calling backward.)

Python Solutions


Solution 1 - Python

Use torch.unsqueeze(input, dim, out=None)

>>> import torch
>>> a = torch.Tensor([1,2,3,4,5])
>>> a

 1
 2
 3
 4
 5
[torch.FloatTensor of size 5]

>>> a = a.unsqueeze(0)
>>> a

 1  2  3  4  5
[torch.FloatTensor of size 1x5]

Solution 2 - Python

You might use:

a.view(1,5)
Out: 

 1  2  3  4  5
[torch.FloatTensor of size 1x5]

Solution 3 - Python

There are multiple ways of reshaping a PyTorch tensor. You can apply these methods on a tensor of any dimensionality.

Let's start with a 2-dimensional 2 x 3 tensor:

x = torch.Tensor(2, 3)
print(x.shape)
# torch.Size([2, 3])

To add some robustness to this problem, let's reshape the 2 x 3 tensor by adding a new dimension at the front and another dimension in the middle, producing a 1 x 2 x 1 x 3 tensor.

Approach 1: add dimension with None

Use NumPy-style insertion of None (a.k.a. np.newaxis) to add dimensions anywhere you want.

print(x.shape)
# torch.Size([2, 3])

y = x[None, :, None, :] # Add new dimensions at positions 0 and 2.
print(y.shape)
# torch.Size([1, 2, 1, 3])

Approach 2: unsqueeze

Use torch.Tensor.unsqueeze(i) (a.k.a. torch.unsqueeze(tensor, i) or the in-place version unsqueeze_()) to add a new dimension at the i'th position. The returned tensor shares the same data as the original tensor. In this example, we can use unsqueeze() twice to add the two new dimensions.

print(x.shape)
# torch.Size([2, 3])

# Use unsqueeze twice.
y = x.unsqueeze(0) # Add new dimension at position 0
print(y.shape)
# torch.Size([1, 2, 3])

y = y.unsqueeze(2) # Add new dimension at position 2
print(y.shape)
# torch.Size([1, 2, 1, 3])

In practice with PyTorch, adding an extra dimension for the batch may be important, so you may often see unsqueeze(0).
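
For instance, a minimal sketch (the 3 x 32 x 32 image tensor here is just an illustrative assumption):

import torch

img = torch.randn(3, 32, 32)   # a single image: (channels, height, width)
batch = img.unsqueeze(0)       # most layers expect a leading batch dimension
print(batch.shape)
# torch.Size([1, 3, 32, 32])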

Approach 3: view

Use torch.Tensor.view(*shape) to specify all the dimensions. The returned tensor shares the same data as the original tensor.

print(x.shape)
# torch.Size([2, 3])

y = x.view(1, 2, 1, 3)
print(y.shape)
# torch.Size([1, 2, 1, 3])

Approach 4: reshape

Use torch.Tensor.reshape(*shape) (aka torch.reshape(tensor, shapetuple)) to specify all the dimensions. If the original data is contiguous and has the same stride, the returned tensor will be a view of input (sharing the same data), otherwise it will be a copy. This function is similar to the NumPy reshape() function in that it lets you define all the dimensions and can return either a view or a copy.

print(x.shape)
# torch.Size([2, 3])

y = x.reshape(1, 2, 1, 3)
print(y.shape)
# torch.Size([1, 2, 1, 3])

Furthermore, from the O'Reilly 2019 book Programming PyTorch for Deep Learning, the author writes:

Now you might wonder what the difference is between view() and reshape(). The answer is that view() operates as a view on the original tensor, so if the underlying data is changed, the view will change too (and vice versa). However, view() can throw errors if the required view is not contiguous; that is, it doesn’t share the same block of memory it would occupy if a new tensor of the required shape was created from scratch. If this happens, you have to call tensor.contiguous() before you can use view(). However, reshape() does all that behind the scenes, so in general, I recommend using reshape() rather than view().
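
A small sketch of that difference, using a transpose to produce a non-contiguous view:

xt = torch.arange(6).reshape(2, 3)
t = xt.t()                   # the transpose is a non-contiguous view
print(t.is_contiguous())
# False

# t.view(6) would raise a RuntimeError here, because the data is not contiguous.
y = t.contiguous().view(6)   # works after making a contiguous copy
z = t.reshape(6)             # reshape() does the copy behind the scenes
print(y.shape, z.shape)
# torch.Size([6]) torch.Size([6])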

Approach 5: resize_

Use the in-place function torch.Tensor.resize_(*sizes) to modify the original tensor. The documentation states:

WARNING. This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().

print(x.shape)
# torch.Size([2, 3])

x.resize_(1, 2, 1, 3)
print(x.shape)
# torch.Size([1, 2, 1, 3])

My observations

If you want to add just one dimension (e.g. to add a 0th dimension for the batch), then use unsqueeze(0). If you want to totally change the dimensionality, use reshape().

See also:

What's the difference between reshape and view in pytorch?

What is the difference between view() and unsqueeze()?

In PyTorch 0.4, is it recommended to use reshape than view when it is possible?

Solution 4 - Python

For in-place modification of the shape of the tensor, you should use tensor.resize_():

In [23]: a = torch.Tensor([1, 2, 3, 4, 5])

In [24]: a.shape
Out[24]: torch.Size([5])


# tensor.resize_((new_shape))
In [25]: a.resize_((1,5))
Out[25]: 

 1  2  3  4  5
[torch.FloatTensor of size 1x5]

In [26]: a.shape
Out[26]: torch.Size([1, 5])

In PyTorch, if there's an underscore at the end of an operation (like tensor.resize_()) then that operation does in-place modification to the original tensor.
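
For example, a small sketch of the convention (not specific to resize_):

t = torch.ones(3)
u = t.add(1)    # out-of-place: returns a new tensor, t is unchanged
t.add_(1)       # in-place: modifies t itself
print(t)        # tensor([2., 2., 2.])
print(u)        # tensor([2., 2., 2.])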


Also, you can simply use np.newaxis in a torch Tensor to increase the dimension. Here is an example:

In [34]: list_ = range(5)
In [35]: a = torch.Tensor(list_)
In [36]: a.shape
Out[36]: torch.Size([5])

In [37]: new_a = a[np.newaxis, :]
In [38]: new_a.shape
Out[38]: torch.Size([1, 5])

Solution 5 - Python

Or you can use this; the '-1' means you don't have to specify the number of elements explicitly:

In [3]: a.view(1,-1)
Out[3]:

 1  2  3  4  5
[torch.FloatTensor of size 1x5]
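
For instance, with the same 5-element tensor, the inferred dimension adapts to whatever else you specify (a small illustrative sketch):

In [4]: a.view(-1, 1).shape
Out[4]: torch.Size([5, 1])

In [5]: a.view(-1).shape
Out[5]: torch.Size([5])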

Solution 6 - Python

This question has been thoroughly answered already, but I want to add, for the less experienced Python developers, that you might find the * operator helpful in conjunction with view().

For example, if you have a particular tensor size that you want a different tensor of data to conform to, you might try:

import torch
from torch.autograd import Variable

img = Variable(torch.randn(20, 30, 3)) # tensor with the goal shape
flat_size = 20 * 30 * 3
X = Variable(torch.randn(50, flat_size)) # data tensor

X = X.view(-1, *img.size()) # sweet maneuver
print(X.size()) # size is (50, 20, 30, 3)

This works with numpy shape too:

img = np.random.randn(20, 30, 3)
flat_size = 20 * 30 * 3
X = Variable(torch.randn(50, flat_size))
X = X.view(-1, *img.shape)
print(X.size()) # size is (50, 20, 30, 3)

Solution 7 - Python

torch.reshape() was made to mirror the numpy reshape method.

It came after view() and resize_(), and it is part of the torch namespace (it appears in dir(torch)), whereas view() and resize_() are tensor methods only.

import torch
x=torch.arange(24)
print(x, x.shape)
x_view = x.view(1,2,3,4) # works on is_contiguous() tensor
print(x_view.shape)
x_reshaped = x.reshape(1,2,3,4) # works on any tensor
print(x_reshaped.shape)
x_reshaped2 = torch.reshape(x_reshaped, (-1,)) # part of torch package, while view() and resize_() are not
print(x_reshaped2.shape)

Out:

tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
        18, 19, 20, 21, 22, 23]) torch.Size([24])
torch.Size([1, 2, 3, 4])
torch.Size([1, 2, 3, 4])
torch.Size([24])

But did you know that reshape() can also work as a replacement for squeeze() and unsqueeze()?

x = torch.tensor([1, 2, 3, 4])
print(x.shape)
x1 = torch.unsqueeze(x, 0)
print(x1.shape)
x2 = torch.unsqueeze(x1, 1)
print(x2.shape)
x3=x.reshape(1,1,4)
print(x3.shape)
x4=x.reshape(4)
print(x4.shape)
x5=x3.squeeze()
print(x5.shape)

Out:

torch.Size([4])
torch.Size([1, 4])
torch.Size([1, 1, 4])
torch.Size([1, 1, 4])
torch.Size([4])
torch.Size([4])

Solution 8 - Python

>>> import torch
>>> a = torch.Tensor([1,2,3,4,5])
>>> a.size()
torch.Size([5])
# use view to reshape

>>> b = a.view(1, a.shape[0])
>>> b
tensor([[1., 2., 3., 4., 5.]])
>>> b.size()
torch.Size([1, 5])
>>> b.type()
'torch.FloatTensor'

Solution 9 - Python

As far as I know, the best way to reshape tensors is to use einops. It solves various reshape problems by providing a simple and elegant function. In your situation, the code could be written as

from einops import rearrange
ans = rearrange(tensor,'h -> 1 h')

I highly recommend you try it.

BTW, you can use it with pytorch/tensorflow/numpy and many other libraries.
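
For instance, the same call on a NumPy array (a minimal sketch):

import numpy as np
from einops import rearrange

a = np.array([1, 2, 3, 4, 5])
print(rearrange(a, 'h -> 1 h').shape)
# (1, 5)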

Solution 10 - Python

Assume the following code:

import torch
import numpy as np
a = torch.tensor([1, 2, 3, 4, 5])

The following three calls have the exact same effect:

res_1 = a.unsqueeze(0)
res_2 = a.view(1, 5)
res_3 = a[np.newaxis,:]
res_1.shape == res_2.shape == res_3.shape == (1,5)  # Returns true

Notice that for any of the resulting tensors, if you modify the data in them, you are also modifying the data in a, because they don't have a copy of the data, but reference the original data in a.

res_1[0,0] = 2
a[0] == res_1[0,0] == 2  # Returns true

The other way of doing it would be to use the resize_ in-place operation:

a.shape == res_1.shape  # Returns false
a.resize_((1, 5))
a.shape == res_1.shape  # Returns true

Be careful of using resize_ or other in-place operation with autograd. See the following discussion: https://pytorch.org/docs/stable/notes/autograd.html#in-place-operations-with-autograd
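
A small sketch of the kind of failure those notes warn about (the exact error message may differ between versions):

a = torch.ones(3, requires_grad=True)
b = a * 2
c = b.sin()           # sin's backward needs the original values of b
b.add_(1)             # in-place change bumps b's version counter
c.sum().backward()    # RuntimeError: a variable needed for gradient computation
                      # has been modified by an inplace operation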

Solution 11 - Python

import torch
t = torch.ones((2, 3, 4))
t.size()
>>torch.Size([2, 3, 4])
a = t.view(-1,t.size()[1]*t.size()[2])
a.size()
>>torch.Size([2, 12])
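
If the goal is just to keep the first dimension and flatten the rest, torch.flatten should, as far as I know, give the same result:

b = t.flatten(start_dim=1)  # keep dim 0, flatten dims 1 and 2
b.size()
>>torch.Size([2, 12])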

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         | Original Author       | Original Content on Stackoverflow
Question             | Haha TTpro            | View Question on Stackoverflow
Solution 1 - Python  | Haha TTpro            | View Answer on Stackoverflow
Solution 2 - Python  | Lelik                 | View Answer on Stackoverflow
Solution 3 - Python  | stackoverflowuser2010 | View Answer on Stackoverflow
Solution 4 - Python  | kmario23              | View Answer on Stackoverflow
Solution 5 - Python  | Mou Cai               | View Answer on Stackoverflow
Solution 6 - Python  | saetch_g              | View Answer on Stackoverflow
Solution 7 - Python  | prosti                | View Answer on Stackoverflow
Solution 8 - Python  | hindamosh             | View Answer on Stackoverflow
Solution 9 - Python  | e_yi                  | View Answer on Stackoverflow
Solution 10 - Python | Jadiel de Armas       | View Answer on Stackoverflow
Solution 11 - Python | Kamran Abdi           | View Answer on Stackoverflow