Keras Conv2D and input channels

Python, Keras

Python Problem Overview


The Keras layer documentation specifies the input and output sizes for convolutional layers: https://keras.io/layers/convolutional/

Input shape: (samples, channels, rows, cols)

Output shape: (samples, filters, new_rows, new_cols)

And the kernel size is a spatial parameter, i.e. it determines only width and height.

So an input with c channels will yield an output with filters channels, regardless of the value of c. It must therefore apply a 2D convolution with a spatial height × width filter and then somehow aggregate the results across channels for each learned filter.

What is this aggregation operator? Is it a summation across channels? Can I control it? I couldn't find any information in the Keras documentation.

Thanks.

Python Solutions


Solution 1 - Python

It might be confusing that it is called a Conv2D layer (it was to me, which is why I came looking for this answer), because as Nilesh Birari commented:

> I guess you are missing its 3D kernel [width, height, depth]. So the result is summation across channels.

Perhaps the 2D stems from the fact that the kernel only slides along two dimensions; the third dimension is fixed and determined by the number of input channels (the input depth).
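You can see this 3D kernel directly by building a small Conv2D layer and inspecting its weights. A minimal sketch (the 32×32 input size, the 5 input channels and the 7 filters are arbitrary values chosen only for illustration):

from keras.layers import Input, Conv2D
from keras.models import Model

x = Input((32, 32, 5))                   # 5 input channels
y = Conv2D(filters=7, kernel_size=3)(x)  # 7 learned filters, 3x3 spatial kernel
m = Model(x, y)

kernel, bias = m.layers[1].get_weights()
kernel.shape  # (3, 3, 5, 7): height, width, input channels, filters
bias.shape    # (7,)

The kernel's third dimension always equals the number of input channels, which is why only the two spatial dimensions appear in the layer's name.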

For a more elaborate explanation, read https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/

I plucked an illustrative image from there:

[Illustration: kernel depth]

Solution 2 - Python

I was also wondering this, and found another answer here, where it is stated (emphasis mine):

> Maybe the most tangible example of a multi-channel input is when you have a color image which has 3 RGB channels. Let's get it to a convolution layer with 3 input channels and 1 output channel. (...) What it does is that it calculates the convolution of each filter with its corresponding input channel (...). The stride of all channels are the same, so they output matrices with the same size. Now, it sums up all matrices and output a single matrix which is the only channel at the output of the convolution layer.

[Illustration: each input channel is convolved with its own 2D kernel and the resulting matrices are summed into a single output channel]
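As a rough sketch of what the quoted answer describes, here is a plain NumPy version of a stride-1, no-padding, single-output-channel convolution: each channel is convolved with its own 2D kernel slice and the per-channel results are summed. The array sizes and values below are arbitrary and only meant to make the mechanics concrete:

import numpy as np

def conv2d_one_channel(x, k):
    # valid cross-correlation of a single channel with a single 2D kernel
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=(4, 4, 3)).astype(float)         # H x W x C input
kernels = rng.integers(-1, 2, size=(3, 3, 3)).astype(float)  # kh x kw x C kernel

# one 2D convolution per channel, then a sum across channels
per_channel = [conv2d_one_channel(x[..., c], kernels[..., c]) for c in range(3)]
output = np.sum(per_channel, axis=0)  # a single 2x2 output channel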

Notice that the weights of the convolution kernel are different for each channel. They are then iteratively adjusted during back-propagation, e.g. by gradient-descent-based algorithms such as stochastic gradient descent (SGD).

Here is a more technical answer from the TensorFlow API documentation.

Solution 3 - Python

I also needed to convince myself, so I ran a simple example with a 3×3 RGB image.

# red    # green        # blue
1 1 1    100 100 100    10000 10000 10000
1 1 1    100 100 100    10000 10000 10000    
1 1 1    100 100 100    10000 10000 10000

The filter is initialised to ones:

1 1
1 1

[Illustration: the three 3×3 input channels, each with its own 2×2 filter of ones]

I have also set the convolution to have these properties:

  • no padding
  • strides = 1
  • ReLU activation function
  • bias initialised to 0

We would expect the (aggregated) output to be (each 2×2 window sums four 1s, four 100s and four 10000s, i.e. 4·1 + 4·100 + 4·10000 = 40404):

40404 40404
40404 40404

Also, from the illustration above, the number of parameters is

3 kernel slices (one for each channel) × 4 weights + 1 bias (not shown) = 13 parameters
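The same count follows directly from the kernel dimensions; as a quick sanity check:

in_channels, kernel_h, kernel_w, n_filters = 3, 2, 2, 1
in_channels * kernel_h * kernel_w * n_filters + n_filters  # 12 weights + 1 bias = 13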


Here's the code.

Import modules:

import numpy as np
from keras.layers import Input, Conv2D
from keras.models import Model

Create the red, green and blue channels:

red   = np.array([1]*9).reshape((3,3))
green = np.array([100]*9).reshape((3,3))
blue  = np.array([10000]*9).reshape((3,3))

Stack the channels to form an RGB image:

img = np.stack([red, green, blue], axis=-1)
img = np.expand_dims(img, axis=0)
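The stacked array should now have the batch dimension first and the channels last, matching Keras' default channels_last data format:

img.shape  # (1, 3, 3, 3): (samples, rows, cols, channels)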

Create a model that just does a Conv2D convolution:

inputs = Input((3,3,3))
conv = Conv2D(filters=1,
              strides=1,
              padding='valid',
              activation='relu',
              kernel_size=2,
              kernel_initializer='ones',
              bias_initializer='zeros')(inputs)
model = Model(inputs, conv)
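As a side check, inspecting the layer's weights shows the 3D kernel discussed in Solution 1, with one 2×2 slice per input channel all feeding the single filter:

kernel, bias = model.layers[1].get_weights()
kernel.shape  # (2, 2, 3, 1): height, width, input channels, filters
bias.shape    # (1,)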

Input the image in the model:

model.predict(img)
# array([[[[40404.],
#          [40404.]],

#         [[40404.],
#          [40404.]]]], dtype=float32)
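Since every kernel weight is 1 and the bias is 0, each output value is simply the sum of a 2×2 window taken across all three channels, which can be cross-checked directly:

np.sum(img[0, :2, :2, :])  # 4*1 + 4*100 + 4*10000 = 40404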

Run a summary to get the number of params:

model.summary()

[model.summary() output showing Total params: 13]
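Equivalently, the parameter count can be read off programmatically:

model.count_params()  # 13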

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: yoki (View Question on Stackoverflow)
Solution 1 - Python: noio (View Answer on Stackoverflow)
Solution 2 - Python: Alaroff (View Answer on Stackoverflow)
Solution 3 - Python: remykarem (View Answer on Stackoverflow)