How to update the bias in neural network backpropagation?

Machine Learning, Math, Neural Network

Machine Learning Problem Overview


Could someone please explain to me how to update the bias throughout backpropagation?

I've read quite a few books, but can't find bias updating!

I understand that bias is an extra input of 1 with a weight attached to it (for each neuron). There must be a formula.

Machine Learning Solutions


Solution 1 - Machine Learning

Following the notation of Rojas 1996, chapter 7, backpropagation computes partial derivatives of the error function E (aka cost, aka loss)

∂E/∂w[i,j] = delta[j] * o[i]

where w[i,j] is the weight of the connection between neurons i and j, j being one layer higher in the network than i, and o[i] is the output (activation) of i (in the case of the "input layer", that's just the value of feature i in the training sample under consideration). How to determine delta is given in any textbook and depends on the activation function, so I won't repeat it here.

These values can then be used in weight updates, e.g.

// update rule for vanilla online gradient descent
w[i,j] -= gamma * o[i] * delta[j]

where gamma is the learning rate.

The rule for bias weights is very similar, except that there's no input from a previous layer. Instead, bias is (conceptually) caused by input from a neuron with a fixed activation of 1. So, the update rule for bias weights is

bias[j] -= gamma_bias * 1 * delta[j]

where bias[j] is the weight of the bias on neuron j, the multiplication with 1 can obviously be omitted, and gamma_bias may be set to gamma or to a different value. If I recall correctly, lower values are preferred, though I'm not sure about the theoretical justification of that.
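As a concrete illustration (a minimal sketch added here, not part of the original answer), the following Python snippet applies exactly these two update rules to a single layer, assuming a sigmoid activation and a squared-error loss so that delta can be written out explicitly; the array shapes and learning rates are arbitrary example choices.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One online gradient-descent step for a single sigmoid layer.
# W[i, j] is the weight from neuron i (previous layer) to neuron j,
# o_prev holds the activations o[i], and bias[j] is the bias weight of neuron j.
def sgd_step(W, bias, o_prev, target, gamma=0.1, gamma_bias=0.05):
    z = o_prev @ W + bias                     # net input of layer j
    o = sigmoid(z)                            # output (activation) of layer j
    # delta[j] for a sigmoid output layer with E = 0.5 * (o - target)^2
    delta = (o - target) * o * (1.0 - o)
    W -= gamma * np.outer(o_prev, delta)      # w[i,j] -= gamma * o[i] * delta[j]
    bias -= gamma_bias * delta                # bias[j] -= gamma_bias * 1 * delta[j]
    return W, bias

Note that the bias update uses exactly the same delta as the weight updates; the only difference is that the "input" multiplying it is the constant 1.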

Solution 2 - Machine Learning

The amount you change each individual weight and bias will be the partial derivative of your cost function in relation to each individual weight and each individual bias.

∂C/∂(index of bias in network)

Since your cost function probably doesn't explicitly depend on individual weights and biases (the cost might equal (network output - expected output)^2, for example), you'll need to relate the partial derivative of each weight and bias to something you do know, i.e. the activation values (outputs) of neurons. Here's a great guide to doing this:

https://medium.com/@erikhallstrm/backpropagation-from-the-beginning-77356edf427d

This guide states how to do these things clearly, but can sometimes be lacking in explanation. I found it very helpful to read chapters 1 and 2 of this book as I read the guide linked above:

http://neuralnetworksanddeeplearning.com/chap1.html (provides essential background for the answer to your question)

http://neuralnetworksanddeeplearning.com/chap2.html (answers your question)

Basically, biases are updated in the same way that weights are updated: a change is determined based on the gradient of the cost function at a multi-dimensional point.

Think of the problem your network is trying to solve as a landscape of multi-dimensional hills and valleys (gradients). This landscape is a graphical representation of how your cost changes as the weights and biases change. The goal of a neural network is to reach the lowest point in this landscape, thereby finding the smallest cost and minimizing error. If you imagine your network as a traveler trying to reach the bottom of these gradients (i.e. gradient descent), then the amount you will change each weight (and bias) by is related to the slope of the incline (gradient of the function) that the traveler is currently climbing down. The exact location of the traveler is given by a multi-dimensional coordinate point (weight1, weight2, weight3, ..., weight_n), where each bias can be thought of as another kind of weight. Thinking of the weights/biases of a network as the variables of the network's cost function makes it clear that ∂C/∂(index of bias in network) must be used.
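To see concretely why the bias gets its own partial derivative, here is a small numerical check (an illustration added here, with arbitrary example values, not taken from the linked guide): for a single sigmoid neuron with cost C = (output - expected)^2, the chain-rule expression for ∂C/∂b agrees with a finite-difference estimate.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, b, x, expected):
    return (sigmoid(np.dot(w, x) + b) - expected) ** 2

w = np.array([0.4, -0.2])        # example weights
x = np.array([1.5, 0.7])         # example inputs
b, expected = 0.1, 1.0           # example bias and target

# chain rule: dC/db = 2*(o - expected) * o*(1 - o) * dz/db, with dz/db = 1
o = sigmoid(np.dot(w, x) + b)
analytic = 2 * (o - expected) * o * (1 - o)

# central finite difference as an independent check
eps = 1e-6
numeric = (cost(w, b + eps, x, expected) - cost(w, b - eps, x, expected)) / (2 * eps)

print(analytic, numeric)         # the two values should agree closely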

Solution 3 - Machine Learning

I understand that the function of the bias is to level-adjust the input values. Below is what happens inside the neuron. The activation function of course produces the final output, but it is left out here for clarity.

  • O = W1 I1 + W2 I2 + W3 I3

In a real neuron, something similar already happens at the synapses: the input data is level-adjusted with the average of the samples and scaled with the deviation of the samples. Thus the input data is normalized, and with equal weights it will have the same effect. The normalized In is calculated from the raw data in (n is the index).

  • Bn = average(in); Sn = 1/stdev(in); In = (in + Bn) Sn

However, this does not need to be performed separately, because the neuron's weights and bias can perform the same function. When you substitute In with in, you get the new formula

  • O = w1 i1 + w2 i2 + w3 i3 + wbs

The last term wbs is the bias, and the new weights wn are

  • wbs = W1 B1 S1 + W2 B2 S2 + W3 B3 S3
  • wn = Wn Sn

So a bias exists, and it will/should be adjusted automatically by backpropagation.
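To make the substitution concrete, here is a quick numerical check (an illustration added here, with arbitrary example values standing in for the averages Bn and scales Sn): folding the normalization into new weights wn = Wn Sn and a bias wbs reproduces the output computed on the normalized inputs.

import numpy as np

W = np.array([0.5, -1.2, 0.8])   # original weights Wn
B = np.array([0.3, 0.1, -0.4])   # example offsets (standing in for Bn = average(in))
S = np.array([1.5, 0.7, 2.0])    # example scales (standing in for Sn = 1/stdev(in))
i = np.array([2.0, -1.0, 0.5])   # one raw input sample in

# Original form: normalize first, then apply the weights
I = (i + B) * S
O_normalized = np.dot(W, I)

# Folded form: new weights wn = Wn Sn and bias wbs = sum over n of Wn Bn Sn
wn = W * S
wbs = np.sum(W * B * S)
O_folded = np.dot(wn, i) + wbs

print(O_normalized, O_folded)    # identical up to floating-point rounding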

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Katya | View Question on Stackoverflow
Solution 1 - Machine Learning | Fred Foo | View Answer on Stackoverflow
Solution 2 - Machine Learning | H Froedge | View Answer on Stackoverflow
Solution 3 - Machine Learning | marlumme | View Answer on Stackoverflow