Note for Neural Networks (3)

Implementing the gradient descent step. The cost over the full training set of \(m\) samples is the average of the per-sample costs, \(J(W, b) = \frac{1}{m}\sum^{m}_{z=1}J(W, b, x^{(z)}, y^{(z)})\). We want to vectorize this process:
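Because the cost is an average over the samples, the same holds for its gradient: the derivative with respect to any weight or bias is the average of the per-sample derivatives (here \(W^{(l)}\) and \(b^{(l)}\) denote the weights and bias of layer \(l\), a notation assumed from [1]):

\[
\frac{\partial J(W, b)}{\partial W^{(l)}} = \frac{1}{m}\sum^{m}_{z=1}\frac{\partial J(W, b, x^{(z)}, y^{(z)})}{\partial W^{(l)}}, \qquad
\frac{\partial J(W, b)}{\partial b^{(l)}} = \frac{1}{m}\sum^{m}_{z=1}\frac{\partial J(W, b, x^{(z)}, y^{(z)})}{\partial b^{(l)}}
\]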


Since we need to run these operations for every training sample, we are effectively processing the data in batches (this is batch gradient descent).

We need to sum up all the partial-derivative terms across the samples before updating.
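A sketch of this accumulation step, assuming the standard backpropagation notation from [1]: \(\delta^{(l+1)}\) is the error term of layer \(l+1\), \(h^{(l)}\) is the activation (output) of layer \(l\), and \(\Delta W^{(l)}\), \(\Delta b^{(l)}\) are running sums reset to zero at the start of each iteration. For each sample we add its contribution:

\[
\Delta W^{(l)} = \Delta W^{(l)} + \delta^{(l+1)} (h^{(l)})^{T}, \qquad
\Delta b^{(l)} = \Delta b^{(l)} + \delta^{(l+1)}
\]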


Finally, we update the weights and biases using the averaged gradients.

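A sketch of the update, assuming plain batch gradient descent with learning rate \(\alpha\) and the accumulators \(\Delta W^{(l)}\), \(\Delta b^{(l)}\) from above:

\[
W^{(l)} = W^{(l)} - \alpha \left[ \frac{1}{m} \Delta W^{(l)} \right], \qquad
b^{(l)} = b^{(l)} - \alpha \left[ \frac{1}{m} \Delta b^{(l)} \right]
\]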

To conclude, the whole algorithm puts these steps together: initialise the weights randomly, then on each iteration loop over the training set, accumulating \(\Delta W^{(l)}\) and \(\Delta b^{(l)}\) with a forward and backward pass per sample, and finally apply the averaged update.

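A minimal Python/NumPy sketch of this training loop, assuming a fully connected network with sigmoid activations and a squared-error cost; the names (`train`, `sigmoid_deriv`, `dW`, `db`, ...) are illustrative and not the exact implementation from [1]:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    return sigmoid(x) * (1.0 - sigmoid(x))

def train(X, Y, layer_sizes, alpha=0.5, n_iter=1000):
    """Batch gradient descent for a fully connected network.

    X: (m, n_in) inputs, Y: (m, n_out) targets,
    layer_sizes: e.g. [n_in, n_hidden, n_out].
    """
    rng = np.random.default_rng(0)
    # Randomly initialise weights and biases; layer l has size layer_sizes[l-1].
    W = {l: rng.normal(scale=0.1, size=(layer_sizes[l], layer_sizes[l - 1]))
         for l in range(1, len(layer_sizes))}
    b = {l: np.zeros(layer_sizes[l]) for l in range(1, len(layer_sizes))}
    m = X.shape[0]
    n_layers = len(layer_sizes)

    for _ in range(n_iter):
        # Reset the accumulators Delta W and Delta b at the start of each iteration.
        dW = {l: np.zeros_like(W[l]) for l in W}
        db = {l: np.zeros_like(b[l]) for l in b}

        for z in range(m):
            # Forward pass: store pre-activations zl and activations h per layer.
            h = {1: X[z]}
            zl = {}
            for l in range(1, n_layers):
                zl[l + 1] = W[l] @ h[l] + b[l]
                h[l + 1] = sigmoid(zl[l + 1])

            # Backward pass: output-layer error, then backpropagate it.
            delta = {n_layers: -(Y[z] - h[n_layers]) * sigmoid_deriv(zl[n_layers])}
            for l in range(n_layers - 1, 1, -1):
                delta[l] = (W[l].T @ delta[l + 1]) * sigmoid_deriv(zl[l])

            # Accumulate this sample's partial-derivative terms.
            for l in range(1, n_layers):
                dW[l] += np.outer(delta[l + 1], h[l])
                db[l] += delta[l + 1]

        # Gradient descent update with the averaged gradients.
        for l in range(1, n_layers):
            W[l] -= alpha * (dW[l] / m)
            b[l] -= alpha * (db[l] / m)

    return W, b
```

For example, `train(X, Y, [2, 3, 1])` would fit a 2-3-1 network to inputs `X` of shape `(m, 2)` and targets `Y` of shape `(m, 1)`.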

The next section will give a detailed explanation of the implementation.

Reference:

[1]. https://adventuresinmachinelearning.com/neural-networks-tutorial/