# Note for Neural Networks (3)

Implementing the gradient descent step starts from the overall cost, which is the average of the per-example losses over the $m$ training examples, $J(w, b) = \frac{1}{m}\sum^{m}_{i=1} L(w, b, x^{(i)}, y^{(i)})$, where $L$ is the loss on a single example $(x^{(i)}, y^{(i)})$. We then vectorize the process.
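As a minimal sketch of this averaged cost, assuming a single-neuron logistic-regression model with cross-entropy loss (the note does not pin down the model, so `sigmoid` and the loss form here are assumptions):

```python
import numpy as np

def sigmoid(z):
    # Logistic activation (assumed model, not specified in the note).
    return 1.0 / (1.0 + np.exp(-z))

def compute_cost(w, b, X, Y):
    """Average cross-entropy cost J(w, b) over the m training examples.
    X has shape (n_features, m); Y has shape (1, m)."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)  # predictions for all m examples at once
    # Sum the per-example losses L, then divide by m.
    return float(-np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m)
```

Note that `w.T @ X` handles all $m$ examples in one matrix product, which is the vectorization the note refers to.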

As we run through the operations above, we are actually processing the data in batches.

We need to sum up all the partial-derivative terms across the batch.
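A hedged sketch of that summation, again assuming the logistic-regression model above: the matrix products below sum the per-example partial derivatives over the whole batch in one shot, then divide by $m$.

```python
import numpy as np

def compute_gradients(w, b, X, Y):
    """Vectorized accumulation of per-example partial derivatives,
    averaged over the batch (logistic-regression assumption).
    X has shape (n_features, m); Y has shape (1, m)."""
    m = X.shape[1]
    A = 1.0 / (1.0 + np.exp(-(w.T @ X + b)))  # forward pass, all examples
    dZ = A - Y                                 # per-example error terms
    dw = (X @ dZ.T) / m   # matrix product sums over examples, then averages
    db = float(np.sum(dZ)) / m
    return dw, db
```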

Finally, we update the parameters.

To conclude, the whole algorithm repeats three steps: run a forward pass over the batch, sum up the partial-derivative terms, and update the parameters.
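The three steps above can be sketched as one training loop; this is an illustrative sketch under the same logistic-regression assumption, with `alpha` (the learning rate) and `num_iters` as hypothetical hyperparameters not given in the note:

```python
import numpy as np

def train(X, Y, alpha=0.1, num_iters=100):
    """Batch gradient descent: repeat (forward pass, gradient sum, update).
    X has shape (n_features, m); Y has shape (1, m)."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(num_iters):
        A = 1.0 / (1.0 + np.exp(-(w.T @ X + b)))  # forward pass over the batch
        dZ = A - Y
        dw = (X @ dZ.T) / m                        # summed, averaged gradients
        db = float(np.sum(dZ)) / m
        w -= alpha * dw                            # gradient descent update
        b -= alpha * db
    return w, b
```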

The next section will give a detailed explanation of the implementation.
