Question related to sigmoid function


I have been working through the online book at http://neuralnetworksanddeeplearning.com/chap1.html

I have come across this approximation for the output of a sigmoid neuron with input $\sum_j w_j x_j + b$:

$\Delta \text{output} \approx \sum_j \frac{\partial\, \text{output}}{\partial w_j} \Delta w_j + \frac{\partial\, \text{output}}{\partial b} \Delta b$

I understand why the partial derivatives appear: they give the rate of change of the output with respect to each weight and the bias.

But why are these partial derivatives multiplied by $\Delta w_j$ and $\Delta b$ in the approximation?

What approximation method is being used to obtain the equation above?
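To make the question concrete, here is a small numeric sketch I put together (Python, assuming a single-input sigmoid neuron with made-up values for $w$, $b$, $x$, $\Delta w$, and $\Delta b$), comparing the exact change in the output to the right-hand side of the approximation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# single-input sigmoid neuron: output = sigmoid(w*x + b)
w, b, x = 0.5, -0.2, 1.0
dw, db = 0.01, -0.02  # small changes delta-w and delta-b

z = w * x + b
out = sigmoid(z)

# partial derivatives of the output:
#   d(output)/dw = sigmoid'(z) * x,  d(output)/db = sigmoid'(z)
# where sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
sp = out * (1.0 - out)
linear_approx = sp * x * dw + sp * db

# exact change in output after perturbing w and b
exact = sigmoid((w + dw) * x + (b + db)) - out

print(exact, linear_approx)  # the two values are very close for small dw, db
```

For small $\Delta w$ and $\Delta b$ the two printed numbers nearly coincide, which is what the $\approx$ in the equation is expressing.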

Thanks in advance