What's the difference between y(x) and a in the cost function from neuralnetworksanddeeplearning.com book


I'm reading Michael Nielsen's book Neural Networks and Deep Learning at http://neuralnetworksanddeeplearning.com/chap1.html#learning_with_gradient_descent

The definition of the cost function is:

\begin{eqnarray} C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x) - a\|^2. \tag{6}\end{eqnarray}

Where:

  • y(x) is the output from the network for all training inputs x
  • a is the vector of outputs from the network when x is input

My problem is: I can't see the difference between y(x) and a. Could anyone please explain it to me?

Thanks,

1 Answer

Here y(x) is the desired output that the network should give back when we feed it the input x. In other words, (x, y(x)) = (training dataset input, training dataset output).

And a is the actual output the network produces (not the desired output). The cost measures how far the actual outputs a are from the desired outputs y(x), averaged over all training inputs.
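The distinction can be sketched in a few lines of Python. This is a hypothetical toy example (the arrays and variable names are my own, not from the book): y holds the desired outputs y(x) from the training data, and a holds the actual outputs the network happened to produce for the same inputs.

```python
# Desired outputs y(x) from the training set, one row per training input x
y = [[1.0, 0.0],
     [0.0, 1.0]]

# Actual network outputs a for the same inputs (assumed values for illustration)
a = [[0.8, 0.1],
     [0.3, 0.6]]

n = len(y)  # number of training inputs

# Quadratic cost from equation (6): C = 1/(2n) * sum over x of ||y(x) - a||^2
C = sum(sum((yi - ai) ** 2 for yi, ai in zip(yx, ax))
        for yx, ax in zip(y, a)) / (2 * n)

print(C)
```

If the network were perfect, a would equal y(x) for every input and C would be 0; training with gradient descent adjusts the weights and biases to push a toward y(x).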