Showing that an algorithm is a gradient descent method


I am stuck on a question about gradient descent. It asks to "show that the perceptron learning algorithm is a gradient descent method for the squared error target function $(y - \sum_i w_i x_i)^2$". I understand that running the perceptron algorithm decreases the squared error target function from one step to the next, but in general, how do I show that an algorithm is a gradient descent method for some target function?

Repeating littleO's comment:

The gradient descent iteration for minimizing $f$ is $$x \leftarrow x - t\,\nabla f(x).$$ You could write this out explicitly for your particular $f$ and see whether the iteration you get is the same as the perceptron algorithm.

Note that $t$ is the learning rate and $\nabla f(x)$ is the gradient of the cost function. Now take the derivative of your cost function with respect to the weights and substitute it into this iteration.
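
To sketch that computation: write the weights as a vector $w$, so for a single training example $(x, y)$ the cost is $f(w) = (y - w^\top x)^2$, and the gradient is

$$\nabla_w f(w) = -2\,(y - w^\top x)\,x.$$

Substituting into the gradient descent iteration gives

$$w \leftarrow w - t\,\nabla_w f(w) = w + 2t\,(y - w^\top x)\,x,$$

which, absorbing the factor of $2$ into the learning rate $\eta = 2t$, is the per-example perceptron update (often called the delta rule) $w_i \leftarrow w_i + \eta\,(y - \hat{y})\,x_i$ with $\hat{y} = \sum_i w_i x_i$. So each perceptron update is exactly a gradient descent step on the squared error.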