I have a constant input vector $I$ and a variable matrix $N$. (I know that matrices aren’t usually named $N$, but in this case it stands for Neural Network.)
I’m building a program that uses the gradient of some error function $E$ to update the entries of $N$, so that $E(NI)$ approaches a local (and hopefully global) minimum as the program runs.
However, the error function $E$ that I want is somewhat unusual. Given a vector $V$, $E(V)$ should compute the greatest difference between two numerically adjacent values in the vector.
That is, if $V=(1,3,7)$, then $E(V)=4$: since $7-3=4$ and $3-1=2$, and $4>2$, the answer is $4$.
Now, if the formula were just
$$E(V) = \max_{x} \left( V_{x+1} - V_{x} \right)$$
where $V_x$ is the $x$th entry of the vector $V$, this question might be easier. However, it is not, because $V$ is not always numerically ordered.
That is, if $V=(3,1,7)$ or $V=(7,3,1)$, $E(V)$ should still be 4. As another example, $E((1,3,2,7))=4$ as well.
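For concreteness, here is a direct (but non-differentiable) way to compute $E$ as defined above, as a minimal Python sketch: sort the vector, then take the largest gap between consecutive sorted entries. The function name `E` is just illustrative.

```python
def E(V):
    """Greatest difference between numerically adjacent values of V."""
    s = sorted(V)  # order the entries numerically
    # largest gap between consecutive entries of the sorted vector
    return max(b - a for a, b in zip(s, s[1:]))

print(E([1, 3, 7]))     # 4
print(E([3, 1, 7]))     # 4
print(E([7, 3, 1]))     # 4
print(E([1, 3, 2, 7]))  # 4
```

The `sorted` call is exactly what makes this hard to express as a single differentiable formula, which is the point of the questions below.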
So, I have two questions that build on each other.
- What is a formula for $E$? (Or a proof, in mathematical notation, that no such formula can exist.)
- Is there a formula for $E(NI)$ which is differentiable with respect to entries in $N$, as a function of $N$ and $I$? (If not, why not? If so, what is it?)
(Note that since I’ve only defined $E(V)$ for a vector $V$, we can assume $N$ has dimensions such that $NI$ is a vector, i.e., an $n \times 1$ matrix.)
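To make the setup concrete, here is a small hypothetical example of the whole pipeline $E(NI)$, using NumPy. The sizes and values of `N` and `I` are illustrative assumptions, not part of the problem statement.

```python
import numpy as np

def E(V):
    """Greatest difference between numerically adjacent values of V."""
    s = np.sort(V)           # order the entries numerically
    return np.max(np.diff(s))  # largest consecutive gap

I = np.array([1.0, 2.0])     # constant input vector (2 entries)
N = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 1.0]])   # 3x2 matrix, so N @ I is a 3-vector

V = N @ I                    # [1., 2., 4.]
print(E(V))                  # 2.0 (sorted gaps are 1 and 2)
```

The second question above asks whether the composition `E(N @ I)` can be written so that its derivative with respect to each entry of `N` exists; the `np.sort` step is what makes that unclear.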