This post is about finding approximate solutions of ODEs using artificial neural networks. The material is from the paper 'Artificial Neural Networks for Solving Ordinary and Partial Differential Equations' by Lagaris, Likas and Fotiadis.
Let us consider a first-order ODE as follows:
$y'(x)=f(x,y(x)), \quad y(a)=A, \quad x \in [a,b]$
In this case, the ANN trial solution is written as
$y_{t}(x)=A+(x-a)N(x,\overrightarrow{p})$, where $N(x, \overrightarrow{p})$ is the output of a feed-forward network with a single input unit for $x$ and weights $\overrightarrow{p}$. By construction, $y_{t}(x)$ satisfies the initial condition.
Our goal is to minimise the error function
$E[\overrightarrow{p}]=\sum_{i}\left\{ \frac{dy_{t}(x_{i})}{dx}-f(x_{i},y_{t}(x_{i}))\right\} ^{2}$
where $x_{i}$ are points in $[a,b]$ and $y_{t}$ denotes the trial solution. The derivative of $y_{t}$ can be written as:
$\frac{dy_{t}(x)}{dx}=N(x,\overrightarrow{p})+(x-a)\frac{dN(x,\overrightarrow{p})}{dx}$
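To make the setup concrete, here is a minimal numerical sketch of the method (not from the paper; the example ODE $y'=-y$, $y(0)=1$, the network size, and the training points are my own choices). It uses a single-hidden-layer sigmoid network for $N(x,\overrightarrow{p})$, forms the trial solution and its derivative exactly as above, and minimises $E[\overrightarrow{p}]$ with `scipy.optimize.minimize`:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (my choice): y' = -y, y(0) = 1, exact solution exp(-x).
a, A = 0.0, 1.0
f = lambda x, y: -y

H = 10                         # hidden units (assumption)
xs = np.linspace(a, 1.0, 20)   # training points x_i in [a, b]

def unpack(p):
    # parameter vector p = (input weights w, biases u, output weights v)
    return p[:H], p[H:2*H], p[2*H:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def N(x, p):
    # single-hidden-layer net: N(x, p) = sum_j v_j * s(w_j x + u_j)
    w, u, v = unpack(p)
    return sigmoid(np.outer(x, w) + u) @ v

def dN_dx(x, p):
    # analytic derivative wrt the input x, using s'(z) = s(z)(1 - s(z))
    w, u, v = unpack(p)
    s = sigmoid(np.outer(x, w) + u)
    return (s * (1 - s)) @ (v * w)

def y_t(x, p):
    # trial solution satisfying the initial condition by construction
    return A + (x - a) * N(x, p)

def dy_t(x, p):
    # product rule: d/dx [A + (x-a)N] = N + (x-a) dN/dx
    return N(x, p) + (x - a) * dN_dx(x, p)

def E(p):
    # E[p] = sum_i (y_t'(x_i) - f(x_i, y_t(x_i)))^2
    r = dy_t(xs, p) - f(xs, y_t(xs, p))
    return np.sum(r**2)

rng = np.random.default_rng(0)
res = minimize(E, rng.normal(scale=0.5, size=3*H), method="BFGS")
p = res.x
```

After training, `y_t(xs, p)` should track `np.exp(-xs)` closely on the training interval; only the residual of the ODE is penalised, since the initial condition can never be violated.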
My question is at the point where they say
$\frac{dy_t}{dx} = N(x,p) + (x-a)\frac{dN(x,p)}{dx}$.
Here shouldn't we get
$\frac{\partial y_t}{\partial x} = N(x,p) + (x-a)\frac{\partial N(x,p)}{\partial x}$?
After all, both $y_t$ and $N(x,p)$ are functions of $x$ and $p$ (the weights). I say $y_t$ is a function of $x$ and $p$ because $y_t$ is built from $N(x,p)$, which is itself a function of $x$ and $p$. Please share your suggestions on this.
Thanks in advance.