Are partial derivatives with respect to time valid in state-space formulation?


The canonical continuous-time state-space (linear) formulation is of the form: $$\frac{d\mathbf{x}}{dt} = A\mathbf{x} + B \mathbf{u} \; ,$$ with output: $$\mathbf{y} = C\mathbf{x} + D\mathbf{u},$$ where $\mathbf{x}$ is the state vector, $\mathbf{u}$ is the input vector, $\mathbf{y}$ is the output vector, and $A$, $B$, $C$, and $D$ are matrices.

The continuous-time state-space formulation also has a nonlinear form: $$\frac{d\mathbf{x}}{dt} = \mathbf{f}\left(\mathbf{x}(t),\mathbf{u}(t)\right)\;.$$

Source for the previous two equations: Chapter 4 of *Signals, Systems and Inference* by Oppenheim and Verghese.

It seems that the left-hand side of the previous two equations is always written as the total derivative of the state vector $\mathbf{x}$. However, I have seen examples of simulating PDEs where the left-hand side is instead the partial derivative of the states with respect to time (examples: (1) page 7 here; (2) not explicitly state space, but time integration is applied to the partial derivative with respect to time, starting on page 9 here).

My questions are:

  • What is the difference between writing state space (as above) with a total derivative with respect to time and then implementing it with partial time derivatives?
  • Are the time-domain integration methods (such as forward Euler, trapezoidal, etc.) for simulating the time evolution the same for partial and total derivatives? Shouldn't the partial and total derivatives of a state with respect to time be different, and as a result, shouldn't the time evolution of the state vector be different?
1 Answer

As long as you only have one independent variable, there is no difference between the partial and the total derivative. So it would be unusual, but not wrong, to write an ODE with partial-derivative symbols.
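To see that the notation makes no difference numerically, here is a minimal sketch of forward Euler applied to the linear state-space form $d\mathbf{x}/dt = A\mathbf{x} + B\mathbf{u}$. The update rule only ever uses the *value* of the time derivative, so it is identical whether that derivative is written as $d/dt$ or $\partial/\partial t$. The particular matrices (a damped oscillator) and step size are illustrative choices, not from the question:

```python
import numpy as np

# Illustrative system: a damped oscillator in state-space form.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

def euler_step(x, u, dt):
    # Forward Euler: x_{k+1} = x_k + dt * (A x_k + B u_k).
    # Only the derivative's value enters; the d/dt vs. partial/partial-t
    # notation on the left-hand side plays no role here.
    return x + dt * (A @ x + B @ u)

x = np.array([[1.0], [0.0]])   # initial state
u = np.array([[0.0]])          # zero input
for _ in range(1000):          # integrate to t = 10 with dt = 0.01
    x = euler_step(x, u, dt=0.01)
```

Any other one-step scheme (trapezoidal, Runge-Kutta, ...) substitutes the same right-hand side in the same way.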

You seem to want to change the state space from a finite-dimensional vector space to a function space. At first this is quite simple: most of ODE theory transfers to Banach spaces. It becomes more complicated when differential operators on the function space appear on the right-hand side. These operators are unbounded, hence not continuous and not Lipschitz, which opens a whole new realm of complications for existence and uniqueness.

In numerical methods one usually discretizes the function-space dimension using finite differences, finite elements, etc. Working with a fixed spatial discretization is called the "method of lines". The finer this discretization, the larger the Lipschitz constant of the discretized differential operators becomes, and thus the tighter the step-size restriction needed for stability in explicit methods.
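A minimal sketch of this effect, using the 1D heat equation $u_t = u_{xx}$ on $[0,1]$ with zero Dirichlet boundary conditions (an illustrative choice, not from the question). Discretizing $x$ with central differences turns the PDE into a large ODE system $d\mathbf{x}/dt = A\mathbf{x}$; the largest eigenvalue magnitude of $A$ grows like $4/\Delta x^2$, so the forward Euler stability limit $\Delta t \le 2/|\lambda_{\max}| \approx \Delta x^2/2$ shrinks quadratically as the grid is refined:

```python
import numpy as np

def heat_matrix(n):
    # Second-difference matrix for u_xx on n interior grid points.
    dx = 1.0 / (n + 1)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    return A, dx

for n in (10, 20, 40):
    A, dx = heat_matrix(n)
    # Spectral radius ~ 4/dx^2: the Lipschitz constant of the
    # discretized operator, which bounds the explicit step size.
    lam_max = np.max(np.abs(np.linalg.eigvalsh(A)))
    print(f"n={n:3d}  |lambda_max|={lam_max:10.1f}  "
          f"Euler dt limit={2.0 / lam_max:.2e}")
```

Implicit methods (backward Euler, trapezoidal) avoid this restriction, which is why they are preferred for stiff method-of-lines systems.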