Linearizing equations of motion


I have been looking at the operation of a quadrotor drone. I am reading the maths behind it, and in one section it mentions:

"These equations of motion are linearized with respect to an equilibrium point." I have some questions regarding this.

1) What exactly does it mean to linearize an equation of motion? How exactly is this usually done?

2) If, say, I linearize the equations of motion for a drone or a car or whatever, why is this useful? What information can it tell me?

Thank you

Here is a sort of idea of it. I can't post them all since it is a 70-page document (I don't think I can attach documents), but it mostly relates to the translational and altitude dynamics.

There are 2 answers below.

BEST ANSWER

A time-independent equation of motion can be written as $$ {d^2x\over dt^2}=F(x), $$ where $x$ is a vector representing the various variables involved. If the function $F$ is sufficiently regular, it can be expanded about an equilibrium point $x_0$ as $$ F(x)=F(x_0)+F'(x_0)\,(x-x_0) + \dots $$ But $F(x_0)=0$ by definition of an equilibrium point, hence we can approximate the equation of motion with its linearised version: $$ {d^2x\over dt^2}=F'(x_0)\,(x-x_0). $$ This is useful because the linearised equation is much simpler to solve, and it gives a good approximation as long as $\|x-x_0\|$ is small enough.
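To make this concrete, here is a minimal sketch (my own example, not taken from the question's document) using the pendulum equation $\ddot\theta = F(\theta) = -(g/L)\sin\theta$, linearised about the equilibrium $\theta_0 = 0$, where $F'(0) = -g/L$:

```python
import math

g, L = 9.81, 1.0  # assumed gravity [m/s^2] and pendulum length [m]

def F(theta):
    """Right-hand side of the nonlinear equation of motion."""
    return -(g / L) * math.sin(theta)

def F_linear(theta):
    """Linearised version about theta_0 = 0: F'(0) * (theta - 0)."""
    return -(g / L) * theta

# Near the equilibrium the two agree closely...
small = 0.05  # radians
print(abs(F(small) - F_linear(small)))  # small error

# ...but far from it the approximation degrades.
large = 1.5  # radians
print(abs(F(large) - F_linear(large)))  # much larger error
```

The errors illustrate the last point of the answer: the linearised model is trustworthy only while $\|x - x_0\|$ stays small.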

SECOND ANSWER

Control is the problem of designing an input $u(t)$ for a system to bring its state to a desired point. In your application, you are finding a set of input forces $u(t)$ for a general nonlinear system

$$\dot{x}(t) = f(x(t), u(t))\quad (1)$$

so that the solution $x(t)$ tends towards a desired setpoint $x_0.$ This would be the desired configuration (position and orientation) of the quadcopter. In your case, $f(x,u)$ would be the general equations of motion (Newton-Euler equations).

However, it turns out it is much easier to control linear differential systems that take the form

$$\dot{x}(t) = A\,x(t) + B\,u(t)\quad (2)$$

where $x$ is a vector and $A, B$ are constant, real matrices, than it is to control a general nonlinear system (1). So the first question we ask is

Can we approximate equation (1) with (2) for some constant $A, B$ under a reasonable assumption?

The answer is yes if there exists a constant applied input $u_0$ such that the desired setpoint $x_0$ is an equilibrium point, i.e. when $f(x_0, u_0) = 0.$ You can think of $u_0$ as the applied force needed to keep the quadcopter at the desired configuration once it actually gets there. For example, you must maintain a non-zero applied thrust to keep the quadcopter at a positive altitude, in order to compensate for gravity.
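As a toy illustration of computing $u_0$ (my own simplified altitude model, not the document's full Newton-Euler equations): take state $x = (z, v)$, input $u$ = total thrust, and dynamics $\dot z = v$, $\dot v = u/m - g$. Hovering ($v = 0$) at any altitude requires $u_0 = mg$:

```python
m, g = 2.0, 9.81  # assumed mass [kg] and gravity [m/s^2]

def f(x, u):
    """Toy altitude dynamics: x = (z, v), u = total thrust."""
    z, v = x
    return (v, u / m - g)

u0 = m * g            # hover thrust needed to cancel gravity
x0 = (10.0, 0.0)      # desired setpoint: 10 m altitude, zero velocity

# x0 is an equilibrium under the constant input u0: f(x0, u0) = 0
print(f(x0, u0))
```

Note that $u_0$ depends only on the velocity part of the setpoint here; any altitude $z$ works, which matches the intuition that hover thrust is the same at every height (ignoring air density effects).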

Once you have computed $u_0$ to keep your quadcopter at a desired setpoint $x_0,$ you can linearize the dynamics by evaluating the Jacobian (first-derivative matrix) of $f$ at the equilibrium point $(x_0, u_0).$ In particular, you would define

$$A := \left.\frac{d f}{d x} \right|_{x=x_0, u=u_0}, \quad B := \left.\frac{d f}{d u} \right|_{x=x_0, u=u_0}$$

and then define the perturbations from the equilibrium point

$$\delta x := x - x_0, \quad \delta u := u - u_0,$$

and it will turn out that the dynamics of $\delta x$ are approximated by

$$\delta \dot{x} = A\,\delta x + B\,\delta u.$$
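In practice $A$ and $B$ are often computed numerically. Here is a sketch using finite differences on a hypothetical pendulum-with-torque system (again my own example, not the quadcopter's actual $f$): $\dot x = (\omega,\; -(g/L)\sin\theta + u)$, linearized at the hanging equilibrium $x_0 = (0, 0)$, $u_0 = 0$, where the exact Jacobians are $A = \begin{pmatrix}0 & 1\\ -g/L & 0\end{pmatrix}$, $B = (0, 1)^T$:

```python
import math

g, L = 9.81, 1.0

def f(x, u):
    """Pendulum with applied torque u: x = (theta, omega)."""
    theta, omega = x
    return [omega, -(g / L) * math.sin(theta) + u]

x0, u0 = [0.0, 0.0], 0.0   # hanging equilibrium: f(x0, u0) = 0
eps = 1e-6                 # finite-difference step
n = len(x0)
f0 = f(x0, u0)

# A = df/dx at (x0, u0), column by column
A = [[0.0] * n for _ in range(n)]
for j in range(n):
    xp = list(x0)
    xp[j] += eps
    fp = f(xp, u0)
    for i in range(n):
        A[i][j] = (fp[i] - f0[i]) / eps

# B = df/du at (x0, u0) (single scalar input here)
fu = f(x0, u0 + eps)
B = [(fu[i] - f0[i]) / eps for i in range(n)]
```

The recovered entries should be close to $A \approx [[0, 1], [-9.81, 0]]$ and $B \approx [0, 1]$, matching the analytic Jacobians.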

At this point, a control engineer can do some math (linear control theory) to answer the question

Does there exist a rule for $\delta u$ that ensures $\delta x \to 0$ ?

There are many, many textbooks on how to do this. For example, you might implement state feedback $\delta u = -K \delta x$ and choose $K$ cleverly so that the resulting dynamics

$$\delta\dot{x} = (A - B K)\, \delta x$$

satisfy the property. Upon doing so, one can solve for $u(t)$ using the definition of $\delta u$, and the resulting control law will work, for states close enough to the desired configuration, to bring the configuration $x$ towards $x_0.$ You can improve the response of the system by tuning the feedback law designed for $\delta u$; however, linearization restricts your performance guarantees to a small region near the equilibrium point.
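A minimal state-feedback sketch (gains picked by hand for illustration, not by any formal design method): take double-integrator error dynamics $\delta\dot x = A\,\delta x + B\,\delta u$ with $\delta u = -K\delta x$, where $K = (1, 2)$ places both closed-loop eigenvalues of $A - BK$ at $-1$ (roots of $s^2 + 2s + 1$), and simulate with forward Euler:

```python
A = [[0.0, 1.0], [0.0, 0.0]]   # double integrator
B = [0.0, 1.0]
K = [1.0, 2.0]                  # closed-loop poles at s = -1 (double)

def step(dx, dt=0.001):
    """One Euler step of the closed loop: d(dx)/dt = (A - B K) dx."""
    du = -(K[0] * dx[0] + K[1] * dx[1])
    ddx = [A[0][0] * dx[0] + A[0][1] * dx[1] + B[0] * du,
           A[1][0] * dx[0] + A[1][1] * dx[1] + B[1] * du]
    return [dx[0] + dt * ddx[0], dx[1] + dt * ddx[1]]

dx = [1.0, 0.0]            # start one unit away from the setpoint
for _ in range(20000):     # simulate 20 seconds
    dx = step(dx)
# dx should now have decayed essentially to zero: delta x -> 0
```

In real use you would compute $K$ with a pole-placement or LQR routine (e.g. `scipy.signal.place_poles`) rather than by hand, but the closed-loop idea is exactly the equation above.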

A good start to all of this would be a textbook on Linear Control Systems. Some cover a bit of modelling and linearization theory. I believe "Modern Control Systems" by Dorf & Bishop does, but a few others do as well.