Okay, so earlier I posted the question "dropping a particle into a vector field" as sort of a feeler question as I study line integrals, on my way to surface integrals and eventually differential forms and differential geometry, and I got to thinking about my question's example and how to solve it. I've taken an advanced linear algebra course and the required ODE course, but the ODE course never made it to solving systems of differential equations.

The vector field into which I'm dropping a particle is $$\mathbf{F}(x,y)=y\mathbf{i}-x\mathbf{j}$$ An answer came in the form that, in order to solve the question I posed, I need to find an $\mathbf{r}$ such that $\mathbf{r}'(t)=\mathbf{F}(\mathbf{r}(t))$. So take an arbitrary $\mathbf{r}(t)=(\mathbf{x}(t),\mathbf{y}(t))$; if I take the derivative of this vector-valued function I get $$\mathbf{r}'(t)=(\mathbf{x}'(t),\mathbf{y}'(t))$$ Now, $$\mathbf{F}(\mathbf{r}(t))=\mathbf{F}(\mathbf{x}(t),\mathbf{y}(t))=\mathbf{y}(t)\mathbf{i}-\mathbf{x}(t)\mathbf{j}=(\mathbf{y}(t),-\mathbf{x}(t))$$ So my equation has become $$\mathbf{r}'(t)=(\mathbf{x}'(t),\mathbf{y}'(t))=(\mathbf{y}(t),-\mathbf{x}(t))$$ This gives me a system of differential equations to solve: $$\mathbf{x}'(t)=\mathbf{y}(t)$$ $$\mathbf{y}'(t)=-\mathbf{x}(t)$$ Here I'm not sure how to proceed, or whether this is even the correct intuition. The person who wrote the answer on the last post gave me the numerical Euler method as a general approach, but this system seems pretty solvable and straightforward; I'm just not seeing where to move.
2026-04-05 07:50:11
dropping a particle into a vector field, part 2
345 views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
It's pretty easy to see the "little particle" is going in circles; we don't need to solve the differential equation or directly introduce sines and cosines; just look at
$\mathbf r^2(t) = \mathbf r(t) \cdot \mathbf r(t) = \mathbf x^2(t) + \mathbf y^2(t); \tag{1}$
if we differentiate this with respect to $t$ we obtain
$\frac{d}{dt}(\mathbf r^2(t)) = 2\mathbf x(t) \dot{\mathbf x}(t) + 2\mathbf y(t) \dot{\mathbf y}(t), \tag{2}$
and if we now use our differential equation
$\dot{\mathbf x}(t) = \mathbf y(t), \tag{3}$
$\dot{\mathbf y}(t) = -\mathbf x(t), \tag{4}$
substituting (3) and (4) into (2) we find that
$\frac{d}{dt}(\mathbf r^2(t)) = 2\mathbf x(t) \mathbf y(t) - 2\mathbf y(t) \mathbf x(t) = 0. \tag{5}$
(5) shows that $\Vert \mathbf r(t) \Vert^2 = \mathbf r^2(t)$ is constant, hence $\Vert \mathbf r(t) \Vert$ is constant; the little particle moves on a circle of radius $\Vert \mathbf r(t_0) \Vert$, where $t_0$ is some initial moment in time. We can do better, still without solving a differential equation: note that the vector $\dot{\mathbf r}(t) = (\mathbf y(t), -\mathbf x(t))^T$ is in fact orthogonal to $\mathbf r(t) = (\mathbf x(t), \mathbf y(t))^T$, so it is tangent to the circle on which the particle is constrained to move, which, as we have seen, is of constant radius $\Vert \mathbf r(t_0) \Vert$ about the origin $(0, 0)^T$. In fact it is easy to see that the vector $\dot{\mathbf r}(t) = (\mathbf y(t), -\mathbf x(t))^T$ points in the clockwise direction; furthermore, the speed of the particle, that is, the rate at which it traverses distance along the circle on which it moves, is clearly given by
$\Vert \dot{\mathbf r}(t) \Vert = \sqrt{\mathbf y^2(t) + \mathbf x^2(t)} = \Vert \mathbf r(t) \Vert = \Vert \mathbf r(t_0) \Vert; \tag{6}$
notice that the speed is constant; in fact, it is just the radius of the circle! And since the circumference of a circle of radius $\Vert \mathbf r(t_0) \Vert$ is $2\pi \Vert \mathbf r(t_0) \Vert$, it follows that the entire circle is traversed exactly once every $2\pi$ seconds (assuming we measure time in seconds); and since a circle, in terms of angular measure, is precisely $2\pi$ radians, the angular velocity of the particle about the point $(0, 0)^T$ is a constant one radian per second in magnitude. So if $\theta$ is the central or polar angle in our coordinate system, then we must have
$\dot \theta = -1, \tag{7}$
or
$\theta (t) = \theta_0 + \int_{t_0}^t (-1)\, ds = t_0 - t + \theta_0, \tag{8}$
where we choose $\theta$ increasing counter-clockwise, and $\theta_0 = \theta(t_0)$. At this point it is convenient to introduce sines and cosines. Since $(\mathbf x(t), \mathbf y(t))^T$ lies on the circle of radius $\Vert \mathbf r(t_0) \Vert$, we may write
$\mathbf x(t) = \Vert \mathbf r(t_0) \Vert \cos \theta(t) = \Vert \mathbf r(t_0) \Vert \cos (t_0 -t + \theta_0), \tag{9}$
$\mathbf y(t) = \Vert \mathbf r(t_0) \Vert \sin \theta(t) = \Vert \mathbf r(t_0) \Vert \sin (t_0 -t + \theta_0), \tag{10}$
which apparently gives a complete solution to the equation(s) (3)-(4) with initial conditions
$\mathbf x(t_0) = \Vert \mathbf r(t_0) \Vert \cos \theta_0, \tag{11}$
$\mathbf y(t_0) = \Vert \mathbf r(t_0) \Vert \sin \theta_0. \tag{12}$
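The closed form (9)-(10) is easy to sanity-check numerically. Here is a small Python sketch (the values of `R`, `t0`, `theta0` are arbitrary samples of my own choosing, not fixed by the problem) confirming that (9)-(10) satisfy the system (3)-(4) and the initial conditions (11)-(12):

```python
import math

# Arbitrary sample parameters (my choice, not fixed by the problem).
R, t0, theta0 = 2.0, 0.5, 1.0

def x(t):
    # Equation (9): x(t) = ||r(t0)|| cos(t0 - t + theta0)
    return R * math.cos(t0 - t + theta0)

def y(t):
    # Equation (10): y(t) = ||r(t0)|| sin(t0 - t + theta0)
    return R * math.sin(t0 - t + theta0)

# Centered differences approximate x'(t) and y'(t); the system (3), (4)
# demands x'(t) = y(t) and y'(t) = -x(t).
h = 1e-6
for t in (0.0, 1.3, 4.7):
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    ydot = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(xdot - y(t)) < 1e-6
    assert abs(ydot + x(t)) < 1e-6

# The initial conditions (11), (12) also check out.
assert abs(x(t0) - R * math.cos(theta0)) < 1e-12
assert abs(y(t0) - R * math.sin(theta0)) < 1e-12
```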
The preceding analysis provides an example of how a differential equation, or system of differential equations, may be analyzed and sometimes even solved without actually "solving" it, if you take my meaning: sometimes there are ways to "get at" the solution without resorting to formal quadrature or related, systematic techniques. For example, here we used a geometrical analysis of the vector field $(\mathbf y, -\mathbf x)^T$. Methods related to that exploited here were used in my answers to this question and this one.
Of course, using (3), (4) to derive $\ddot {\mathbf x} = \dot{\mathbf y}$, $\ddot {\mathbf y} = - \dot {\mathbf x}$, and from there moving to $\ddot{\mathbf x} + \mathbf x = 0$, $\ddot{\mathbf y} + \mathbf y =0$, as suggested by Adam Salz in his comment, and from there to solutions of the form $C_1\sin (t + \theta_0)$, $C_2\cos(t + \theta_0)$, or linear combinations thereof, with $C_1, C_2$ constants, is part of a generally accepted and systematic method which forms an essential part of any ODE solver's toolkit. And on a related note, the theory of matrix exponentials, addressed by automaton in his/her comment, is another facet of the jewel which is itself an indispensable tewel ($\equiv \text {tool}$ ;)!); in the present case this approach is particularly simple: we simply write (3), (4) as
$\dot {\mathbf r}(t) = \begin{pmatrix} \dot{\mathbf x}(t) \\ \dot{\mathbf y}(t) \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{pmatrix} \mathbf x(t) \\ \mathbf y(t) \end{pmatrix} = J \mathbf r(t), \tag{13}$
where
$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \tag{14}$
which satisfies
$J^2 = - I. \tag{15}$
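As a quick sanity check of (15), here is a throwaway NumPy snippet (my own illustration, not part of the original argument):

```python
import numpy as np

# The matrix J from (14); squaring it gives -I, which is (15).
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.array_equal(J @ J, -np.eye(2)))  # → True
```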
Now it is a fact which our OP Christopher Ernst will soon encounter, if he has not done so already, that for any constant matrix $A$ we can define the matrix $e^{At}$, just as we can define $e^{\alpha t}$ for scalars $\alpha$, and that we have
$\frac{d}{dt}e^{A(t - t_0)} = Ae^{A(t - t_0)}, \tag{16}$
just as
$\frac{d}{dt}e^{\alpha(t - t_0)} = \alpha e^{\alpha (t - t_0)}, \tag{17}$
so that just as
$\mathbf x(t) = e^{\alpha (t - t_0)} \mathbf x(t_0) \tag{18}$
solves
$\dot {\mathbf x}(t) = \alpha \mathbf x(t) \tag{19}$
with initial condition $\mathbf x(t_0)$ at $t = t_0$, so we have
$\mathbf r (t) = e^{A(t - t_0)} \mathbf r(t_0) \tag{20}$
solves
$\dot {\mathbf r}(t) = A\mathbf r(t) \tag{21}$
with initial condition $\mathbf r(t_0)$ at $t = t_0$. Applying these ideas to our equation (13), we find that
$\mathbf r(t) = e^{J(t - t_0)} \mathbf r(t_0), \tag{22}$
and in the present case the matrix $e^{J(t - t_0)}$ is particularly easy to evaluate. From (15), it follows that the algebra involved in computing $e^{J(t - t_0)}$ precisely parallels that of calculating $e^{i(t - t_0)}$ where $i^2 = -1$ is the standard complex scalar $\sqrt{-1}$. Thus in fact just as
$e^{i(t - t_0)} = \cos(t - t_0) + i \sin(t - t_0), \tag{23}$
so
$e^{J(t - t_0)} = \cos(t - t_0)\, I + J \sin(t - t_0), \tag{24}$
which can easily be seen via a term-by-term comparison of the power series for $e^{i(t - t_0)}$ and $e^{J(t - t_0)}$; I'll leave the relatively simple issues of convergence of these series to my readers; they are not difficult. From (24) combined with (14):
$e^{J(t - t_0)} = \begin{bmatrix} \cos(t - t_0) & \sin(t - t_0) \\ -\sin(t - t_0) & \cos(t - t_0) \end{bmatrix}, \tag{25}$
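Equation (25) can also be checked directly by summing the power series for $e^{Jt}$. The following NumPy sketch (my own illustration, taking $t_0 = 0$ and an arbitrary sample $t$) compares a truncated series against the rotation matrix in (25):

```python
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def expm_series(A, terms=30):
    # Truncated power series sum over k of A^k / k!;
    # 30 terms is ample for ||A|| of order 1.
    result = np.eye(2)
    term = np.eye(2)
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

t = 0.7  # arbitrary sample time, taking t0 = 0
rotation = np.array([[ np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])  # the matrix in (25)
print(np.allclose(expm_series(J * t), rotation))  # → True
```

(`scipy.linalg.expm` would do the same job; the hand-rolled series just keeps the dependence on NumPy alone and mirrors the term-by-term argument above.)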
and so from (22) and (11), (12) we obtain
$\mathbf x(t) = \Vert \mathbf r(t_0) \Vert (\cos \theta_0 \cos (t - t_0) + \sin \theta_0 \sin(t - t_0)) \tag{26}$
and
$\mathbf y(t) = \Vert \mathbf r(t_0) \Vert (-\cos \theta_0 \sin (t - t_0) + \sin \theta_0 \cos(t - t_0)), \tag{27}$
which after a little trigono-algebraic manipulation are seen to agree with (9) and (10). So go some of the more conventional, systematic procedures for solving (3), (4), (13) and their kindred equations.
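For completeness, the trigono-algebraic manipulation is just the angle-difference identities, applied with $A = \theta_0$ and $B = t - t_0$:

$\cos\theta_0 \cos(t - t_0) + \sin\theta_0 \sin(t - t_0) = \cos(\theta_0 - (t - t_0)) = \cos(t_0 - t + \theta_0),$

$\sin\theta_0 \cos(t - t_0) - \cos\theta_0 \sin(t - t_0) = \sin(\theta_0 - (t - t_0)) = \sin(t_0 - t + \theta_0),$

so (26) and (27) reduce precisely to (9) and (10).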
There are of course numerical methods, such as Euler's, which can be used to calculate solutions to these equations and much more complex, general systems; but for problems like the one put forth in this question, which have relatively easily derived, closed-form solutions, use of such numerical approximation schemes is of questionable value. But such schemes can be used to find the solutions to systems which are otherwise intractable; their study is a vast and complex subject in its own right. So with these words I say, adieu, 'till we meet again!
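Since Euler's method came up, here is a minimal sketch (step size and starting point are my own choices) showing both its use and its limitation on this particular problem: the exact orbit stays on the unit circle, while forward Euler spirals slowly outward, because each step multiplies the radius by exactly $\sqrt{1 + h^2}$:

```python
import math

def euler_step(x, y, h):
    # One forward-Euler step for the system x' = y, y' = -x.
    return x + h * y, y - h * x

x, y = 1.0, 0.0   # start on the unit circle
h = 0.001
steps = round(2 * math.pi / h)  # roughly one full period

for _ in range(steps):
    x, y = euler_step(x, y, h)

# After one period the exact solution returns to (1, 0) with radius 1;
# Euler comes close, but the radius has grown slightly.
print(x, y, math.hypot(x, y))
```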
Hope this helps. Cheers, and as always
Fiat Lux!!!