@@ -1,5 +1,8 @@
[!]
[=post-]
<p>
<i>This post is part of a series; the next post in the series can be found <a href="[^baseurl]/posts/differential-review-3.html">here</a>.</i>
</p>
<p>
Hello, everyone! It's only been a few days since I last screamed into the void here, but it feels like a proper eternity. Hard to believe we're only 2 weeks in!
The quiz Thursday went well! Nobody else wore funny hats, but I met someone who's been reading Deadly Boring Math (a fellow physics major, no less!), which was pretty cool.
158 site/posts/differential-review-3.html Normal file
@@ -0,0 +1,158 @@
[!]
[=post-]
<p>
Hello once again, dear readers! It's been yet another big week in summer diffy q (although thankfully a little less big than the preceding - Memorial Day cut out a
<i>whole lecture</i>). We've been primarily concerned with solving SLDEs (Systems of Linear Differential Equations), and we have a whole bunch of ways to do that.
Spoiler alert: it's going to require Euler's formula `e^{i theta} = cos(theta) + isin(theta)`, a fundamental identity you'll have to memorize if you don't want to get a
tattoo. We also have the second quiz on Tuesday next week - I'll get a review out tomorrow or Monday. It shouldn't be too hard; just make sure you're familiar
with graphing techniques.
</p>
<h2>Briefly: Systems of Linear Differential Equations</h2>
<p>
SLDEs are a fairly simple idea at face value. Rather like a system of linear equations, we have several (arbitrarily many!) linearly independent <i>differential</i> equations in
some variables (`x`, `y`, `z`, `f`, it doesn't matter), each a function of the same independent variable `t`. The ones we've dealt with so far are mostly two-variable, so they look
rather like `frac {dx} {dt} = 3x - y, frac {dy} {dt} = x + y`.
</p>
<p>
This can be written very easily in matrix form. To understand how, consider that we can write the above as `\[frac {dx} {dt}, frac {dy} {dt}] = \[\[3, -1], \[1, 1]] \[x, y]`
(not sure how I did this? You should probably review some linear algebra - specifically matrix-vector multiplication). Now, if we say that `X = \[x, y]`, and thus
`frac {dX} {dt} = \[frac {dx} {dt}, frac {dy} {dt}]`, this can be rewritten as `X'(t) = \[\[3, -1], \[1, 1]] X`. A matrix equation! Solving this is more complicated
than a typical linear algebra problem. We'll get to that in a bit.
</p>
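<p>
Not course material, but if you want to sanity-check the matrix form with a computer, here's a tiny NumPy sketch (the variable names are mine): multiply the matrix by a sample `\[x, y]` and compare against the right-hand sides computed by hand.
</p>
<pre>
# Quick optional sanity check: the matrix-vector product reproduces the
# right-hand sides of the original system. Requires numpy.
import numpy as np

A = np.array([[3.0, -1.0],
              [1.0,  1.0]])

x, y = 2.0, 5.0           # any sample point works
X = np.array([x, y])

print(A @ X)              # prints [1. 7.]
print([3*x - y, x + y])   # same numbers, computed by hand
</pre>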
<p>
What if our linear system has constants, like `frac {dx} {dt} = 3x - y + 5, frac {dy} {dt} = x + y - pi`? This is quite simple: we just add a constant vector.
This equation becomes `X'(t) = \[\[3, -1], \[1, 1]] X + \[5, -pi]`. In this case, the SLDE is no longer <i>homogeneous</i> - the constants aren't 0, so this is harder to solve.
</p>
<p>
SLDEs also have a concept of <i>autonomy</i> - an autonomous system is one where `t` never appears explicitly on the right-hand side; it only shows up under the `d/dt`. It's very possible to write an SLDE that isn't autonomous:
`frac {dx} {dt} = x + 3ty, frac {dy} {dt} = tx`. The matrix equation form here is obviously `X'(t) = \[\[1, 3t], \[t, 0]] X`. Homogeneous, but nonautonomous.
</p>
<h2>Briefly: Rewriting Second Order Differential Equations as SLDEs</h2>
<p>
Solving SLDEs is fairly simple and procedural - you don't need to think too much about it, just Follow The Process, which I promise I'll get to soon. Solving
higher-order differential equations is decidedly <i>not</i>. Fortunately, we have a neat substitution trick to rewrite some second-order ODEs as an SLDE.
This one is much easier to explain by example. Given the second-order linear ODE `u'' - 7u' + tu = 3`, we:
</p>
<ol>
<li>Define the substitutions: `x = u`, `y = u'`</li>
<li>Define their derivatives: `x' = u'`, and `y' = u''`. Can you see where this is going?</li>
<li>Define a relation between them: `x' = y`</li>
<li>Substitute them into the problem: `y' - 7y + tx = 3`. Note that I picked `y` instead of `x'` where either would work.</li>
<li>Solve and recombine: `y' = 7y - tx + 3, x' = y`</li>
<li>Write as a matrix where `X = \[x, y]`: `X' = \[\[0, 1], \[-t, 7]] \[x, y] + \[0, 3]`. This is nonhomogeneous (note the nonzero constant) and nonautonomous (note the `t`).</li>
</ol>
<p>
Ta-da! Note that this is fairly delicate: for instance, if we leave out that `x' = y`, we're not going to go anywhere useful, and
if we used `7x'` instead of `7y`, we'd have ended up unable to turn this into a matrix. To get a proper and useful SLDE, you'll often need
to think seriously about which substitutions and eliminations you make.
</p>
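<p>
Side note from me (not from lecture): this substitution is exactly what you do when handing a second order ODE to a numerical solver, which only understands first order systems. A minimal SciPy sketch, with initial conditions I made up purely for illustration:
</p>
<pre>
# Optional sketch: solve u'' - 7u' + tu = 3 numerically via the substitution
# x = u, y = u'. Requires numpy and scipy; initial conditions are invented.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, X):
    x, y = X                       # x = u, y = u'
    return [y, 7*y - t*x + 3]      # x' = y, y' = 7y - tx + 3

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0])   # pretend u(0) = 1, u'(0) = 0
print(sol.y[:, -1])                # approximate [u(1), u'(1)]
</pre>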
<h2>Homogeneous Autonomous SLDEs with Different Real Eigenvalues</h2>
<p>
That section title chills me to the bone. The simplest way to solve a homogeneous and autonomous SLDE is with the eigenvalues and eigenvectors:
for a 2x2 problem like the ones we've dealt with so far in this class, we find real eigenvalues `lambda_1` and `lambda_2` with corresponding eigenvectors
`v_1` and `v_2`. The astute linear algebra nerd will note that there are infinitely many eigenvectors for a given eigenvalue - we're going to have
some arbitrary constant multiples anyway, so it doesn't matter <i>which</i> vector in the eigenspace you pick. I would recommend you prioritize small whole numbers.
The general solution in terms of arbitrary constants `c_1` and `c_2` is `c_1 e^{lambda_1 t} v_1 + c_2 e^{lambda_2 t} v_2`. I'm not going to go into the proof of this;
it's elementary enough that we can just accept it to be True.
</p>
<p>
Note that this is only true in the case that the matrix has two real and distinct eigenvalues. Matrices with repeated eigenvalues, or complex eigenvalues, require a bit more work.
Let's do an example. Take the system `X'(t) = \[\[5, -1], \[0, 1]] X`. I'll save you the linear algebra; if you aren't sure how I got these values, make
sure to work through the problem yourself! We have eigenvalues `lambda_1 = 1` and `lambda_2 = 5`, and corresponding eigenvectors
`v_1 = \[1, 4]` and `v_2 = \[1, 0]`. We can actually directly substitute to get `X(t) = c_1 e^t \[1, 4] + c_2 e^{5t} \[1, 0]`. Pretty easy!
</p>
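<p>
If you'd like to double-check the eigenvalues by computer (definitely optional), NumPy agrees - just remember that it normalizes its eigenvectors, so compare directions rather than exact numbers:
</p>
<pre>
# Optional double-check with NumPy. np.linalg.eig returns unit-length
# eigenvectors as columns, so they're scaled differently from my [1, 4]
# and [1, 0], but they point along the same lines.
import numpy as np

A = np.array([[5.0, -1.0],
              [0.0,  1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)   # eigenvalues 5 and 1 (order may vary)
print(vecs)   # column i is the eigenvector for vals[i]
</pre>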
<p>
How do we pick `c_1` and `c_2`? The simple answer is that we don't: every possible `(c_1, c_2)` is a different solution. If we have an IVP, we can solve as always,
but for now it's safe to just leave them alone.
</p>
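<p>
Just to make the IVP comment concrete, with an initial condition I invented for illustration: if `X(0) = \[3, 4]`, then at `t = 0` the exponentials are all `1`, so we only need to solve `c_1 v_1 + c_2 v_2 = X(0)` - an ordinary linear system.
</p>
<pre>
# Hypothetical IVP for the example above: X(0) = [3, 4].
# At t = 0 the exponentials are 1, so c_1*v_1 + c_2*v_2 = X(0),
# which is a plain linear solve.
import numpy as np

V = np.column_stack(([1.0, 4.0], [1.0, 0.0]))   # columns are v_1 and v_2
X0 = np.array([3.0, 4.0])

c1, c2 = np.linalg.solve(V, X0)
print(c1, c2)   # c_1 = 1, c_2 = 2
</pre>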
<p>
As much as I loathe graphing, the graph behavior here is actually quite important to talk about. Rather like a phase line in 1d, we can construct a
<i>phase portrait</i> in 2d - one axis per dependent variable. Essentially, we plot a bunch of the curves that our solution will follow, and mark
the direction of the derivative along each curve with arrows at some key locations, just like with phase lines. To pick these curves, we just grab
some random values for `(c_1, c_2)` - for instance, `(1, 0)`, `(0, 1)`, etc.
</p>
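<p>
If you'd rather let the computer draw a rough phase portrait for the example above (I won't judge), here's an illustrative matplotlib sketch - the particular `(c_1, c_2)` pairs and the `t` range are arbitrary choices of mine:
</p>
<pre>
# Rough phase portrait sketch for X' = [[5, -1], [0, 1]] X.
# Purely illustrative; requires numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

v1, v2 = np.array([1.0, 4.0]), np.array([1.0, 0.0])
t = np.linspace(-1.0, 1.0, 200)

for c1, c2 in [(1, 0), (0, 1), (1, 1), (-1, 1), (1, -1), (-1, -1)]:
    X = c1 * np.exp(t)[:, None] * v1 + c2 * np.exp(5 * t)[:, None] * v2
    plt.plot(X[:, 0], X[:, 1])

plt.xlabel("x")
plt.ylabel("y")
plt.title("Phase portrait (arrows omitted)")
plt.show()
</pre>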
<p>
In the homogeneous case, every curve either converges to or diverges away from the origin (or some combination thereof). We can actually make predictions about this without graphing it!
For instance, we know that if the eigenvalues are both positive, the curves will always flow away, because the `e^{ lambda t }` terms will always push the solution
away from the origin. This is unstable, or a <i>nodal source</i>. Similarly, if both eigenvalues are negative, the `e^{ lambda t }` terms will approach 0,
driving every solution towards the origin - thus, the origin is asymptotically stable, or a <i>nodal sink</i>. If one eigenvalue is negative and the other is
positive, this is a <i>saddle point</i>: you might remember that term from multivariable; it means solution curves flow towards it along one direction, and away from it
along another. In our case, both eigenvalues are positive and the system is homogeneous, so the origin is unstable - a nodal source.
</p>
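<p>
Here's the same classification restated as a tiny (and entirely optional) helper function of my own, assuming two real, distinct, nonzero eigenvalues:
</p>
<pre>
# My own summary of the rules above, for two real, distinct, nonzero eigenvalues.
def classify_real(l1, l2):
    if l1 > 0 and l2 > 0:
        return "nodal source (unstable)"
    if l1 * l2 > 0:
        return "nodal sink (asymptotically stable)"   # both negative
    return "saddle point (unstable)"                  # opposite signs

print(classify_real(1, 5))    # our example: nodal source (unstable)
print(classify_real(-2, -3))  # nodal sink (asymptotically stable)
print(classify_real(-1, 4))   # saddle point (unstable)
</pre>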
<p>
When hand-drawing phase portraits, as with pretty much any manual graphing in calculus, your goal is to convey properties of the system rather than to actually
provide precise values. I'm not going to do any graphics for these because, frankly, I don't want to, but I recommend you try drawing the phase portrait for the problem
above. Hint: for large negative `t` (near the origin), solution curves follow `v_1` much more than `v_2`, but as `t` grows, the `e^{5t}` term takes over and the curves swing towards `v_2`. It's useful to note that each eigenspace
is a line in `R^2`, words I never thought I'd utter again after linear algebra, so these lines are the first things we graph when building a phase portrait of this system.
</p>
<h2>Briefly: Zero Eigenvalues</h2>
<p>
Some coefficient matrices are <i>singular</i>, which is just to say that one of the eigenvalues is 0. Fortunately, these behave exactly as you'd expect: the `e^{lambda t}` term for `lambda = 0` simplifies to
just `1` - which means the component along that eigenvector is a constant solution.
</p>
<h2>Complexity!</h2>
<p>
Real eigenvalues are really just a special case of complex eigenvalues. How can we handle the complex case? For example, if we have the equation
`X' = \[\[-1, -1], \[2, 1]] X`, our eigenvalues are `lambda_1 = -i` and `lambda_2 = i`, with corresponding eigenvectors `v_1 = \[1, -1 + i]` and `v_2 = \[1, -1 - i]`
(remember that you only need to find the first one - the second eigenvalue is the complex conjugate of the first, as is the second eigenvector).
We only need one eigenvalue/eigenvector pair, for reasons I'll explain in a bit, so I'll pick `lambda_2` and `v_2`. Substituting into the formula gives
`X = c e^{it} \[1, -1 - i]`. Yeck.
</p>
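<p>
If you don't trust my complex arithmetic (fair), a quick NumPy check - again optional, and again the eigenvectors come out normalized, so rescale before comparing:
</p>
<pre>
# Optional check of the eigen-pairs; np.linalg.eig handles complex results fine.
import numpy as np

A = np.array([[-1.0, -1.0],
              [ 2.0,  1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                       # roughly i and -i (order may vary)
for k in range(2):
    v = vecs[:, k] / vecs[0, k]   # rescale so the first component is 1
    print(vals[k], v)             # i pairs with [1, -1-i], -i with [1, -1+i]
</pre>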
<p>
At this point, the clever reader will have looked down at the formula carved into their hand and chuckled slightly. `e^{it}` is pretty close to `e^{i theta}` -
<i>we know how to simplify this!</i> Euler's formula, `e^{i theta} = cos(theta) + i sin(theta)` (this formula won't be provided on quizzes/tests) substitutes
to give us `X = c \[cos(t) + isin(t), sin(t) - cos(t) - icos(t) - isin(t)]`. That's... not much better. We have a few more tricks in this bag, though.
The next step is to separate into something that is a multiple of `i` and something that isn't: `X = c \[cos(t), sin(t) - cos(t)] + ci\[sin(t), -cos(t) - sin(t)]`.
</p>
<p>
There's still a pesky `i` term here. Do you see how we can get rid of it? We're in the very lucky position of having an arbitrary constant, and we can do some
<i>weird</i> stuff with it: for instance, we can split it into two arbitrary constants, one of which is a multiple of `i`. `X = c_1 \[cos(t), sin(t) - cos(t)] + c_2 \[sin(t), -cos(t) - sin(t)]`.
This is actually a general solution! The Wronskian (determinant) of those two vectors is nonzero (you can calculate this yourself, if you want to, but I'm not going to), meaning
they are linearly independent - and because this is a problem in `R^2`, <i>any</i> set of two linearly independent solutions is a complete basis.
We <i>could</i> compute the solution for the first eigenvalue, but that would be unpleasant and unnecessary: it's not possible to have <i>more than</i> two
linearly independent solutions here, so we'd end up just doing a bunch of painful algebra to get an equivalent result.
</p>
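<p>
For the skeptical: a short sympy check (my own, not something you'd do on a quiz) that both real-valued pieces really do satisfy `X' = \[\[-1, -1], \[2, 1]] X`, so any combination of them does too:
</p>
<pre>
# Optional symbolic verification with sympy.
import sympy as sp

t = sp.symbols("t")
A = sp.Matrix([[-1, -1], [2, 1]])

X1 = sp.Matrix([sp.cos(t), sp.sin(t) - sp.cos(t)])
X2 = sp.Matrix([sp.sin(t), -sp.cos(t) - sp.sin(t)])

for X in (X1, X2):
    print(sp.simplify(X.diff(t) - A * X))   # Matrix([[0], [0]]) both times
</pre>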
<p>
What if the real part isn't zero? As it turns out, this isn't really all that much harder. The multiple of your eigenvector can be written as `Ce^{alpha t + i beta t}` -
you can't use Euler's formula on this directly, but you can separate it out to `Ce^{alpha t}e^{i beta t}`, which <i>can</i> be Euler'd to get `Ce^{ alpha t } (cos(beta t) + i sin(beta t))`.
The `e^{ alpha t }` will ride along in the algebra.
</p>
<p>
Why not just leave the complex form in rather than going to all this effort? This is mainly for "I-told-you-so" reasons; separating and solving makes a nicer result. There is, however, one
added benefit: the equation with `i` is hellish to graph, but with `i` magicked out, it becomes much simpler. The graphs with complex eigenvalues
tend to be spirals. If the real part is 0, you get an oval or a circle, and the origin is considered to be a <i>center</i>, which is <i>stable</i> - very different from being
<i>asymptotically stable</i>, because the solution curves don't actually approach it! If the real part is positive, you get an unstable <i>spiral source</i> - the solution curves
all flow away from the origin; and if the real part is negative, you get an asymptotically stable <i>spiral sink</i>.
</p>
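<p>
And the complex-eigenvalue version of my little classification helper from earlier, keyed off the real part `alpha` of the conjugate pair:
</p>
<pre>
# My own summary of the spiral rules, for a complex-conjugate eigenvalue pair
# with real part alpha.
def classify_complex(alpha):
    if alpha == 0:
        return "center (stable, not asymptotically stable)"
    if alpha > 0:
        return "spiral source (unstable)"
    return "spiral sink (asymptotically stable)"

print(classify_complex(0))     # like our example with eigenvalues i and -i
print(classify_complex(2))     # spiral source
print(classify_complex(-0.5))  # spiral sink
</pre>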
<h2>Briefly: Repetition</h2>
<p>
There exist situations where an eigenvalue is repeated. In these cases, you may still be able to find two linearly independent eigenvectors, in which case you just solve as
usual; situations where you have one eigenvalue and only one linearly independent eigenvector are where we have to significantly diverge from the usual process.
When that happens, we actually throw away the matrices entirely and use a combination of the variable-separable method and normal linear ODEs.
</p>
<p>
Let's do an example. Given the equation `X' = \[\[-1, 2], \[0, -1]] X`, the only eigenvalue is `-1`, and the only eigenvector is `\[1, 0]`. We can't
solve this the conventional way. The easiest way to solve this is actually to break it apart: we have two equations, `frac {dx} {dt} = 2y - x, frac {dy} {dt} = -y`.
That second equation is variable-separable! We can turn it into `frac 1 y dy = -1 dt`, which integrates to `ln|y| = -t + C`, which solves to `y = C_1e^{-t}`.
</p>
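<p>
(Not needed for the quiz, but sympy's ODE solver will happily confirm the separable step:)
</p>
<pre>
# Optional: sympy agrees with the separable solution for dy/dt = -y.
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

print(sp.dsolve(sp.Eq(y(t).diff(t), -y(t)), y(t)))   # Eq(y(t), C1*exp(-t))
</pre>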
<p>
Now we substitute our known value for `y` into the first equation: `frac {dx} {dt} = 2C_1e^{-t} - x`. Does this look familiar? It can be rewritten as
`frac {dx} {dt} + x = 2C_1e^{-t}`. This is linear with `p(t) = 1` and `q(t) = 2C_1e^{-t}`! `mu = e^{int p(t) dt} = C_2e^t`, so
`C_2e^{t}x = int 2 C_1 C_2 e^{-t} e^t dt`. This simplifies to `e^{t}x = int 2C_1 dt`, and integrates to `e^{t}x = 2C_1t + C_3`. Finally,
`x = 2C_1 t e^{-t} + C_3e^{-t}`. We can munge this into a solution: `X = C_1e^{-t} \[2t, 1] + C_3e^{-t} \[1, 0]`. Note that this actually includes
the solution we would have gotten from the eigenvector method, `Ce^{-t} \[1, 0]`. Beware: this trick only works when one of the equations decouples -
that is, when the coefficient matrix is triangular, so one equation involves only its own variable!
</p>
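<p>
One last optional sympy check (mine, not the course's) that the combined solution really does satisfy the original system:
</p>
<pre>
# Verify symbolically that X = C1*e^(-t)*[2t, 1] + C3*e^(-t)*[1, 0]
# satisfies X' = A X for A = [[-1, 2], [0, -1]].
import sympy as sp

t, C1, C3 = sp.symbols("t C1 C3")
A = sp.Matrix([[-1, 2], [0, -1]])

X = C1 * sp.exp(-t) * sp.Matrix([2 * t, 1]) + C3 * sp.exp(-t) * sp.Matrix([1, 0])
print(sp.simplify(X.diff(t) - A * X))   # Matrix([[0], [0]])
</pre>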
<h2>Final Notes</h2>
<p>
This one was pretty quick! Probably because this week only covered about two-thirds of the material we normally go through.
Watch this space for a quiz 2 review; in the meantime, make sure to do all the worksheets and homeworks! We have our first round of serious homework
due on Monday; it shouldn't be too bad, but do be careful not to leave it till the last minute. Till next time, auf Wiedersehen, and have a good weekend!
</p>
[/]
[=author "Tyler Clarke"]
[=subject "Calculus"]
[=date "2025-5-31"]
[=title "Differential Equations Week 3"]
[#post.html]