[!]
[=post-]
<p>
<i>This post is part of a series. You can read the next post <a href="[^baseurl]/posts/differential-quiz-4.html">here</a>.</i>
</p>
<p>
Welcome back for yet another week! We're officially in the latter half of the semester now, and things are heating up. We have yet another quiz
on Thursday, June 26th; the second midterm is in just two and a half weeks on Wednesday, July 9th. The semester ends with our very long final exam on July 31st,
so the light at the end of the tunnel is getting close!
</p>
<p>
The quiz was entirely Cauchy-Euler equations. I do believe this is foreshadowing; either way, it's probably a war crime. If you had trouble with all that Cauchy-Euler,
I highly recommend reading <a href="https://tutorial.math.lamar.edu/classes/de/eulerequations.aspx">Paul's Notes</a> on the subject - he has a trick to solve
most Cauchy-Euler problems in two or three simple steps.
</p>
<h2>Some More Oscillations</h2>
<p>
In week 5, we covered a bunch of oscillation material - specifically, damped oscillations. While we can easily solve the basic problems, some more complicated problems with
a forcing term require a more serious consideration of the structure. The general equation for an oscillating system is `m y'' + b y' + k y = F(t)`, where `m` is mass,
`b` is the <i>damping coefficient</i>, `k` is the <i>spring constant</i> (from Hooke's law!), and `F(t)` is some external force. This is also the general formulation
for an SLDE. More useful - because it simplifies some math we'll get to in a second - is the form `y'' + 2 delta y' + omega_0^2 y = f(t)`, where `delta = frac b {2m}`, `omega_0^2 = frac k m`, and
`f(t) = frac {F(t)} m`. Generally, we expect `f(t)` to be a linear combination of `A cos(omega t)` and `A sin(omega t)` where `omega` is frequency and `A` is amplitude.
The most useful variant is `f(t) = Ae^{i omega t} = Acos(omega t) + i A sin(omega t)` because we can represent both trigonometric functions with a single, easily-mathed-upon exponential.
</p>
<p>
Generally speaking, the easiest way to solve vibration systems is with the method of undetermined coefficients. Finding the homogeneous solution is done
as normal; for a general vibration system in the form `y'' + 2 delta y' + omega_0^2 y = Ae^{i omega t}`, the particular solution is found by plugging
into the handy formula `y_p = frac {Ae^{i omega t}} {-omega^2 + 2 delta i omega + omega_0^2}`.
</p>
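<p>
That formula is easy to sanity-check numerically: each derivative of `Ce^{i omega t}` just multiplies it by `i omega`, so plugging the coefficient back into the left side should reproduce `Ae^{i omega t}` exactly. The parameter values below are made up purely for the check - a quick sketch, not part of the course material.
</p>

```python
import cmath

# Hypothetical parameter values, chosen just for this check.
delta, omega0, omega, A = 0.25, 2.0, 1.5, 3.0

# Coefficient from the formula: C = A / (-omega^2 + 2*delta*i*omega + omega0^2)
C = A / (-omega**2 + 2j * delta * omega + omega0**2)

# y_p = C e^{i omega t}; each derivative multiplies by i*omega.
for t in (0.0, 0.7, 2.3):
    y = C * cmath.exp(1j * omega * t)
    lhs = (1j * omega) ** 2 * y + 2 * delta * (1j * omega) * y + omega0**2 * y
    assert abs(lhs - A * cmath.exp(1j * omega * t)) < 1e-12
```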
<p>
Let's do an example. The textbook gives us the pleasant equation `y'' + frac 1 8 y' + y = 3 cos(omega t)`. I won't bore you with the steps to find the homogeneous solution; with the textbook's initial conditions applied to fix the constants, it's
`e^{- frac t {16}} (2 cos(frac {sqrt(255)} {16} t) + frac 2 {sqrt(255)} sin(frac {sqrt(255)} {16} t))`. To find the particular solution, we need to first
determine what `delta`, `omega_0`, and `A` are. In this case, we can just read off: `2 delta = frac 1 8`, `omega_0^2 = 1`, `A = 3` so `delta = frac 1 {16}` and `omega_0 = 1`.
Plugging into the formula above gives us `frac {3e^{i omega t}} {-omega^2 + frac {i omega} 8 + 1}`.
</p>
<p>
Now we do an unpleasant little trick: because we know that `cos(omega t) = Re(e^{i omega t})`, we take the real part of the above formula to get `y_p`.
To do <i>that</i>, we first need to get `i` out of the denominator: the complex conjugate of the denominator is `-omega^2 + 1 - frac {i omega} 8`, so multiplying by
`frac {-omega^2 + 1 - frac {i omega} 8} {-omega^2 + 1 - frac {i omega} 8}` yields a formula with a completely real denominator:
`frac {(-omega^2 + 1 - frac {i omega} 8) (3cos(omega t) + 3i sin(omega t))} {(1 - omega^2)^2 + frac {omega^2} {64}}`.
</p>
<p>
Isolating the real part is pretty easy: we get `frac {-3 omega^2 cos(omega t) + 3cos(omega t) + frac {3 omega sin(omega t)} 8} {(1 - omega^2)^2 + frac {omega^2} {64}}`.
Hence, our final solution is `y = e^{- frac t {16}} (2 cos(frac {sqrt(255)} {16} t) + frac 2 {sqrt(255)} sin(frac {sqrt(255)} {16} t)) + frac {-3 omega^2 cos(omega t) + 3cos(omega t) + frac {3 omega sin(omega t)} 8} {(1 - omega^2)^2 + frac {omega^2} {64}}`.
Yikes.
</p>
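<p>
If you don't quite trust the conjugate algebra (I didn't), you can check numerically that the explicit real formula matches the real part of the complex one. This is a quick sketch with an arbitrary `omega`, not anything from the textbook.
</p>

```python
import cmath
import math

omega = 1.3  # arbitrary test frequency

for t in (0.0, 0.5, 2.0):
    # Real part of the complex form: Re(3 e^{i omega t} / (-omega^2 + i omega/8 + 1))
    complex_form = (3 * cmath.exp(1j * omega * t)
                    / (-omega**2 + 1j * omega / 8 + 1)).real
    # Explicit real form obtained via the conjugate trick.
    denom = (1 - omega**2) ** 2 + omega**2 / 64
    real_form = (-3 * omega**2 * math.cos(omega * t)
                 + 3 * math.cos(omega * t)
                 + 3 * omega * math.sin(omega * t) / 8) / denom
    assert abs(complex_form - real_form) < 1e-12
```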
<h2>Variation of Parameters</h2>
<p>
Note: I skipped over resonance as it doesn't seem that we're doing much with it in this course. If anyone wants to see a deeper exploration of oscillations, shoot me an email!
(Or just read the textbook.)
</p>
<p>
Finally we're on variation of parameters! This is a very nice way to solve a wide variety of 2OLDEs, without relying on the messy algebra of undetermined coefficients.
The drawback is that we have to do integration. Given an equation in the form `y'' + q(t) y' + r(t) y = g(t)`, we first find the solution to the
homogeneous case `y'' + q(t) y' + r(t) y = 0` in the form `y_h = c_1 vec y_1 + c_2 vec y_2`, then find `W` (the Wronskian), the determinant of the matrix `\[vec y_1, vec y_2]`,
then substitute into the formula `y_p(t) = y_2 int frac {y_1 g(t)} {W} dt - y_1 int frac {y_2 g(t)} { W } dt`. Finally, use superposition just like in undetermined coefficients
to get `y = y_h + y_p`. The derivation of this is fascinating, but I'm not going to
include it here; I highly recommend reading Paul's Notes.
</p>
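<p>
The formula is easy to exercise numerically, too. The sketch below applies it to the toy equation `y'' + y = t` (so `y_1 = cos t`, `y_2 = sin t`, `W = 1`), approximating the integrals with a basic trapezoid rule; the equation and the helper function are my own made-up example, not from the course.
</p>

```python
import math

def quad(f, a, b, n=20_000):
    # Basic trapezoid rule; good enough for a sanity check.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

# Toy equation y'' + y = t: y_1 = cos(t), y_2 = sin(t), W = 1, g(t) = t.
def y_p(t):
    u1 = -quad(lambda x: math.sin(x) * x, 0.0, t)  # -int y_2 g / W dt
    u2 = quad(lambda x: math.cos(x) * x, 0.0, t)   #  int y_1 g / W dt
    return u1 * math.cos(t) + u2 * math.sin(t)

# With both integrals started at 0, this works out to y_p(t) = t - sin(t):
# that's t plus a homogeneous piece, still a perfectly good particular solution.
for t in (0.5, 1.0, 2.0):
    assert abs(y_p(t) - (t - math.sin(t))) < 1e-6
```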
<p>
Let's do an example. Given `y'' - 2y' + y = frac {e^t} {t^2 + 1}`, find a general solution. Our first step is going to be finding the homogeneous solution `y_h` to
`y'' - 2y' + y = 0`: this can be found the Normal Way to be `y_h = c_1 e^t + c_2 t e^t` (note the repeated roots). There's a twist: to use variation of parameters,
we need to have a vector solution - we need to know `Y = \[y_h, y_h']`, not just `y_h`. Fortunately, this is very easy to find: `y_h' = c_1e^t + c_2 e^t + c_2 t e^t`.
Thus, we have `Y = c_1 \[e^t, e^t] + c_2 \[t e^t, e^t + t e^t]`. Note that `y_1 = e^t, y_2 = t e^t`, <i>not</i> the full vectors - that actually tripped me up while writing this post.
</p>
<p>
To find `W`, we need to take the determinant of `\[\[e^t, t e^t], \[e^t, e^t + t e^t]]`. This is pretty easy: it's `W = e^{2t}`. We know also that `g(t) = frac {e^t} {t^2 + 1}`,
so we can substitute into our general form to get `y_p = t e^t int frac { e^{2t} } { e^{2t} (t^2 + 1) } dt - e^t int frac { t e^{2t} } { e^{2t}(t^2 + 1) } dt`.
Integrating this isn't too hard, but it <i>is</i> somewhat tedious: I won't bore you with the details;
the final result is `y_p = t e^t atan(t) - frac 1 2 e^t ln|t^2 + 1|`.
(Note that I ignored the constants of integration: this is because they would multiply out to be constant multiples of `y_1` and `y_2`, which would be redundant in our final solution.)
</p>
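<p>
We can double-check that answer without redoing any calculus: approximate `y_p'` and `y_p''` with finite differences and confirm that the residual `y_p'' - 2y_p' + y_p - frac {e^t} {t^2 + 1}` is essentially zero. A quick sketch (the step size and sample points are arbitrary):
</p>

```python
import math

def yp(t):
    # The particular solution found above.
    return t * math.exp(t) * math.atan(t) - 0.5 * math.exp(t) * math.log(t**2 + 1)

h = 1e-4
for t in (0.5, 1.0, 2.0):
    d1 = (yp(t + h) - yp(t - h)) / (2 * h)            # approximate y_p'
    d2 = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2   # approximate y_p''
    residual = d2 - 2 * d1 + yp(t) - math.exp(t) / (t**2 + 1)
    assert abs(residual) < 1e-5
```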
<p>
Finally, we put the pieces together: `y = c_1 e^t + c_2 t e^t + t e^t atan(t) - frac 1 2 e^t ln|t^2 + 1|`. Nice! This required a lot less algebra than UC would, and is
pretty elegant. I would argue that, if you can memorize the
formula, Variation of Parameters is much quicker and easier than UC, but it depends on the person and the problem.
</p>
<h2>Enter Laplace</h2>
<p>
A new chapter! This one is gonna be fun. We're finally introducing Laplace transforms! The core idea of Laplace is the same as every other substitution or rewriting method
in calculus: we can solve a hard problem by hoisting it into a system where it's simpler, solving the simple problem there, and then dropping the solution back into our old
system. It's really very elegant. Moving a difficult differential problem from `t`-space, being our normal representation, to `s`-space, being the Laplace transform's space,
allows us to work out a solution as simple algebra. This eliminates many of the problems with solving complex differential equations.
</p>
<p>
Given a function `f(t)` that exists for every `t` where `0 <= t < oo`, the Laplace transform is `F(s) = lim_{A -> oo} int_0^{A} e^{-st} f(t) dt`. This is bulky, so there's a shorthand:
`F(s) = L{f(t)} = int_0^{oo} e^{-st} f(t) dt` (note: the L is supposed to be curly, but I'm using ASCIIMath and there's no support for that symbol). There
are a bunch of well-known Laplace transforms that are worthwhile to memorize, such as `L(1) = frac 1 s`; I'm not going to go through all of them, but I recommend
consulting the textbook for this. It's useful to have them memorized so you don't have to calculate them on your own every time.
</p>
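<p>
The definition is directly computable, which makes it easy to spot-check entries from the table. The sketch below approximates the improper integral with a trapezoid rule truncated at a large `A`, then checks it against `L(1) = frac 1 s` and `L(e^{at}) = frac 1 {s - a}` (which you can verify by hand); the function name and parameters are my own, not standard.
</p>

```python
import math

def laplace_numeric(f, s, A=30.0, n=100_000):
    # Trapezoid-rule approximation of int_0^A e^{-st} f(t) dt.
    h = A / n
    total = 0.5 * (f(0.0) + math.exp(-s * A) * f(A))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 2.0
# L(1) = 1/s
assert abs(laplace_numeric(lambda t: 1.0, s) - 1 / s) < 1e-6
# L(e^{at}) = 1/(s - a), here with a = 0.5
assert abs(laplace_numeric(lambda t: math.exp(0.5 * t), s) - 1 / (s - 0.5)) < 1e-6
```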
<p>
Note that `L(a + b) = L(a) + L(b)` for functions `a(t)` and `b(t)`. This is immediately obvious if you substitute into the expanded formula:
`lim_{A -> oo} int_0^{A} e^{-st} (a + b) dt = lim_{A -> oo} int_0^{A} (a e^{-st} + b e^{-st}) dt = lim_{A -> oo} int_0^{A} a e^{-st} dt + lim_{A -> oo} int_0^{A} b e^{-st} dt = L(a) + L(b)`.
Note also that constants propagate out of the Laplace transform. These two facts mean that the Laplace transform is a <i>linear operator</i>.
</p>
<p>
Laplace transforms can also be applied to <i>piecewise continuous functions</i>, meaning they can be used to solve problems with essentially any nonsmooth function that is defined
on `\[0, oo)`. This is done by breaking up the integral. For instance, for a piecewise function defined on `\[0, 4)` and `\[4, oo)` separately, your Laplace transform
will look like `F(s) = int_0^4 e^{-st} f(t) dt + lim_{A -> oo} int_4^A e^{-st} f(t) dt` (note that only the second piece needs the limit).
</p>
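<p>
Here's a sketch of that split for a concrete, made-up piecewise function: `f(t) = 1` on `\[0, 4)` and `f(t) = t` on `\[4, oo)`. Each piece is integrated numerically and the sum is compared against the closed form, which works out (by hand) to `frac {1 - e^{-4s}} s + e^{-4s} frac {4s + 1} {s^2}`. The helper and the choice of `s` are mine.
</p>

```python
import math

def trapz(f, a, b, n=100_000):
    # Basic trapezoid rule; fine for a quick sanity check.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

s = 1.5
# Piece on [0, 4): f(t) = 1.  Piece on [4, oo): f(t) = t, truncated at A = 40.
numeric = (trapz(lambda t: math.exp(-s * t), 0.0, 4.0)
           + trapz(lambda t: t * math.exp(-s * t), 4.0, 40.0))
exact = (1 - math.exp(-4 * s)) / s + math.exp(-4 * s) * (4 * s + 1) / s**2
assert abs(numeric - exact) < 1e-6
```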
<p>
Note that Laplace transforms are not guaranteed to exist. The function `f(t)` has to be defined (and at least piecewise continuous) on all of `\[0, oo)`, <i>and</i> has to be of exponential order -
it has to be <i>bounded</i> by an exponential function, meaning `|f(t)| <= Ke^{at}` for some finite real `K` and `a`; otherwise the defining integral may not converge. Note that the bound doesn't have to hold over the entire
set - as long as it <i>eventually</i> fits the bound, we're fine. I won't cover how to check or prove this here.
</p>
<h2>Final Notes</h2>
<p>
It's been an exciting week! We covered a lot of material that I was very, very excited for, and there's more around the corner (solving differential equations
with Laplace transforms is gonna be fun). As previously mentioned, we have a quiz coming up; I'll post some review materials here tomorrow or Wednesday.
</p>
<p>
For unpleasant personal reasons, I wasn't able to post any review materials for quiz 3. My draft isn't complete enough to post: if anyone's interested in posthumous study resources,
shoot me an email and I'll finish 'em up.
</p>
<p>
That's everything for now. See ya later, and good luck!
</p>
[/]
[=author "Tyler Clarke"]
[=date "2025-6-23"]
[=subject "Calculus"]
[=title "Differential Equations Week 6"]
[#post.html]