post!

This commit is contained in:
2025-06-23 11:54:56 -04:00
parent 0fae262081
commit cc324a1046
3 changed files with 202 additions and 0 deletions


@@ -0,0 +1,79 @@
[!]
[=post-]
<p>
Hello, everyone! I'm a bit late to this one; yesterday was pretty hectic. Fortunately, this quiz is only on 4.1 to 4.3 - at the risk of taunting Murphy, it should be a milk run.
</p>
<h2>WS4.1.1: Classification</h2>
<p>
The only question in worksheet 4.1 is a classification task. We're given a bunch of equations and asked whether they're linear or nonlinear, and, if linear, whether they're
homogeneous or nonhomogeneous.
</p>
<ol>
<li>`y'' - 2y' - 2y = 0`: This is clearly linear; it's in the form `l(t)y'' + p(t)y' + q(t)y = r(t)`. It's also homogeneous, because `r(t)` is 0.</li>
<li>`2y'' + 2y' + 3y = 0`: This is in the same form, and `r(t)` is still `0`, so this is a linear homogeneous differential equation.</li>
<li>`y''' - y'' = 6`: This is still linear! The `y'''` term might be a bit disconcerting, but the equation is still in the third-order linear form.
It's nonhomogeneous because `6` is not equal to `0`.
</li>
<li>`y'' - 2y' + 2y = e^x tan(x)`: Linear nonhomogeneous. Note that if the right-hand side had any mention of `y` it might not be linear; in this case, though,
everything works out.
</li>
<li>
`y' y'' = 4x`: This isn't linear; the product `y' y''` means one derivative's coefficient depends on the other derivative, which no linear form allows.
</li>
<li>
`2y'' = 3y^2`: This also isn't linear. Even if we rewrite like `2y'' - 3y^2 = 0`, we have a pesky `y^2` term that can't exist in a linear differential equation.
</li>
</ol>
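<p>
If you'd like to sanity-check classifications like these with a computer, sympy can do it. This is purely an optional sketch - the hint names below are sympy's own, not anything from the worksheet:
</p>

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Equation 1: y'' - 2y' - 2y = 0 -- we expect linear and homogeneous
eq1 = sp.Eq(y(t).diff(t, 2) - 2*y(t).diff(t) - 2*y(t), 0)
hints1 = sp.classify_ode(eq1, y(t))
print('nth_linear_constant_coeff_homogeneous' in hints1)  # True

# Equation 6: 2y'' = 3y^2 -- the y^2 term makes it nonlinear
eq6 = sp.Eq(2*y(t).diff(t, 2), 3*y(t)**2)
hints6 = sp.classify_ode(eq6, y(t))
print('nth_linear_constant_coeff_homogeneous' in hints6)  # False
```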
<h2>WS4.2.1.a: Intervals of Existence</h2>
<p>
Finally we're doing existence in second-order linear DEs! The idea is pretty much the same as first-order: having an equation in the form `y'' + p(t)y' + q(t)y = r(t)`,
the interval of existence is the intersection of the intervals of existence of `p(t)`, `q(t)`, and `r(t)`. We handle IVPs much the same way, as well: the interval around
our initial point `t_0` is the interval of existence for that specific solution.
</p>
<p>
We're given the IVP `t(t - 4)y'' + 3ty' + 4y = 2, y(3) = 0, y'(3) = 1`. This is clearly a nonhomogeneous 2OLDE; to find our intervals of existence, we need to get the coefficient
of `y''` to be just `1`. We do this by dividing the entire equation by `t(t - 4)`: `y'' + frac 3 {t - 4} y' + frac 4 {t(t - 4)} y = frac 2 {t(t - 4)}`. Clearly,
`p(t) = frac 3 {t - 4}`, `q(t) = frac 4 {t(t - 4)}`, and `r(t) = frac 2 {t(t - 4)}`. `q(t)` and `r(t)` have the same interval of existence; they're defined whenever
`t(t - 4)` is nonzero, so `(-oo, 0) cup (0, 4) cup (4, oo)`. `p(t)` is defined for `(-oo, 4) cup (4, oo)`. Since our starting point is `t=3`, it falls inside the
`(0, 4)` range - meaning the interval of existence of our solution is `0 < t < 4`.
</p>
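<p>
The singularity hunt above is mechanical enough to automate; here's an optional sympy sketch of the same process (my own scaffolding, not a course technique):
</p>

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Coefficients after dividing t(t - 4)y'' + 3ty' + 4y = 2 through by t(t - 4)
p = 3 / (t - 4)
q = 4 / (t * (t - 4))
r = 2 / (t * (t - 4))

# Collect every point where some coefficient blows up
bad = set()
for f in (p, q, r):
    bad.update(sp.singularities(f, t))
print(sorted(bad))  # [0, 4]

# The interval of existence is the gap between singularities containing t0 = 3
t0 = 3
left = max((b for b in bad if b < t0), default=-sp.oo)
right = min((b for b in bad if b > t0), default=sp.oo)
print(left, right)  # 0 4
```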
<h2>WS4.2.2: Solving Some Coefficients</h2>
<p>
This question is fairly cryptic; as best as I can tell, it's asking us to find some variant of `y = c_1 e^{x} + c_2 e^{-x}` that matches the IVP
`y'' - y = 0, y(0) = 0, y'(0) = 1`. Because we already know `y` in terms of some arbitrary constants, we can immediately skip to solving the IVP:
differentiating gives us `y' = c_1 e^x - c_2 e^{-x}`, and substituting into our equations for `y` and `y'` respectively gives us
`0 = c_1 + c_2, 1 = c_1 - c_2`. These easily solve to `c_1 = frac 1 2, c_2 = - frac 1 2`, so our final result is `y = frac 1 2 e^x - frac 1 2 e^{-x}`.
</p>
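<p>
The little two-equation system at the end is easy to check by hand, but if you want a second opinion, sympy will solve it too (optional sketch):
</p>

```python
import sympy as sp

c1, c2, x = sp.symbols('c1 c2 x')
y = c1 * sp.exp(x) + c2 * sp.exp(-x)
yp = sp.diff(y, x)

# Impose y(0) = 0 and y'(0) = 1
sol = sp.solve([sp.Eq(y.subs(x, 0), 0), sp.Eq(yp.subs(x, 0), 1)], [c1, c2])
print(sol)  # {c1: 1/2, c2: -1/2}
```

(Fun fact: `frac 1 2 e^x - frac 1 2 e^{-x}` is just `sinh(x)`.)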
<h2>WS4.3.1: Solving A 2OLDE</h2>
<p>
Finally, some solving! We're given the pleasant linear homogeneous equation `2y'' - 5y' - 3y = 0`, and asked to solve and graph. I'm not going to post the graphs here,
but you should do them for practice!
</p>
<p>
Because this is homogeneous and has constant coefficients, we can instantly turn it into an <i>auxiliary</i> polynomial, find the roots `lambda_1` and `lambda_2`, and
substitute them into the general solution `y = c_1 e^{lambda_1 t} + c_2 e^{lambda_2 t}`. For an equation `a y'' + by' + cy = 0`, the auxiliary polynomial is
`a lambda^2 + b lambda + c = 0`, so we have `2 lambda^2 - 5 lambda - 3 = 0`. This solves easily to `lambda_1 = - frac 1 2`, `lambda_2 = 3`. Substituting these in
gives us `y = c_1 e^{- frac 1 2 t} + c_2 e^{3t}`.
</p>
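<p>
If factoring the auxiliary polynomial by hand feels error-prone, you can double-check the roots symbolically (optional sympy sketch):
</p>

```python
import sympy as sp

lam = sp.symbols('lambda')

# Auxiliary polynomial for 2y'' - 5y' - 3y = 0
roots = sp.solve(2*lam**2 - 5*lam - 3, lam)
print(sorted(roots))  # [-1/2, 3]
```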
<p>
Remember that phase portraits are parametric graphs in terms of `y` and `y'`, and solution graphs are normal solution graphs in terms of `y` and `t`.
You can find `y'` by differentiating `y` if you didn't use a technique that provides `y'` as a side effect.
</p>
<h2>SA36: Verifying Fundamental Sets</h2>
<p>
This isn't hard at all, but it's important to understand. We're given an equation `x^2y'' - 6xy' + 12y = 0`, and asked to verify that `y_1(x) = x^3, y_2(x) = x^4` is a fundamental
solution set. The idea is that two solutions are a fundamental set if the corresponding SLDE matrix they produce is not singular; it's probably easier to
just do the question than to explain the process. Our first step is to verify that these are actually solutions: the substitution is fairly simple, and I'll leave it as an exercise
to the reader.
We now take the derivatives `y_1' = 3x^2` and `y_2' = 4x^3` and populate a matrix:
`\[\[x^3, x^4], \[3x^2, 4x^3]]`. Note that this is actually an SLDE matrix. As long as its determinant is nonzero, the solutions are linearly independent.
The determinant is `x^6`, which is not always 0, so the set is linearly independent. Easy-peasy!
</p>
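<p>
The substitution check I skipped and the determinant are both one-liners in sympy, if you want to see the whole verification in one place (optional sketch):
</p>

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = x**3, x**4

# First check that both candidates really solve x^2 y'' - 6x y' + 12y = 0
for y in (y1, y2):
    assert sp.simplify(x**2*sp.diff(y, x, 2) - 6*x*sp.diff(y, x) + 12*y) == 0

# Then build the matrix [[y1, y2], [y1', y2']] and take its determinant
W = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(W)  # x**6
```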
[/]
[=author "Tyler Clarke"]
[=date "2025-6-17"]
[=subject "Calculus"]
[=title "Differential Quiz 3"]
[=unpub]
[#post.html]


@@ -1,5 +1,8 @@
[!]
[=post-]
<p>
<i>This is part of a series. You can read the next post <a href="[^baseurl]/posts/differential-review-6.html">here</a>.</i>
</p>
<p>
Hello, dear readers! This post is a bit strange, even by Deadly Boring standards; because of an exam on June 9th I was unable to write the week 4 review,
so this post covers all of week 4 and week 5 together.


@@ -0,0 +1,120 @@
[!]
[=post-]
<p>
Welcome back for yet another week! We're officially in the latter half of the semester now, and things are heating up. We have yet another quiz
on Thursday, June 26th; the second midterm is in just two and a half weeks on Wednesday, July 9th. The semester ends with our very long final exam on July 31st,
so the light at the end of the tunnel is getting close!
</p>
<p>
The quiz was entirely Cauchy-Euler equations. I do believe this is foreshadowing; either way, it's probably a war crime. If you had trouble with all that Cauchy-Euler,
I highly recommend reading the <a href="https://tutorial.math.lamar.edu/classes/de/eulerequations.aspx">Paul's Notes</a> on the subject - he has a trick to solve
most Cauchy-Euler problems in two or three simple steps.
</p>
<h2>Some More Oscillations</h2>
<p>
In week 5, we covered a bunch of oscillation material - specifically, damped ones. While we can easily solve the basic problems, some more complicated problems with
a forcing term require a more serious consideration of the structure. The general equation for an oscillating system is `m y'' + b y' + k y = F(t)`, where `m` is mass,
`b` is the <i>damping coefficient</i>, `k` is the <i>spring constant</i> (from Hooke's law!), and `F(t)` is some external force. This is also the general formulation
for an SLDE. More useful, because it simplifies some math we'll get to in a second, is the normalized form `y'' + 2 delta y' + omega_0^2 y = f(t)`, where `delta = frac b {2m}`, `omega_0^2 = frac k m`, and
`f(t) = frac {F(t)} m`. Generally, we expect `f(t)` to be a linear combination of `A cos(omega t)` and `A sin(omega t)` where `omega` is frequency and `A` is amplitude.
The most useful variant is `f(t) = Ae^{i omega t} = Acos(omega t) + i A sin(omega t)` because we can represent both trigonometric functions with a single, easily-mathed-upon exponential.
</p>
<p>
Generally speaking, the easiest way to solve vibration systems is with the method of undetermined coefficients. Finding the general solution is done
as normal; for a general vibration system in the form `y'' + 2 delta y' + omega_0^2 y = Ae^{i omega t}`, the particular solution is found by plugging
into the handy formula `y_p = frac {Ae^{i omega t}} {-omega^2 + 2 delta i omega + omega_0^2}`.
</p>
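<p>
If that formula looks like magic, you can verify it by substitution: plug `y_p` into the left-hand side and confirm the forcing term pops back out. Here's an optional sympy check:
</p>

```python
import sympy as sp

t = sp.symbols('t', real=True)
A, delta, omega, omega0 = sp.symbols('A delta omega omega0', positive=True)

f = A * sp.exp(sp.I * omega * t)
yp = f / (-omega**2 + 2*delta*sp.I*omega + omega0**2)

# Plug y_p into y'' + 2*delta*y' + omega0^2*y; we should get f(t) back
lhs = sp.diff(yp, t, 2) + 2*delta*sp.diff(yp, t) + omega0**2*yp
print(sp.simplify(lhs - f))  # 0
```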
<p>
Let's do an example. The textbook gives us the pleasant equation `y'' + frac 1 8 y' + y = 3 cos(omega t)`. I won't bore you with the steps to find the general solution; it's
`e^{- frac t {16}} (2 cos(frac {sqrt(255)} {16} t) + frac 2 {sqrt(255)} sin(frac {sqrt(255)} {16} t))`. To solve the particular solution, we need to first
determine what `delta`, `omega_0`, and `A` are. In this case, we can just read off: `2 delta = frac 1 8`, `omega_0^2 = 1`, `A = 3` so `delta = frac 1 {16}` and `omega_0 = 1`.
Plugging into the formula above gives us `frac {3e^{i omega t}} {-omega^2 + frac {i omega} 8 + 1}`.
</p>
<p>
Now we do an unpleasant little trick: because we know that `cos(omega t) = Re(e^{i omega t})`, we take the real part of the above formula to get `y_p`.
To do <i>that</i>, we first need to get `i` out of the denominator: the complex conjugate of the denominator is `-omega^2 + 1 - frac {i omega} 8`, so multiplying by
`frac {-omega^2 + 1 - frac {i omega} 8} {-omega^2 + 1 - frac {i omega} 8}` yields a formula with a completely real denominator:
`frac {(-omega^2 + 1 - frac {i omega} 8) (3cos(omega t) + 3i sin(omega t))} {(1 - omega^2)^2 + frac {omega^2} {64}}`.
</p>
<p>
Isolating the real part is pretty easy: we get `frac {-3 omega^2 cos(omega t) + 3cos(omega t) + frac {3 omega sin(omega t)} 8} {(1 - omega^2)^2 + frac {omega^2} {64}}`.
Hence, our final solution is `y = e^{- frac t {16}} (2 cos(frac {sqrt(255)} {16} t) + frac 2 {sqrt(255)} sin(frac {sqrt(255)} {16} t)) + frac {-3 omega^2 cos(omega t) + 3cos(omega t) + frac {3 omega sin(omega t)} 8} {(1 - omega^2)^2 + frac {omega^2} {64}}`.
Yikes.
</p>
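<p>
Since that expression is ugly enough to invite transcription errors, it's worth confirming that the particular solution actually satisfies the equation. An optional sympy check:
</p>

```python
import sympy as sp

t = sp.symbols('t', real=True)
omega = sp.symbols('omega', positive=True)

den = (1 - omega**2)**2 + omega**2/64
yp = (-3*omega**2*sp.cos(omega*t) + 3*sp.cos(omega*t)
      + sp.Rational(3, 8)*omega*sp.sin(omega*t)) / den

# y_p'' + y_p'/8 + y_p should come out to exactly 3 cos(omega t)
residual = sp.diff(yp, t, 2) + sp.diff(yp, t)/8 + yp - 3*sp.cos(omega*t)
print(sp.simplify(residual))  # 0
```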
<h2>Variation of Parameters</h2>
<p>
Note: I skipped over resonance as it doesn't seem that we're doing much with it in this course. If anyone wants to see a deeper exploration of oscillations, shoot me an email!
(Or just read the textbook.)
</p>
<p>
Finally we're on variation of parameters! This is a very nice way to solve a wide variety of 2OLDEs, without relying on the messy algebra of undetermined coefficients.
The drawback is that we have to do integration. Given an equation in the form `y'' + q(t) y' + r(t) y = g(t)`, we first find the solution to the
homogeneous case `y'' + q(t) y' + r(t) y = 0` in the form `y_g = c_1 vec y_1 + c_2 vec y_2`, then find `W`, the determinant of the matrix `\[vec y_1, vec y_2]`,
then substitute into the formula `y_p(t) = y_2 int frac {y_1 g(t)} {W} dt - y_1 int frac {y_2 g(t)} { W } dt`. Finally, use superposition just like in undetermined coefficients
to get `y = y_g + y_p`. The derivation of this is fascinating, but I'm not going to
include it here; I highly recommend reading the Paul's notes.
</p>
<p>
Let's do an example. Given `y'' - 2y' + y = frac {e^t} {t^2 + 1}`, find a general solution. Our first step is going to be to find the `y_h` homogeneous solution to
`y'' - 2y' + y = 0`: this can be found the Normal Way to be `y_h = c_1 e^t + c_2 t e^t` (note the repeated roots). There's a twist: to use variation of parameters,
we need to have a vector solution - we need to know `Y = \[y_h, y_h']`, not just `y_h`. Fortunately, this is very easy to find: `y_h' = c_1e^t + c_2 e^t + c_2 t e^t`.
Thus, we have `Y = c_1 \[e^t, e^t] + c_2 \[t e^t, e^t + t e^t]`. Note that `y_1 = e^t, y_2 = t e^t`, <i>not</i> the full vectors - that actually tripped me up while writing this post.
</p>
<p>
To find `W`, we need to take the determinant of `\[\[e^t, t e^t], \[e^t, e^t + t e^t]]`. This is pretty easy: it's `W = e^{2t}`. We know also that `g(t) = frac {e^t} {t^2 + 1}`,
so we can substitute into our general form to get `y_p = t e^t int frac { e^{2t} } { e^{2t} (t^2 + 1) } dt - e^t int frac { t e^{2t} } { e^{2t}(t^2 + 1) } dt`.
Integrating this isn't too hard, but it <i>is</i> somewhat tedious: I won't bore you with the details;
the final result is `y_p = t e^t atan(t) - frac 1 2 e^t ln|t^2 + 1|`.
(note that I ignored the constants of integration: this is because they would multiply out to be constant multiples of `y_1` and `y_2`, which would be redundant in our final solution).
</p>
<p>
Finally, we put the pieces together: `y = c_1 e^t + c_2 t e^t + t e^t atan(t) - frac 1 2 e^t ln|t^2 + 1|`. Nice! This required a lot less algebra than UC would, and is
pretty elegant. I would argue that, if you can memorize the
formula, Variation of Parameters is much quicker and easier than UC, but it depends on the person and the problem.
</p>
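<p>
As a final sanity check on the whole variation-of-parameters run, you can substitute the answer back into the original equation (optional sympy sketch):
</p>

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
y = (c1*sp.exp(t) + c2*t*sp.exp(t)
     + t*sp.exp(t)*sp.atan(t) - sp.Rational(1, 2)*sp.exp(t)*sp.log(t**2 + 1))

# y'' - 2y' + y should reduce to the original right-hand side
residual = sp.diff(y, t, 2) - 2*sp.diff(y, t) + y - sp.exp(t)/(t**2 + 1)
print(sp.simplify(residual))  # 0
```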
<h2>Enter Laplace</h2>
<p>
A new chapter! This one is gonna be fun. We're finally introducing Laplace transforms! The core idea of Laplace is the same as every other substitution or rewriting method
in calculus: that we can solve a hard problem by hoisting the problem to a system where it's simpler, then solve the simple problem, then drop the solution back to our old
system. It's really very elegant. Moving a difficult differential problem from `t`-space, being our normal representation, to `s`-space, being the Laplace transform's space,
allows us to work out a solution as simple algebra. This eliminates many of the problems with solving complex differential equations.
</p>
<p>
Given a function `f(t)` that exists for every `t` where `0 <= t < oo`, the Laplace transform is `F(s) = lim_{A -> oo} int_0^{A} e^{-st} f(t) dt`. This is bulky, so there's a shorthand:
`F(s) = L{f(t)} = int_0^{oo} e^{-st} f(t) dt` (note: the L is supposed to be curly, but I'm using ASCIIMath and there's no support for that symbol). There
are a bunch of well-known Laplace transforms that are worthwhile to memorize, such as `L(1) = frac 1 s`; I'm not going to go through all of them, but I recommend
consulting the textbook for this. It's useful to have them memorized so you don't have to calculate them on your own every time.
</p>
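<p>
If you'd rather derive a table entry than memorize it, sympy can compute transforms directly. An optional sketch with a few standard entries (my picks, not the textbook's table order):
</p>

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# A few standard table entries
F1 = sp.laplace_transform(1, t, s, noconds=True)
F2 = sp.laplace_transform(sp.exp(-2*t), t, s, noconds=True)
F3 = sp.laplace_transform(sp.sin(t), t, s, noconds=True)
print(F1, F2, F3)  # 1/s 1/(s + 2) 1/(s**2 + 1)
```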
<p>
Note that `L(a + b) = L(a) + L(b)`. This is immediately obvious if you substitute into the expanded formula:
`lim_{A -> oo} int_0^{A} e^{-st} (a + b) dt = lim_{A -> oo} int_0^{A} a e^{-st} + b e^{-st} dt = lim_{A -> oo} int_0^{A} a e^{-st} dt + int_0^A b e^{-st} dt = lim_{A -> oo} int_0^{A} a e^{-st} dt + lim_{B -> oo} int_0^B b e^{-st} dt`
Note also that constants propagate out of the Laplace transform. These two facts mean that the Laplace transform is a <i>linear operator</i>.
</p>
<p>
Laplace transforms can also be applied to <i>piecewise continuous functions</i>, meaning they can be used to solve problems with essentially any nonsmooth function that is defined
on `\[0, oo)`. This is done by breaking up the integral. For instance, for a piecewise function defined on `\[0, 4)` and `\[4, oo)` separately, your Laplace transform
will look like `F(s) = int_0^4 e^{-st} f(t) dt + lim_{A -> oo} int_4^A e^{-st} f(t) dt`.
</p>
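<p>
Here's the split-integral idea worked through for a hypothetical piecewise function of my own choosing - `f(t) = 1` on `\[0, 4)` and `0` on `\[4, oo)` - as an optional sympy sketch:
</p>

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# f(t) = 1 on [0, 4), 0 on [4, oo): break the transform at t = 4
F = (sp.integrate(sp.exp(-s*t) * 1, (t, 0, 4))
     + sp.integrate(sp.exp(-s*t) * 0, (t, 4, sp.oo)))
print(sp.simplify(F))  # equivalent to (1 - exp(-4*s))/s
```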
<p>
Note that Laplace transforms are not guaranteed to exist. The function `f(t)` has to be piecewise continuous on `\[0, oo)`, <i>and</i> has to be of exponential order -
it has to be <i>bounded</i> by an exponential function, meaning `f(t) <= Ke^{at}` for some finite real `K` and `a`. Note that this doesn't have to be true over the entire
set - as long as it <i>eventually</i> fits the bound, we're fine. I won't cover how to check or prove this here.
</p>
<h2>Final Notes</h2>
<p>
It's been an exciting week! We covered a lot of material that I was very, very excited for, and there's more around the corner (solving differential equations
with Laplace transforms is gonna be fun). As previously mentioned, we have a quiz coming up; I'll post some review materials here tomorrow or Wednesday.
</p>
<p>
For unpleasant personal reasons, I wasn't able to post any review materials for quiz 3. My draft isn't complete enough to post: if anyone's interested in posthumous study resources,
shoot me an email and I'll finish 'em up.
</p>
<p>
That's everything for now. See ya later, and good luck!
</p>
[/]
[=author "Tyler Clarke"]
[=date "2025-6-23"]
[=subject "Calculus"]
[=title "Differential Equations Week 6"]
[#post.html]