finally... week 2 review, done!

2025-05-25 13:31:49 -04:00
parent 55a2c0de88
commit d0beab09c3
3 changed files with 230 additions and 0 deletions


@@ -52,6 +52,8 @@
</div>
[/]
<p style="margin-top: auto;">
<a href="https://tutorial.math.lamar.edu/">Paul's Online Math Notes</a><br>
<a href="https://khanacademy.org/">Khan Academy</a><br><br>
Notice a mistake? Contact me at <a href="mailto:plupy44@gmail.com">plupy44@gmail.com</a><br>
<a href="[^baseurl]/contributing.html">How To Contribute</a><br>
Rendered with <a href="https://swaous.asuscomm.com/sitix">Sitix</a><br>


@@ -1,5 +1,8 @@
[!]
[=post-]
<p>
<i>This post is part of a series on Differential Equations; you can read the next post in the series <a href="[^baseurl]/posts/differential-review-2.html">here</a>.</i>
</p>
<p>
Hello again! With the specter of quiz 1 looming on the horizon (it's tomorrow, May 22nd, in the usual studio), I figured I'd post some last-minute
review materials. If you haven't already read it, I'd recommend checking out <a href="[^baseurl]/posts/differential-review-1.html">differential week 1 review</a>, as


@@ -0,0 +1,225 @@
[!]
[=post-]
<p>
Hello, everyone! It's only been a few days since I last screamed into the void here, but it feels like a proper eternity. Hard to believe we're only 2 weeks in!
The quiz Thursday went well! Nobody else wore funny hats, but I met someone who's been reading Deadly Boring Math (a fellow physics major, no less!), which was pretty cool.
We covered a massive amount of material this week, and are now equipped to solve some very fancy and complicated problems.
</p>
<h2>Uniqueness and Intervals of Existence</h2>
<p>
Particularly useful in our study of differential equations is the ability to determine whether or not a differential equation has a solution
<i>before solving it</i>. This allows us to skip solving an unsolvable equation, which is nice, but perhaps more importantly lets us determine
<i>where</i> a solution might exist: the <i>interval of existence</i>.
</p>
<p>
This is easiest to demonstrate for a linear differential equation. Given an equation in the form `frac {dy} {dt} + p(t)y = q(t)`,
the interval of existence is simply the intersection of the intervals on which `p(t)` and `q(t)` are continuous. For example:
given the equation `frac {dy} {dt} + frac y t = ln(t + 2)`, we have `p(t) = frac 1 t` and `q(t) = ln(t + 2)`: `p(t)` is defined
for `-oo < t < 0, 0 < t < oo`, and `q(t)` is defined whenever `t + 2 > 0` or `t > -2` (the input to `ln` must always be positive),
so the interval of existence of our equation is `-2 < t < 0, 0 < t < oo`.
</p>
<p>
Knowing the interval of existence also tells us where a <i>unique solution</i> exists, given a starting point. For instance, if we
start at `t = -1` in the above example, the solution exists and is unique for `-2 < t < 0`, because that's the interval that contains
`-1`; if we instead start at `t = 1`, the solution exists and is unique for `0 < t < oo`. Just pick the interval that fits.
<b>Note that this theorem says nothing about non-existence</b>; there <i>might</i> be a solution for `t = 0` or `t = -3`, but
we can't say anything about it without actually finding the solution.
</p>
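<p>
You won't have a computer on the quiz, but for homework it's nice to be able to sanity-check these intervals. Here's a minimal SymPy sketch of the example above (assuming you have SymPy installed; this is scratch work, not an official course method):
</p>
<pre><code># Sanity-check the interval of existence for dy/dt + y/t = ln(t + 2).
from sympy import Symbol, S, log
from sympy.calculus.util import continuous_domain

t = Symbol('t', real=True)
p = 1/t
q = log(t + 2)

dom_p = continuous_domain(p, t, S.Reals)   # (-oo, 0) union (0, oo)
dom_q = continuous_domain(q, t, S.Reals)   # (-2, oo)
dom = dom_p.intersect(dom_q)               # (-2, 0) union (0, oo)
print(dom)

# For a starting point t = -1, the interval of existence is the piece containing -1:
print(S(-1) in dom)   # True; that piece is (-2, 0)
</code></pre>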
<p>
I won't include the proof for why this works as it's complicated and not particularly germane. The textbook does include it,
and if you like that sort of thing, it's worth checking out. As always, if you'd like a Deadly Boring explanation of any of the proofs
in this course, shoot me an email and I'll write a post about 'em!
</p>
<p>
What if the system isn't linear? Sadly, we don't actually have a way to find a rigid interval of existence; all we can do is say
that one definitely exists (note that we still can't say that one <i>doesn't</i> exist). To do this, get the system in the form
`frac {dy} {dt} = f(t, y)` and take the derivative `frac {df} {dy}`. If `f` and `frac {df} {dy}` are both continuous over some interval
in `y` that contains the initial value `y_0`, and over some other interval in `t` that contains the initial value `t_0`, then there
is a unique solution to the IVP somewhere inside those two intervals. We can't say <i>exactly</i> what interval this is- it could be
a wide variety of different intervals inside our bounds. What we <i>can</i> say for sure is that the interval exists and contains `t_0`, meaning it's reasonable
to continue.
</p>
<p>
Let's do an example. Given the particularly nasty and very much nonlinear equation `frac {dy} {dt} = frac {t^2 - 4t + 13} {y - 1}`,
is there a solution for the initial condition `y(3) = 4`?
</p>
<p>
In this case, it's quite clear that `f(t, y) = frac {t^2 - 4t + 13} {y - 1}`. The numerator here is defined for all `t`, and the rest is defined
for `-oo < y < 1, 1 < y < oo`. `frac {df} {dy}` initially looks difficult, but is in fact quite simple - this is a partial derivative,
so `t` is held constant. `frac {df} {dy} = -frac {t^2 - 4t + 13} {(y - 1)^2}`. This has exactly the same intervals. Hence, we can guarantee
a unique solution through the initial condition `y(3) = 4`. Note that the theorem guarantees neither existence nor uniqueness for a condition like `y(a) = 1`,
where its hypotheses fail; it simply tells us nothing there. If we wanted to know the exact interval of existence, we'd have to actually solve the equation.
</p>
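<p>
If you'd like to see that check done by machine, here's a quick SymPy sketch (again, just scratch work under the assumption that you have SymPy handy):
</p>
<pre><code># Check the hypotheses of the nonlinear existence/uniqueness theorem at (t, y) = (3, 4).
from sympy import symbols, diff, simplify

t, y = symbols('t y', real=True)
f = (t**2 - 4*t + 13) / (y - 1)
df_dy = simplify(diff(f, y))
print(df_dy)   # -(t**2 - 4*t + 13)/(y - 1)**2

# Both f and df/dy are continuous wherever y != 1, so any rectangle around (3, 4)
# that stays above y = 1 satisfies the theorem.
print(f.subs({t: 3, y: 4}), df_dy.subs({t: 3, y: 4}))   # finite values: 10/3 and -10/9
</code></pre>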
<h2>Brief Aside: Some Distinctions</h2>
<p>
Linear ODEs are different from nonlinear ones for a wide variety of reasons. I won't get into all of them in detail, but here's a brief overview:
</p>
<ul>
<li>
First-order linear differential equations always have a general solution in terms of a single arbitrary constant, and you can get every possible solution
by choosing a value for that constant. For nonlinear equations, it's different - even if you have a solution with an arbitrary constant,
it's not guaranteed that you can find every solution by choosing a value for it.
</li>
<li>
It's also not always possible to find an explicit solution to a nonlinear differential equation; oftentimes, the best you can get is
an implicit solution of the form `F(t, y) = 0`.
</li>
<li>
It often makes sense to use numerical or graphical methods when dealing with nonlinear ODEs, because they're often <i>much</i> harder to solve
than linear ODEs.
</li>
</ul>
<p>
<b>Note:</b> I have skipped all of section 2.5 as it's essentially a rehash of autonomous equations with a bit of modeling thrown in. We'll probably
revisit it later in a quiz or exam review; for now, I'd recommend closely watching the lecture on 2.5 and reading the textbook section.
</p>
<h2>Exact Equations</h2>
<p>
Exact equations are a particularly fascinating class of equation that can be solved trivially with a substitution. The core idea is to find
a function `w(x, y)` with partial derivatives `frac {dw} {dx}` and `frac {dw} {dy}` such that you can rewrite your differential equation
as `frac {dw} {dx} + frac {dw} {dy} frac {dy} {dx} = 0`: by the chain rule, this sum is just the total derivative of `w(x, y(x))` with respect to `x`.
Integrating this will simply yield `w = c`; substituting in the actual `w(x, y)` gives an implicit solution.
</p>
<p>
It sounds confusing, but is really quite simple. For example, given the equation `2x + y^2 + 2xy frac {dy} {dx} = 0`,
if we can find a function `w` for which `frac {dw} {dx} = 2x + y^2` and `frac {dw} {dy} = 2xy`, we can rewrite as
`frac {dw} {dx} + frac {dw} {dy} frac {dy} {dx} = frac {dw} {dx} = 0` and integrate to get `w = c`. Does such a function exist?
If you took multivariable recently, I can only imagine you share my current evil grin. This is just a <a href="[^baseurl]/posts/multivariable-thomas-16.3.html">potential-function problem</a>!
We know how to solve this!
</p>
<p>
The first step is to find the integral `int frac {dw} {dx} dx`. This is trivially found to be `w(x, y) = x^2 + xy^2 + C(y)`. We need to find what `C(y)` is. We have a convenient
`frac {dw} {dy}` hanging around that we can do algebra on, so let's differentiate with respect to `y` and compare: `frac {dw} {dy} = 2xy = frac {d} {dy} (x^2 + xy^2 + C(y))
= 2xy + C'(y)`. Doing some algebra tells us that `C'(y) = 0`, so `C(y) = C`. Substitute into the above equation to get `w(x, y) = x^2 + xy^2 + C`.
</p>
<p>
The partial derivatives of this are exactly what we need (you can check if you don't believe me; I'll wait). Following the
substitution procedure outlined above gives us `x^2 + xy^2 = C` - the explicit form of which is `y = +- sqrt(frac {C - x^2} {x})`, valid wherever `x != 0` and `frac {C - x^2} {x} >= 0`.
</p>
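<p>
If you don't feel like checking the partial derivatives by hand, a few lines of SymPy do it (a sketch only, assuming SymPy is installed):
</p>
<pre><code># Verify the potential function w(x, y) = x^2 + x*y^2 for 2x + y^2 + 2xy*(dy/dx) = 0.
from sympy import symbols, diff

x, y = symbols('x y', real=True)
w = x**2 + x*y**2
print(diff(w, x))   # 2*x + y**2, matching the non-derivative terms
print(diff(w, y))   # 2*x*y, matching the coefficient of dy/dx
</code></pre>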
<p>
Exact equations are obviously quite powerful. <i>If</i> we can turn them into potential-function problems, we can find an analytical solution with very little effort!
Let's do a slightly harder problem (straight from the textbook). We need to find an exact solution to the differential equation `ycos(x) + 2xe^y + (sin(x) + x^2e^y - 1)frac {dy} {dx} = 0`.
</p>
<p>
It's immediately obvious that this is in the necessary form `frac {dw} {dx} + frac {dw} {dy} frac {dy} {dx} = 0`. We have `frac {dw} {dx} = ycos(x) + 2xe^y` and
`frac {dw} {dy} = sin(x) + x^2e^y - 1`. Integrating `frac {dw} {dx}` in terms of `x` gives us `w = ysin(x) + x^2e^y + C(y)`, and taking the derivative
in terms of `y` yields simply `frac {dw} {dy} = sin(x) + x^2e^y + C'(y)`. We can thus construct the equation `sin(x) + x^2e^y + C'(y) = sin(x) + x^2e^y - 1`.
It is immediately obvious that `C'(y) = -1`, so `C(y) = -y + C`. Substitution gives us `w = ysin(x) + x^2e^y - y + C` - this produces the correct partial derivatives.
</p>
<p>
The last step is to simply substitute into the old equation, to get `ysin(x) + x^2e^y - y = C`. This is the correct solution! Using some trickery from
multivariable, we were able to solve a fiendishly difficult differential equation almost effortlessly.
</p>
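<p>
Here's the same kind of machine check for this problem - first that the mixed partials agree (which is exactly what makes the equation exact; more on that condition in a moment), then that our `w` has the right partial derivatives. Treat it as a sanity-check sketch, nothing more:
</p>
<pre><code># Check exactness and the potential function for y*cos(x) + 2x*e^y + (sin(x) + x^2*e^y - 1)y' = 0.
from sympy import symbols, diff, simplify, sin, cos, exp

x, y = symbols('x y', real=True)
M = y*cos(x) + 2*x*exp(y)
N = sin(x) + x**2*exp(y) - 1
print(simplify(diff(M, y) - diff(N, x)))   # 0, so the equation is exact

w = y*sin(x) + x**2*exp(y) - y
print(simplify(diff(w, x) - M))   # 0
print(simplify(diff(w, y) - N))   # 0
</code></pre>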
<p>
It's obviously a good idea to use exact equations whenever possible. And the good news is- some inexact equations can actually be turned into exact
equations with a bit of algebra! The goal is to find some function `mu(x, y)`, an <i>integrating factor</i>, that can be multiplied into the
equation to make it exact (if this sounds familiar, it's because we do this every time we solve a linear differential equation).
</p>
<p>
Exact equations can only be solved via potential function magic if `frac {d} {dy} frac {dw} {dx} = frac {d} {dx} frac {dw} {dy}`. Hence, given an equation
`M(x, y) + N(x, y) frac {dy} {dx} = 0`, we're looking for a `mu` such that `frac {d} {dy} (mu M) = frac {d} {dx} (mu N)`, which makes the new equation exact.
We can use the product rule to turn this into the equation `M frac {d mu} {dy} + mu frac {dM} {dy} = N frac {d mu} {dx} + mu frac {dN} {dx}`,
which can be rearranged to get `M frac {d mu} {dy} - N frac {d mu} {dx} + mu (frac {dM} {dy} - frac {dN} {dx}) = 0`. Unfortunately,
this is usually fairly difficult to solve, except in some special cases where `mu` depends only on one variable. In those cases, we immediately
know that the derivative of `mu` with respect to the <i>other</i> variable is always 0, which greatly simplifies the equation.
</p>
<p>
How can we find whether or not `mu` depends on a single variable? Clever substitution. If `mu` depends only on `x`, then `frac {d} {dy} (mu M) = mu frac {dM} {dy}`
(don't believe me? Try the product rule yourself - the `frac {d mu} {dy}` term vanishes because `mu` doesn't depend on `y`) and `frac {d} {dx} (mu N) = mu frac {dN} {dx} + N frac {d mu} {dx}`.
This is where the math gets really strange: we know that, for this to be valid, `frac {d} {dy} (mu M) = frac {d} {dx} (mu N)`, so we have an equation:
`frac {d} {dy} (mu M) = mu frac {dN} {dx} + N frac {d mu} {dx}`. Knowing what we know about the left side, we get `mu frac {dM} {dy} = mu frac {dN} {dx} + N frac {d mu} {dx}`.
This can be further boiled down to a differential equation `frac {d mu} {dx} = mu frac {frac {dM} {dy} - frac {dN} {dx}} {N}`. If the coefficient of `mu` on the
right hand side is a function in `x` only, then this is a very simple separable ODE, and we have our `mu`!
</p>
<p>
Let's do an example. Straight from the textbook, we're given an equation `3xy + y^2 + (x^2 + xy)y' = 0`. This cannot be solved directly; we need
to find an integrating factor. It's easy enough to read off that `M = 3xy + y^2` and `N = x^2 + xy`, which we can substitute into the above
equation to get `frac {d mu} {dx} = mu frac {3x + 2y - 2x - y} {x^2 + xy}`. Simplify to get `frac {d mu} {dx} = mu frac {x + y} {x^2 + xy} =
mu frac {x + y} {x(x + y)} = mu frac 1 x`. Our coefficient is in terms of `x` only - it's separable!
</p>
<p>
We can separate out to get `frac 1 { mu } d mu = frac 1 { x } dx`. Integrating this yields `ln|mu| = ln|x|`, so `mu(x) = x` (we can drop the constant of integration,
since any single integrating factor will do). Nice! I won't carry the solution further here, but if you're interested in some practice, you can multiply both sides of the equation by `mu` to get an exact equation that can be easily solved.
</p>
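<p>
For the curious, here's a short SymPy sketch of the integrating-factor computation we just did (a sketch only - the by-hand method is what you'll need on a quiz):
</p>
<pre><code># Integrating factor for 3xy + y^2 + (x^2 + xy)y' = 0.
from sympy import symbols, diff, simplify, Function, Eq, dsolve

x, y = symbols('x y', positive=True)
M = 3*x*y + y**2
N = x**2 + x*y

ratio = simplify((diff(M, y) - diff(N, x)) / N)
print(ratio)   # 1/x - a function of x alone, so a mu(x) exists

# Solve d(mu)/dx = mu/x for the integrating factor.
mu = Function('mu')
print(dsolve(Eq(mu(x).diff(x), mu(x)/x)))   # mu(x) = C1*x; any single choice (e.g. C1 = 1) works
</code></pre>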
<h2>Substitution</h2>
<p>
There is a whole family of differential solving techniques hinging on the ability to turn a difficult equation into something simple. These are called <i>substitution methods</i>.
The general idea is that, given a differential equation in `y(x)`, you can come up with some function `v(y, x)` for which a substitution of `y` in terms of `v`
is simpler to solve than the original equation. Generally speaking, finding substitutions is hard; however, we have some known substitutions for specific situations that
can make things much simpler.
</p>
<p>
The first and simplest type of substitution is the <i>homogeneous</i> substitution. Homogeneous functions are identified by the identity `f(lambda x, lambda y) = lambda^k f(x, y)`:
in simple terms, a function is homogeneous if multiplying the arguments by some constant `lambda` has the same result as multiplying the function by the `k`th power of `lambda`.
`k` here is the <i>degree</i>. A <i>homogeneous differential equation</i> is simply a differential equation in the form `M + N frac {dy} {dx} = 0`, where M and N
are both homogeneous functions with the same `k`.
</p>
<p>
In the case of a homogeneous differential equation, we can simplify it with the substitution `u = frac y x` (equivalently `y = ux`)
to get something separable. I won't prove why this works here; however, it's really quite fascinating, and I highly recommend you
read the textbook or Paul's notes on the subject. Let's do an example. We're given an equation `frac {dy} {dx} = frac {x^2 - xy + y^2} {xy}`,
and we need to solve it as usual. This is not by itself solvable with any of the methods we've already learned. Let's make it homogeneous!
Rewriting as `- frac {x^2 - xy + y^2} {xy} + frac {dy} {dx} = 0` obviously yields a homogeneous differential equation with `M = - frac {x^2 - xy + y^2} {xy}`,
and `N = 1`. How do we know this is homogeneous? Substituting in `lambda x` and `lambda y` give us `N = lambda^0 1` and
`M = - frac {lambda^2 x^2 - lambda^2 xy + lambda^2 y^2} {lambda^2 xy} = - lambda^0 frac {x^2 - xy + y^2} {xy}`, so both M and N are homogeneous
functions with the same degree `k = 0`.
</p>
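<p>
The same degree check, done with SymPy (just a sketch, assuming you have SymPy around):
</p>
<pre><code># Confirm that M is homogeneous of degree 0: M(lam*x, lam*y) should equal M(x, y).
from sympy import symbols, simplify

x, y, lam = symbols('x y lam', positive=True)
M = -(x**2 - x*y + y**2) / (x*y)
scaled = M.subs({x: lam*x, y: lam*y})
print(simplify(scaled - M))   # 0, so the scaling factor is lam**0 = 1
</code></pre>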
<p>
Using the substitution `y = ux`, and by the product rule `frac {dy} {dx} = u + x frac {du} {dx}`, this equation becomes `- frac {x^2 - ux^2 + u^2x^2} {ux^2} + u + x frac {du} {dx} = 0`. This simplifies to
`- frac {1 - u + u^2} {u} + u + x frac {du} {dx} = 0`. One more step: `- frac {1 - u} {u} + x frac {du} {dx} = 0`. Starting to look familiar? That's right, this is
separable. Some algebra gives us `frac 1 x dx = frac {u} {1 - u} du`. Integrate to get `ln|x| + c = -u - ln|1 - u|`. Now we can resubstitute `u = frac y x`,
yielding `-frac y x - ln|1 - frac y x| = ln|x| + c`. We can make this much prettier with a bit of algebra: `frac y x + ln|x - y| = c`.
</p>
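<p>
It's worth verifying that answer: differentiate `frac y x + ln|x - y| = c` implicitly and you should recover the original equation. Here's a SymPy sketch of that check (dropping the absolute value, which doesn't change the derivative):
</p>
<pre><code># Verify the implicit solution y/x + ln(x - y) = c of dy/dx = (x^2 - xy + y^2)/(xy).
from sympy import symbols, log, diff, simplify

x, y = symbols('x y', positive=True)
F = y/x + log(x - y)
dydx = -diff(F, x) / diff(F, y)   # implicit differentiation: dy/dx = -F_x / F_y
target = (x**2 - x*y + y**2) / (x*y)
print(simplify(dydx - target))    # 0
</code></pre>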
<p>
Another special case of substitution is a <i>Bernoulli</i> equation. This is a differential equation in the form `frac {dy} {dt} + q(t)y = r(t)y^n`, where `n` is any real number.
In these cases, you first divide the entire equation by `y^n`, yielding `frac 1 {y^n} frac {dy} {dt} + frac 1 {y^{n-1}} q(t) = r(t)`, then use the substitution `u = y^(1 - n)`
to solve. Once again, the proof here is fascinating, and once again, I'm not going to go through it; <i>read the textbook</i>.
</p>
<p>
Let's do an example. Given a differential equation `frac {dy} {dt} + y = y^3`, solve for `y`. This is Bernoulli `n=3`, so our first step is the division:
`frac 1 {y^3} frac {dy} {dt} + frac {1} {y^2} = 1`. The substitution `u = y^(1 - n)` gives us `u = y^{-2}`, and `frac {du} {dt} = -2y^{-3} frac {dy} {dt}`
(why is `frac {dy} {dt}` here? It's because we find `frac {du} {dt}` with chain rule). We substitute this into the equation to get
`- frac 1 2 frac {du} {dt} + u = 1`, which can be algebra'd to get `frac {du} {dt} = 2(u - 1)`. A-ha! Separable! The result: `ln|u - 1| = 2t + C`.
Exponentiate and substitute back in to get `y^{-2} = Ce^{2t} + 1`. Finally, do some algebra to get `y = +- sqrt(frac 1 {Ce^{2t} + 1})`. Not too hard!
</p>
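<p>
As always, you can (and should) check by plugging back in. A quick SymPy sketch of that check (sketch only; for simplicity I'm treating `C` as an arbitrary positive constant):
</p>
<pre><code># Verify that y = 1/sqrt(C*e^(2t) + 1) satisfies dy/dt + y = y^3.
from sympy import symbols, sqrt, exp, diff, simplify

t, C = symbols('t C', positive=True)
y = 1 / sqrt(C*exp(2*t) + 1)
print(simplify(diff(y, t) + y - y**3))   # 0
</code></pre>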
<h2>Some Preview: Systems of Linear Differential Equations</h2>
<p>
Next week we'll be diving into systems of linear differential equations (we already covered some of them in Friday's lecture). Just like systems of linear equations in algebra, systems of linear <i>differential</i>
equations consist of several different interdependent differential equations: i.e., you can't solve one without knowing the solutions
to the others. These systems of linear ODEs can be <i>autonomous</i>, just like any other ODE; they can be graphed in a wide variety of ways,
and they can be solved analytically using exponentials.
</p>
<p>
One of the most useful properties of SLDEs (Systems of Linear Differential Equations) is that they can be written in a matrix form. Generally, given a system
`frac {dx} {dt} = a_1x + b_1y + c_1`, `frac {dy} {dt} = a_2x + b_2y + c_2`, you can rewrite as `\[x', y'] = \[\[a_1, b_1], \[a_2, b_2]] cdot \[x, y] + \[c_1, c_2]`.
This is important! In the case where `c = \[c_1, c_2] = 0`, this system is considered to be <i>homogeneous</i>, and you can trivially solve it simply by
finding the eigenvalues and eigenvectors of the constant matrix. Given two eigenvalues `lambda_1`, `lambda_2` with corresponding eigenvectors
`hat v_1`, `hat v_2`, the general solution is simply the <i>linear combination</i> `\[x, y] = d_1 e^{lambda_1 t} hat v_1 + d_2 e^{lambda_2 t} hat v_2`.
</p>
<p>
Let's do a quick example. Given the equations `frac {dx} {dt} = -3x + y` and `frac {dy} {dt} = -y`, we can write the matrix form
`hat x' = \[\[-3, 1], \[0, -1]] hat x`, where `hat x = \[x, y]`. The eigenvalues of this are easily found to be -3 and -1, with corresponding
eigenvectors `\[1, 0]` and `\[1, 2]`. This means the general solution is the linear combination `\[x, y] = d_1 \[1, 0] e^{-3t} + d_2 \[1, 2] e^{-t}`.
</p>
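<p>
If you want to check an eigenvalue computation like this by machine, SymPy's Matrix class will do it exactly (a sketch, assuming SymPy; it may scale the eigenvectors differently than we did, and any nonzero multiple is fine):
</p>
<pre><code># Eigenvalues and eigenvectors of the coefficient matrix (rows: (-3, 1) and (0, -1)).
from sympy import Matrix

A = Matrix(2, 2, (-3, 1, 0, -1))
print(A.eigenvects())
# Eigenvalue -3 with an eigenvector along (1, 0), eigenvalue -1 with an eigenvector along (1, 2).
</code></pre>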
<p>
SLDEs are fascinating, and we have quite a lot more ground to cover. I'll leave that for next week.
</p>
<h2>Final Notes</h2>
<p>
The first two homeworks are due tomorrow. If you haven't done them, make sure to finish them ASAP! They're not very hard. This week is going to be pretty quiet,
but <i>next</i> Thursday (June 3rd) we have a quiz in studio, and then the Monday immediately following is our first midterm. Watch this space for
review material pertaining to both!
</p>
<p>
This week has been proof-heavy. I usually omit the more complicated proofs (see: all of them) to keep these posts concise rather than thorough, but if anyone would like that
to change, shoot me an email!
</p>
<p>
I think that's everything. See you next weekend!
</p>
[/]
[=author "Tyler Clarke"]
[=date "2025-5-25"]
[=subject "Calculus"]
[=title "Differential Equations Week 2"]
[#post.html]