site/posts/differential-week-10.html (new file, 71 lines)
@@ -0,0 +1,71 @@
[!]
[=post-]
<p>
We're getting pretty close to the end! The final exam, on the 31st, is less than two weeks away. Light at the end of the tunnel...
</p>
<h2>Euler's Method</h2>
<p>
This is by far the simplest way to numerically solve any first-order differential equation of the form `frac {dy} {dx} = f(x, y)`. It has the added benefit that it scales to
essentially arbitrary accuracy, and can be performed trivially by a computer. The idea is <i>very</i> simple: pick a value `h` to be your step size (smaller = more accurate),
then repeatedly add `h` times the slope given by the differential equation, one step at a time, across your target range. Voilà, the solution! Mathematically: given `y(x)`, `y(x + h) = y(x) + h frac {dy} {dx} (x, y)` within
some margin of error. This allows you to find the value of `y` at any arbitrary `x` given an initial value and a differential equation, <i>without solving the differential equation</i>!
</p>
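<p>
To see the update in action, here's a minimal Python sketch of forward Euler. The test equation `frac {dy} {dx} = x + y` with `y(0) = 1`, and the step size, are placeholders I picked purely for illustration.
</p>
<pre><code># Forward Euler: repeatedly apply y(x + h) = y(x) + h * f(x, y).
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)  # nudge y along the current slope
        x = x + h
    return x, y

# Placeholder example: dy/dx = x + y, y(0) = 1, approximate y(1) with h = 0.01.
# Exact solution is y = 2e^x - x - 1, so y(1) = 2e - 2, roughly 3.4366.
print(euler(lambda x, y: x + y, 0.0, 1.0, 0.01, 100))
</code></pre>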
<p>
To keep track of the process, Euler's method is usually used in a table like so:
</p>
<table>
<thead>
<tr>
<td>`n`</td>
<td>`x`</td>
<td>`y`</td>
<td>`frac {dy} {dx}`</td>
</tr>
</thead>
<tbody>
<tr>
<td>`0`</td>
<td>`x_0`</td>
<td>`y_0`</td>
<td>`f(x_0, y_0)`</td>
</tr>
<tr>
<td>`1`</td>
<td>`x_0 + h`</td>
<td>`y_0 + h f(x_0, y_0)`</td>
<td>`f(x_1, y_1)`</td>
</tr>
</tbody>
</table>
<p>
Add a row like that for each step: each new `y` is the previous row's `y` plus `h` times the previous row's slope. Filling this in by hand is tedious and not particularly
interesting, so I'm not going to work an example out on paper; the short code sketch below does the bookkeeping instead.
</p>
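<p>
For what it's worth, a few lines of Python can fill the table automatically; the equation `frac {dy} {dx} = x + y`, the initial condition, and the step size here are again placeholders.
</p>
<pre><code># Print the Euler's-method table: n, x_n, y_n, and the slope at (x_n, y_n).
def euler_table(f, x0, y0, h, steps):
    x, y = x0, y0
    print("n", "x", "y", "dy/dx", sep="\t")
    for n in range(steps + 1):
        slope = f(x, y)
        print(n, round(x, 4), round(y, 4), round(slope, 4), sep="\t")
        y = y + h * slope  # next row: previous y plus h times the previous slope
        x = x + h

euler_table(lambda x, y: x + y, 0.0, 1.0, 0.1, 5)
</code></pre>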
<p>
Euler's method has two variants: the <i>explicit</i>, or forward, method we've covered, and the <i>implicit</i>, or backward, method. The implicit method is conceptually
and practically more difficult, but is still useful: the explicit method will usually <i>over</i>estimate for a curve opening down, while the implicit method will <i>under</i>estimate.
The implicit method is also relatively stable, even over a large number of steps `n`.
</p>
<p>
Take the logistic equation `frac {dy} {dt} = y (1 - frac {y} {Y_0})` as an example. Implicit Euler finds `y_{n + 1}` by solving the equation `y_{n + 1} - h y_{n + 1} (1 - frac {y_{n + 1}} {Y_0}) - y_n = 0`. This is extremely inconvenient. Note that, if we let
`y_{n + 1} = q` and expand, we get `q^2 frac {h} {Y_0} + (1 - h)q - y_n = 0`, which is a solvable quadratic.
</p>
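<p>
Here's a minimal Python sketch of that quadratic trick; the carrying capacity `Y_0`, the initial value, and the step size are made-up numbers for illustration.
</p>
<pre><code>import math

# Implicit (backward) Euler for the logistic equation dy/dt = y * (1 - y / Y0).
# Each step solves (h / Y0) * q^2 + (1 - h) * q - y_n = 0 for q = y_{n+1},
# keeping the positive root of the quadratic.
def implicit_euler_logistic(y0, Y0, h, steps):
    y = y0
    for _ in range(steps):
        a = h / Y0
        b = 1.0 - h
        c = -y
        y = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return y

# Placeholder example: Y0 = 10, y(0) = 1, step out to t = 5 with h = 0.1.
print(implicit_euler_logistic(1.0, 10.0, 0.1, 50))
</code></pre>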
<p>
Luckily, 8.2 won't be on the exam.
</p>
<h2>Improved Euler and Runge-Kutta</h2>
<p>
Euler goes further! We already know that, given `frac {dy} {dt} = f(t, y)`, `y_{n + 1} = y_n + h f(t_n, y_n)`: this is actually a specific case of `y_{n + 1} = y_n + int_{t_n}^{t_{n + 1}} f(t, y(t)) dt`,
using the approximation `f(t, y(t)) = f(t_n, y_n)`.
This integral equation can't be evaluated without some approximation, but `f(t_n, y_n)` is just one of several possibilities, and it consistently overshoots or undershoots any solution curve that isn't a straight line!
A better approach is to assume that the average slope over a step is close to the average of the slopes at the start and at the end: `frac {f(t_n, y_n) + f(t_{n + 1}, y_{n + 1})} 2`.
Plugging that in and integrating yields the improved `y_{n + 1} = y_n + h frac {f(t_n, y_n) + f(t_{n + 1}, y_{n + 1})} 2`.
</p>
<p>
There's one problem: we have to know `y_{n + 1}` to find `y_{n + 1}`. This isn't actually insurmountable if the functions involved are simple enough, but
it's much easier to just assume that `y_{n + 1} = y_n + h f(t_n, y_n)`. Note that this is just Euler's formula! Substituting it in gives us
`y_{n + 1} = y_n + h frac {f(t_n, y_n) + f(t_{n + 1}, y_n + h f(t_n, y_n))} 2`. This improved Euler formula is called <i>Heun's formula</i>.
</p>
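<p>
And a quick Python sketch of Heun's formula, using the same placeholder test equation as the earlier snippets:
</p>
<pre><code># Heun's method (improved Euler): average the slope at the start of the step
# with the slope at the plain-Euler prediction of the endpoint.
def heun(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)              # slope at the start of the step
        y_pred = y + h * k1       # plain Euler guess for y_{n+1}
        k2 = f(t + h, y_pred)     # slope at the predicted endpoint
        y = y + h * (k1 + k2) / 2
        t = t + h
    return t, y

# Placeholder example: dy/dt = t + y, y(0) = 1, approximate y(1) with h = 0.1.
# Exact value is 2e - 2, roughly 3.4366; Heun should land noticeably closer than plain Euler at the same h.
print(heun(lambda t, y: t + y, 0.0, 1.0, 0.1, 10))
</code></pre>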
<p>
Euler's method and its improved variant are considered part of the <i>Runge-Kutta</i> class of techniques for approximating the solution of a differential equation.
I'm not going to get into generalizing the Runge-Kutta methods here; it's interesting, but not strictly relevant.
</p>
[/]
[=title "Differential Equations Week 10"]
[=author "Tyler Clarke"]
[=date "2025-7-21"]
[=subject "Calculus"]
[#post.html]