diff --git a/site/posts/differential-week-10.html b/site/posts/differential-week-10.html new file mode 100644 index 0000000..3865e1b --- /dev/null +++ b/site/posts/differential-week-10.html @@ -0,0 +1,71 @@ +[!] +[=post-] +
+ We're getting pretty close to the end! The final exam, on the 31st, is less than two weeks away. Light at the end of the tunnel... +
++ Euler's method is by far the simplest way to numerically approximate the solution of any first-order differential equation in the form `frac {dy} {dx} = f(x, y)`. It has the added benefit that it scales to essentially arbitrary accuracy, and can be performed trivially by computers. The idea is very simple: pick a value `h` to be your step size (smaller = more accurate), then repeatedly step forward across your target range, adding `h` times the current derivative at each step. Voilà, the solution! Mathematically: given `y(x)`, `y(x + h) = y(x) + h frac {dy} {dx} (x, y)` within some margin of error. This lets you find the value of `y` at any arbitrary `x` given an initial value and a differential equation, without ever solving the differential equation! +
++ To keep track of the process, Euler's method is usually used in a table like so: +
+| `n` | `x_n` | `y_n` | `frac {dy} {dx} (x_n, y_n)` | +
| `0` | `x_0` | `y_0` | `frac {dy} {dx} (x_0, y_0)` | +
| `1` | `x_0 + h` | `y_0 + h frac {dy} {dx} (x_0, y_0)` | `frac {dy} {dx} (x_1, y_1)` | +
+ A new row is added for each step: `x` increases by `h`, and the new `y` comes from the previous row via `y_{n + 1} = y_n + h frac {dy} {dx} (x_n, y_n)`. Filling this in by hand is tedious and not particularly interesting, so I'm not going to do a worked example here. +
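+ That repetition is exactly why computers handle this so well. Here's a minimal Python sketch of the same process; the function name and the sample equation `frac {dy} {dx} = y` are just placeholders I picked for illustration. +

```python
def euler(f, x0, y0, h, steps):
    """Forward Euler: walk from (x0, y0) by repeatedly adding h * dy/dx."""
    x, y = x0, y0
    rows = [(0, x, y)]
    for n in range(1, steps + 1):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h * f(x_n, y_n)
        x = x + h             # x_{n+1} = x_n + h
        rows.append((n, x, y))
    return rows

# Example: dy/dx = y with y(0) = 1, whose exact solution is e^x.
for n, x, y in euler(lambda x, y: y, 0.0, 1.0, 0.1, 10):
    print(f"{n:2d}  x={x:.1f}  y={y:.6f}")
```

+ With `h = 0.1`, ten steps from `y(0) = 1` land at about 2.594 at `x = 1`, versus the exact `e ~~ 2.718`; shrinking `h` closes the gap. +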
++ Euler's method has two variants: the explicit, or forward, method we've covered, and the implicit, or backward, method. The implicit method is conceptually and practically more difficult, but is still useful: the explicit method will usually overestimate for a curve opening down, while the implicit method will underestimate. The implicit method is also relatively stable, even over many steps (large `n`). +
++ As an example, take the logistic equation `frac {dy} {dt} = y (1 - frac {y} {Y_0})`. Implicit Euler finds `y_{n + 1}` by solving `y_{n + 1} - h y_{n + 1} (1 - frac {y_{n + 1}} {Y_0}) - y_n = 0` for `y_{n + 1}`, which is inconvenient: the unknown shows up on both sides of the update. Note that, if we let `y_{n + 1} = q` and expand, we get `q^2 frac {h} {Y_0} + (1 - h)q - y_n = 0`, which is a solvable quadratic. +
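+ As a quick sanity check that the quadratic trick works, here's a rough Python sketch of backward Euler for that logistic equation; the carrying capacity, step size, and starting value are arbitrary picks for illustration, not anything from class. +

```python
import math

def implicit_euler_logistic(y0, Y0, h, steps):
    """Backward Euler for dy/dt = y * (1 - y / Y0).
    Each step solves (h/Y0) q^2 + (1 - h) q - y_n = 0 for q = y_{n+1}."""
    y = y0
    values = [y]
    a = h / Y0
    b = 1 - h
    for _ in range(steps):
        c = -y
        # Take the positive root of the quadratic; for y > 0 it's the
        # meaningful branch (the other root is negative).
        y = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
        values.append(y)
    return values

print(implicit_euler_logistic(y0=1.0, Y0=10.0, h=0.5, steps=20))
```

+ Each step keeps the positive root, since `y` should stay positive, and the values settle toward `Y_0` as expected. +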
++ Luckily, 8.2 won't be on the exam. +
++ Euler goes further! We already know that, given `frac {dy} {dt} = f(t, y)`, `y_{n + 1} = y_n + h f(t_n, y_n)`: this is actually a specific case of `y_{n + 1} = y_n + int_{t_n}^{t_{n + 1}} f(t, y(t)) dt`, using the approximation `f(t, y(t)) = f(t_n, y_n)` across the whole step. The integral can't be evaluated exactly without already knowing `y(t)`, so some approximation is unavoidable, but holding the integrand constant at `f(t_n, y_n)` is only one of several possibilities, and it consistently overshoots or undershoots any curve that isn't a straight line. A better way is to assume that the average derivative over a step is close to the average of the derivative at the start and at the end: `frac {f(t_n, y_n) + f(t_{n+1}, y_{n+1})} 2`. Plugging this in and integrating yields the improved `y_{n + 1} = y_n + h frac {f(t_n, y_n) + f(t_{n + 1}, y_{n + 1})} 2`. +
++ There's one problem: we have to know `y_{n + 1}` to find `y_{n + 1}`. This isn't actually insurmountable if the functions involved are simple enough, but it's much easier to just assume that `y_{n + 1} = y_n + h f(t_n, y_n)`. Note that this is just the plain Euler step! Substituting it in gives us `y_{n + 1} = y_n + h frac {f(t_n, y_n) + f(t_{n + 1}, y_n + h f(t_n, y_n))} 2`. This improved Euler formula is known as Heun's formula. +
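+ In code, the only change from plain Euler is the averaged slope. Here's a sketch in the same placeholder style as before: +

```python
def heun(f, t0, y0, h, steps):
    """Improved Euler (Heun): predict with a plain Euler step, then
    correct using the average of the slopes at both ends of the step."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(steps):
        k1 = f(t, y)               # slope at the start of the step
        k2 = f(t + h, y + h * k1)  # slope at the predicted endpoint
        y = y + h * (k1 + k2) / 2  # average the two slopes
        t = t + h
        values.append((t, y))
    return values

# Same test problem as before: dy/dt = y, y(0) = 1, exact answer y(1) = e.
print(heun(lambda t, y: y, 0.0, 1.0, 0.1, 10)[-1])
```

+ On that test problem this lands around 2.714 at `t = 1`, much closer to `e ~~ 2.718` than plain Euler's 2.594 with the same step size. +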
++ Euler's method and its improved variant are considered part of the Runge-Kutta class of techniques for approximating solutions to differential equations. I'm not going to get into generalizing the Runge-Kutta methods here; it's interesting, but not strictly relevant. +
+[/] +[=title "Differential Equations Week 10"] +[=author "Tyler Clarke"] +[=date "2025-7-21"] +[=subject "Calculus"] +[#post.html] \ No newline at end of file