site/posts/differential-exam-1.html (new file)
@@ -0,0 +1,223 @@
[!]
[=post-]
<p>
Hello once more, Internet! Our little Diffy Q adventuring party has passed the first two random encounters, and Rivendell is nearly in sight - but a ferocious
band of orcs lies in the way: our first midterm rapidly approaches. It's going to be on June 9th in the normal lecture room at the normal time; don't be late!
</p>
<p>
There's quite a bit of material (everything we've learned so far!), and I can't possibly hope to cover everything in exhaustive detail, but this review
should go through all of the major points and at least give you some idea what you need to study further. Note that there won't be any material from
this week; the highest section on the exam is 3.5.
</p>
<p>
<b>Note:</b> the sample assessments are <i>not</i> sufficient to study! They contain some important omissions (2.7 especially) for which we'll have to turn
to the textbook and worksheets.
</p>
<p>
This is going to be structured a bit differently from the quiz reviews: major sections will be contiguous spans of densely related content, and there will be at least two or three
questions covered in each major section. I'll boldface the question sources for some semblance of readability.
</p>
<h2>2.1 -> 2.3: Simple Solutions and Modeling</h2>
<p>
Note: while some material from chapter 1 is on the exam, it's frankly too basic to merit inclusion here.
</p>
<p>
This first chunk of chapter 2 talks primarily about the basic ways to solve ODEs: the variable-separable method and the linear integrating factor
method. In brief, the variable-separable method allows you to quickly solve any equation that can be rewritten in the form `f(y) dy = q(t) dt`, simply by integrating
both sides. Linear integrating factors allow you to quickly solve any equation that can be rewritten in the form `y' + p(t) y = q(t)` by
taking `mu = e^{int p(t) dt}` and rewriting as `frac d {dt} (mu y) = mu q(t)`. Integrating both sides with respect to `t` will get you an implicit solution with
little trouble. This chunk also introduces modeling with differential equations: for instance, analyzing fluid mixing.
</p>
<p>
Let's do some problems. <b>WS2.1.3</b> isn't very hard, but nicely illustrates an important concept in this course: exponent separation. We're given `frac {dy} {dt} = e^{y + t}, y(0) = 0`:
if you remember exponent laws, you know this can be separated to get `frac {dy} {dt} = e^y e^t`, which can then be rearranged to get `e^{-y} dy = e^t dt`.
Integrating both sides gives us a relatively trivial solution `-e^{-y} = e^t + C`. We can actually find `C` because we have an initial value `y(0) = 0`:
substituting in `0` for both `y` and `t` gives us the equation `-e^0 = e^0 + C`, or `-1 = 1 + C`, or `C = -2`. Hence, our final answer is `-e^{-y} = e^t - 2`.
</p>
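<p>
If you don't trust the algebra, this is easy to machine-check. Here's a minimal sympy sketch (assuming you have sympy installed - the output formatting may differ by version, but it should be equivalent to `-e^{-y} = e^t - 2`):
</p>
<pre><code>import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Solve y' = e^(y + t) with y(0) = 0
sol = sp.dsolve(sp.Eq(y(t).diff(t), sp.exp(y(t) + t)), y(t), ics={y(0): 0})
print(sol)  # equivalent to y = -ln(2 - e^t), i.e. -e^(-y) = e^t - 2
</code></pre>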
<p>
<b>WS2.2.3</b> asks us to solve an initial-value problem: `t frac {dy} {dt} + (t + 1)y = t, y(ln 2) = 1` for `t > 0`. This equation is an LDE (Linear Differential Equation):
if we divide the whole thing by `t`, we get `frac {dy} {dt} + frac {t + 1} {t} y = 1`, which is in exactly the right form with `p(t) = frac {t + 1} t` and
`q(t) = 1`. The integrating factor is `mu = e^{int frac {t + 1} t dt} = e^{t + ln|t|}` (we can drop the constant of integration, since any one antiderivative works),
which exponent rules let us rewrite as `te^t` for `t > 0`. Putting this into the normal IF form gives us `frac d {dt} (y t e^t) = te^t`. Integrating both sides with respect to `t` gives us
`y t e^t = te^t - e^t + C`. We can do a little algebra to get `y t = t - 1 + Ce^{-t}`. Finally, we can substitute the initial value to get
`ln 2 = ln 2 - 1 + Ce^{-ln 2}`, and do some algebra to get `C = 2`. Hence: `y t = t - 1 + 2e^{-t}`.
</p>
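<p>
The same sympy trick verifies this one - a sketch, with the usual caveat that the answer may come back in an algebraically equivalent shape:
</p>
<pre><code>import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# Solve t*y' + (t + 1)*y = t with y(ln 2) = 1
ode = sp.Eq(t * y(t).diff(t) + (t + 1) * y(t), t)
sol = sp.dsolve(ode, y(t), ics={y(sp.log(2)): 1})
print(sp.simplify(sol.rhs * t))  # expect t - 1 + 2*exp(-t)
</code></pre>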
<p>
We can use these basic techniques to solve some real-world problems. For instance, <b>WS2.3.3</b> gives us a relationship `frac {dp} {dt} = rp` describing
the behavior of a population of bacteria, where `p` is population, `t` is time, and `r` is an unknown rate constant. We need to determine the rate
constant knowing that `p` doubles in 12 days, and solve with an initial value `p(0) = 200` to find the population at 18 days.
</p>
<p>
Step one is, of course, to solve. Fortunately, it's variable-separable: we can rewrite as `frac 1 p dp = r dt`, and integrate to get `ln|p| = rt + C`.
Raising `e` to both sides gives us `p = Ce^{rt}` - because `C` is an arbitrary constant, we can move it out safely. The first part of the problem is tricky:
let's say `p_0` is our unknown starting population, `t_0` is our unknown starting time, and `p_0 = Ce^{rt_0}`. The constraint is that
`2p_0 = Ce^{rt_0 + 12r}`, which separates out nicely to `2p_0 = Ce^{rt_0}e^{12r}`. Because we know `p_0 = Ce^{rt_0}`, we can divide that
from both sides to get `2 = e^{12r}`, which solves to `r = frac {ln(2)} {12}`. Knowing this, we can substitute for the IVP:
`200 = Ce^{frac {ln(2)} {12} * 0}`, which reduces <i>really nicely</i> to `C = 200`. That's the last unknown! Now we can
find the population at `t=18`: `p(18) = 200e^{frac {ln(2)} {12} 18}`, `p(18) = 200 * 2^{frac {3} {2}}`, or `p(18) = 400sqrt(2)`.
</p>
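<p>
This one is cheap to sanity-check numerically - a quick sketch:
</p>
<pre><code>import math

r = math.log(2) / 12          # rate constant from the 12-day doubling time
p0 = 200                      # initial population
print(p0 * math.exp(r * 18))  # 565.685..., the population at 18 days
print(400 * math.sqrt(2))     # same number, matching 400*sqrt(2)
</code></pre>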
<p>
If you want a much deeper Deadly Boring look at the concepts discussed here, check out the first few Differential Equations posts under Calculus.
</p>
<h2>Briefly: 2.4</h2>
<p>
This one kinda stands alone. The main topic is existence and uniqueness. This is simplest for an LDE: given an equation `frac {dy} {dt} + p(t)y = q(t)`,
there is guaranteed to be a unique solution on any interval containing the initial point where `p(t)` and `q(t)` are continuous. For nonlinear equations, it's only possible to find the
<i>exact</i> interval of existence by solving them, but you can still check whether a unique solution is guaranteed to exist: for a nonlinear ODE
`y' = f(t, y)`, both `f` and the partial derivative `frac {del f} {del y}` must be continuous at the initial point. Using the regions where those are continuous, you can also find the maximum possible bounds
for the actual interval of existence: the interval of existence of your IVP is guaranteed to fall somewhere within those bounds, although it's not possible
to say <i>where</i> without solving.
</p>
<p>
<b>WS2.4.1 (b)</b> asks us to find the interval of existence of `y' + tan(t) y = sin(t), y(pi) = 0`. This is clearly an LDE with `p(t) = tan(t)` and `q(t) = sin(t)`:
because `sin(t)` is continuous for all `t`, we can ignore it; `tan(t)` is only defined when `cos(t)` is nonzero. The zeros of `cos(t)` nearest our initial point `t = pi` are `t = frac {pi} 2` and
`t = frac {3pi} 2`, so a unique solution is guaranteed between them: hence, our interval of existence is `frac {pi} 2 < t < frac {3pi} 2`. Note that the solution
<i>might</i> exist elsewhere, even at `frac {pi} 2` and `frac {3pi} 2`, but we can't make any guarantees about that.
</p>
<p>
<b>SA17</b> asks us to determine the interval of existence of `frac {dy} {dt} = e^{y + t}, y(0) = 0`. This is not linear, so the theorem won't hand us an exact
interval of existence. If we let `f = e^{y + t}`, then `frac {del f} {del y} = e^{y + t}` (note that `f` is its own derivative!). Both are continuous for every possible value of
`t` and `y`, so the theorem can't get any more specific than `-oo < t < oo`. We can get a better answer by variable-separating and solving: `e^{-y} dy = e^t dt`, so
`-e^{-y} = e^t + C`, so `y = -ln(C - e^t)` (no, I did not miss a sign here - because `C` is an arbitrary constant, it eats the negative). `ln(x)` is only defined
for `x > 0`, so this is only defined for `C - e^t > 0`. Because this is an IVP, we can actually find a solution for `C`: `0 = -ln(C - 1)`, or `1 = C - 1`, `C = 2`.
Thus, `2 - e^t > 0`, `e^t < 2`, `t < ln(2)`. Our exact interval of existence, then, is `-oo < t < ln(2)`.
</p>
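<p>
A short sympy sketch confirms both that our solution works and that it blows up exactly at `t = ln(2)`:
</p>
<pre><code>import sympy as sp

t = sp.symbols('t')
y = -sp.log(2 - sp.exp(t))  # the explicit solution from above

print(sp.simplify(sp.diff(y, t) - sp.exp(y + t)))  # 0, so the ODE is satisfied
print(y.subs(t, 0))                                # 0, so the initial value holds
print(sp.solve(sp.Eq(2 - sp.exp(t), 0), t))        # [log(2)], where the log's argument dies
</code></pre>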
<h2>2.7: Substitutions</h2>
<p>
As there is no worksheet 2.5 from which to draw questions, I've decided to <b>skip it</b> here. This doesn't mean it won't be on the test; make sure to learn the material!
2.6 is unfortunately not included on the test, which is a shame, because it's <i>really cool</i>. 2.7 is, as the title suggests, mostly concerned with substitutions:
essentially, some types of equations can be made much simpler by defining some invertible function `v(x, y)` and finding `y(v, x)` to substitute into the equation.
It sounds fancy and complicated, but it's really quite simple. The simplest substitutions are <i>homogeneous</i> substitutions, where `v = frac y x`, and <i>Bernoulli</i>
substitutions, which are a bit more situational.
</p>
<p>
<b>WS2.7.1</b> is a nice example of homogeneous substitution. Given a differential equation in the form
`f(x, y) + g(x, y) frac {dy} {dx} = 0` where `f` and `g` are both homogeneous of the same degree (in brief:
a function is homogeneous of degree `n` if `f(lambda x, lambda y) = lambda^n f(x, y)`), we define the substitution `v = frac y x`, and accordingly
`y = vx`. We need to solve `frac {dy} {dx} = frac {x + 3y} {3x + y}`, which is homogeneous of degree 1 (if you don't believe me, do the algebra! It can be rewritten
in homogeneous form pretty easily.)
Substituting `vx` for `y` gives us `frac {dy} {dx} = frac {x + 3vx} {3x + vx}`. We still have a derivative of `y` on the left - fortunately, we can do some magic
to find that `frac {dy} {dx} = v + x frac {dv} {dx}` by the product rule, and substitute this: `v + x frac {dv} {dx} = frac {x + 3vx} {3x + vx}`.
We can simplify to `v + x frac {dv} {dx} = frac {3v + 1} {3 + v}`. We have to perform a bit of legerdemain to turn this into something that can be solved:
</p>
<ol>
<li>`x frac {dv} {dx} = frac {3v + 1} {3 + v} - v`</li>
<li>`x frac {dv} {dx} = frac {3v + 1 - 3v - v^2} {3 + v}`</li>
<li>`x frac {dv} {dx} = frac {1 - v^2} {3 + v}`</li>
<li>`x frac {dv} {dx} = frac {(1 - v)(1 + v)} {3 + v}`</li>
<li>`frac {3 + v} {(1 - v)(1 + v)} frac {dv} {dx} = frac 1 x`</li>
<li>`frac {3 + v} {(1 - v)(1 + v)} dv = frac 1 x dx`</li>
</ol>
<p>
We variable-separated! Integrating requires some partial fraction decomposition. If you aren't familiar with PFD, you should practice it;
I'm not going to cover it here. This becomes `(frac 1 {1 + v} + frac 2 {1 - v}) dv = frac 1 x dx`. We can integrate this to get
`ln|1 + v| - 2ln|1 - v| = ln|x| + C`. Raising `e` to both sides and simplifying gives us `Cx = frac {1 + v} {(1 - v)^2}`. Finally, we
resubstitute: `Cx = frac {1 + frac y x} {(1 - frac y x)^2}`. This is only an implicit solution, but it's Good Enough ™; I really, really
don't want to find an explicit solution.
</p>
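<p>
The partial fraction decomposition is exactly the kind of step worth checking mechanically. A sympy sketch (sympy writes logs without absolute-value bars, so the output differs cosmetically from ours):
</p>
<pre><code>import sympy as sp

v = sp.symbols('v')
expr = (3 + v) / ((1 - v) * (1 + v))

print(sp.apart(expr, v))      # 1/(v + 1) - 2/(v - 1), i.e. 1/(1+v) + 2/(1-v)
print(sp.integrate(expr, v))  # log(v + 1) - 2*log(v - 1), up to log conventions
</code></pre>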
<p>
There is another type of substitution we'll encounter: the Bernoulli substitution. A Bernoulli differential equation is any differential equation
in the form `frac {dy} {dx} + p(x)y = q(x)y^n` (note that LDEs are a special case of this when `n=0`). In these cases, you have to first divide
the entire equation by `y^n`, then use the substitution `v = y^{1 - n}` to simplify. <b>WS2.7.2</b> contains a nice example of this:
we're given `frac {dy} {dt} - frac y t = - frac {y^2} {t^2}` (for `t > 0`). In this case, `p(t)` is clearly `-frac 1 t`, and `q(t)` is clearly `- frac {1} {t^2}`.
`n` is `2`. Dividing the whole equation by `-y^2` yields `-y^{-2} frac {dy} {dt} + frac 1 {ty} = frac {1} {t^2}`. We use the substitution `v = y^{-1}`,
which conveniently gives us `frac {dv} {dt} = - frac {dy} {dt} y^{-2}`. <i>Ain't that convenient?</i>
</p>
<p>
Plugging in these substitutions gives us the very nice `frac {dv} {dt} + frac v t = frac {1} {t^2}`. It's linear, folks! I won't bore you with the details
of the linear solution; our answer is `v = frac {ln(Ct)} t`. Resubstituting gives us `y = frac t {ln(Ct)}` (note that I simplified).
</p>
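<p>
sympy knows the Bernoulli trick too, so the whole problem is verifiable in a couple of lines (a sketch; the constant will be named `C1` rather than folded into the log):
</p>
<pre><code>import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t) - y(t)/t, -y(t)**2 / t**2)
print(sp.dsolve(ode, y(t)))  # expect y(t) = t/(C1 + log(t)), matching t/ln(Ct)
</code></pre>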
<h2>3.2 -> 3.3: Systems of Linear Differential Equations</h2>
<p>
SLDEs are an extension of the linear systems of equations everyone who's taken 1554 sees in their nightmares. The idea is simple: not unlike the familiar matrix
equation `A vec x = vec b` where `A` is a transformation matrix, we have `vec x' = A vec x`. The derivative of the vector equals some matrix times
the vector. This has a number of interesting implications. The general way to solve these is by eigenvalues: for a system `vec X' = A vec X`, where
`A` has real and distinct eigenvalues `lambda_1` and `lambda_2` with corresponding eigenvectors `vec v_1` and `vec v_2`, our general solution is
`vec X = c_1 e^{lambda_1 t} vec v_1 + c_2 e^{lambda_2 t} vec v_2`, where `c_1` and `c_2` are arbitrary constant multipliers.
</p>
<p>
SLDEs have many useful properties: they can be rewritten as matrices, of course, but they can also be used to represent a higher-order equation
as a system of lower-order equations with a nice substitution.
</p>
<p>
<b>WS3.2.1 (a)</b> is a pretty nice classification problem. We're asked to write the system `x' = -x + ty, y' = tx - y` in matrix form, and classify it
as homogeneous and/or autonomous. To do this, we let `vec X = \[x, y]`, and thus `vec X' = \[x', y']`, and rewrite as `vec X' = \[\[-1, t], \[t, -1]] vec X`.
If you aren't convinced, do the multiplication - you'll get back the original system. This is clearly <i>non-autonomous</i> because there is a `t` term
on the right side; it is also clearly <i>homogeneous</i> because there is no constant term.
</p>
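<p>
If you'd rather make a computer do the multiplication, here's a sketch:
</p>
<pre><code>import sympy as sp

t, x, y = sp.symbols('t x y')

A = sp.Matrix([[-1, t], [t, -1]])
X = sp.Matrix([x, y])
print(A * X)  # Matrix([[-x + t*y], [t*x - y]]) - the original right-hand sides
</code></pre>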
<p>
<b>WS3.2.2</b> provides a pretty good example of rewriting higher-order equations via a substitution. We're asked to write `u'' - 2u' + u = sin(t)` as an
SLDE. In this case, we use the substitution `y = u'` and `x = u`, so `y' = u''` and `x' = u'`. Note also that `y = x'`. We can substitute these to get
`y' - 2y + x = sin(t)`. This is first-order! It's also incomplete - we don't know anything about `x` and `y`. We want to do entirely without `u`, so we
just apply the constraint `y = x'`: now it's a complete system. We can even write it in matrix form: algebra gives us `x' = y, y' = 2y - x + sin(t)`,
so if we let `vec X = \[x, y]`, we have `vec X' = \[\[0, 1], \[-1, 2]] vec X + \[0, sin(t)]`. Note that the forcing term `\[0, sin(t)]` means this is
nonhomogeneous, and because it contains `t`, the system is also nonautonomous!
</p>
<p>
<b>WS3.3.1</b> introduces the problem of actually solving SLDEs in matrix form. Part <b>(a)</b> asks us to find the general solution to
`vec X' = \[\[1, 1], \[4, -2]] vec X`: recall that the general form of the solution is `vec X = c_1 e^{lambda_1 t} vec v_1 + c_2 e^{lambda_2 t} vec v_2`.
To find the eigenvalues, I prefer the characteristic polynomial method, but you can do whatever you like: they're `lambda_1 = 2` and `lambda_2 = -3`.
Finding the corresponding eigenvectors is not terribly challenging either; you should get `vec v_1 = \[1, 1]` and `vec v_2 = \[-1, 4]`. We can just
plug these in to get `vec X = c_1 e^{2t} \[1, 1] + c_2 e^{-3t} \[-1, 4]`. Not particularly difficult.
</p>
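<p>
Eigenvalue hand-computations are cheap to double-check. A sympy sketch (numpy's `eig` works too, but it normalizes the eigenvectors, which makes eyeballing harder):
</p>
<pre><code>import sympy as sp

A = sp.Matrix([[1, 1], [4, -2]])
for val, mult, vecs in A.eigenvects():
    print(val, [list(v) for v in vecs])
# expect 2 with [1, 1] and -3 with a multiple of [-1, 4]
</code></pre>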
<p>
My general aversion to graphing means I'm not going to try to graph any phase portraits here. Keep in mind that they will probably be on the test!
</p>
<h2>3.4 -> 3.5: Complexity</h2>
<p>
Of course, things don't always work out nicely. Real and distinct eigenvalues are just a special case of eigenvalues in general, which can be repeated
or complex (note that, for real 2x2 matrices, they <i>cannot</i> be both).
</p>
<p>
When your eigenvalues are complex, you'll need to use Euler's identity: `e^{i theta} = cos(theta) + i sin(theta)`. Use this to expand out the solution for
a <i>single</i> eigenvalue (you don't need to find both), and you'll be able to do algebraic munging to get something in the form `vec v + i vec r`.
If you've done it correctly, `vec v` and `vec r` are linearly independent solutions that span the solution space - so you can rewrite as `c_1 vec v + c_2 vec r`, where
`c_2` implicitly eats the `i` term. This can actually be graphed, unlike the complex version.
</p>
<p>
That might sound complicated, but it really isn't too bad. Let's do an example. <b>WS3.4.1</b> asks us to find a general solution for `vec X' = \[\[1, 2], \[-5, 1]] vec X`.
The characteristic polynomial method very quickly yields `lambda = 1 - i sqrt(10)` (note that the complex conjugate of this is the other eigenvalue, but we don't need to worry about that).
The corresponding eigenvector is `vec v = \[i sqrt(10), 5]`. We can substitute this in for the first solution to get `e^{t - isqrt(10)t} \[i sqrt(10), 5]`. This is technically a correct solution,
but it's also <i>nasty</i>, and we can't graph it. Let's simplify! Euler's identity quickly gets us to `e^t (cos(-sqrt(10)t) + isin(-sqrt(10)t)) \[isqrt(10), 5]`. Yikes.
Fortunately, we have another trick up our sleeve: distribution. `e^t \[isqrt(10) cos(-sqrt(10)t) - sqrt(10)sin(-sqrt(10)t), 5cos(-sqrt(10)t) + 5isin(-sqrt(10)t)]`.
This separates out into `e^t(\[-sqrt(10)sin(-sqrt(10)t), 5cos(-sqrt(10)t)] + i \[sqrt(10) cos(-sqrt(10)t), 5sin(-sqrt(10)t)])`. The vectors are linearly independent, so
we can finally turn this into a full solution: `vec X = c_1 e^t \[-sqrt(10)sin(-sqrt(10)t), 5cos(-sqrt(10)t)] + c_2 e^t \[sqrt(10) cos(-sqrt(10)t), 5sin(-sqrt(10)t)]`.
A real-valued solution! Note that the graph of this is a spiral growing outwards from the origin, because the real part of the eigenvalue is positive.
</p>
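<p>
The complex arithmetic here is very easy to fumble (that factor of 5 is a classic casualty), so a mechanical check is worthwhile. A sketch:
</p>
<pre><code>import sympy as sp

t = sp.symbols('t', real=True)

A = sp.Matrix([[1, 2], [-5, 1]])
print(A.eigenvects())  # expect eigenvalues 1 +/- i*sqrt(10)

# Expand one complex solution into (real part) + i*(imaginary part)
lam = 1 - sp.I * sp.sqrt(10)
v = sp.Matrix([sp.I * sp.sqrt(10), 5])
sol = sp.exp(lam * t) * v
print(sol[0].expand(complex=True))  # real and imaginary parts of the first component
print(sol[1].expand(complex=True))  # ditto for the second; note the 5 on both terms
</code></pre>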
<p>
When eigenvalues are repeated, our story gets much worse. There is still a very easy solution if you can find two linearly independent eigen<i>vectors</i>,
in which case the normal solution applies; however, if you cannot find two linearly independent eigenvectors, there is a trick to find a different solution:
if you have the eigenvalue/vector pair `lambda, vec v`, your second solution is `te^{lambda t} vec v + e^{lambda t} vec w`, where `vec w` is a solution to the
linear equation `(A - lambda I) vec w = vec v`. If this sounds complicated, that's probably because it is.
</p>
<p>
Let's do an example. <b>WS3.5.1</b> asks us to solve `vec X' = \[\[3, -4], \[1, -1]] vec X`. This has a repeated eigenvalue `lambda = 1` with only one linearly independent eigenvector,
`vec v = \[2, 1]`, so our first solution is `c_1 e^{t} \[2, 1]`. We need another one. To find that, we solve the equation
`\[\[2, -4], \[1, -2]] vec w = \[2, 1]`. This is easy to solve with Gaussian elimination to get `vec w = \[3, 1]` (one convenient choice - `vec w` isn't unique). Thus, our second solution is the disgusting
`te^t \[2, 1] + e^t \[3, 1]`. This means our final general solution is `vec X = c_1 e^t \[2, 1] + c_2 (te^t \[2, 1] + e^t \[3, 1])`.
</p>
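<p>
One last sketch: sympy confirms both the eigenvector shortage and our choice of `vec w`:
</p>
<pre><code>import sympy as sp

A = sp.Matrix([[3, -4], [1, -1]])
print(A.eigenvects())  # one eigenvector, [2, 1], for the double eigenvalue 1

# Solve (A - I)w = v; the answer is a one-parameter family
w1, w2 = sp.symbols('w1 w2')
print(sp.linsolve((A - sp.eye(2), sp.Matrix([2, 1])), w1, w2))
# {(2*w2 + 1, w2)} - taking w2 = 1 gives our w = [3, 1]
</code></pre>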
<h2>Final Notes</h2>
<p>
That's everything! Note that I covered all the material, but I did so <i>incredibly</i> briefly; be sure to read the previous weekly reviews and quiz reviews
for a more detailed look.
</p>
<p>
Aside from all of the equations and identities here, you'll also need to know some basic tricks:
</p>
<ul>
<li>IBP/U-Sub</li>
<li>Partial Fraction Decomposition</li>
<li>Trig integrals (especially arctangent)</li>
<li>Logarithm and exponent laws</li>
<li>And more... hopefully not</li>
</ul>
<p>
Good luck, and don't forget your balloon hats!
</p>
[/]
[=author "Tyler Clarke"]
[=date "2025-6-6"]
[=subject "Calculus"]
[=title "Differential Equations Exam 1 Review"]
[#post.html]
@@ -1,5 +1,8 @@
 [!]
 [=post-]
+<p>
+<i>This post is part of a series; you can view the next post <a href="[^baseurl]/posts/differential-exam-1.html">here</a>.</i>
+</p>
 <p>
 Welcome once more to Deadly Boring Math! With the tempestuous wight of Quiz 2 rapidly striding towards us (it's <i>tomorrow</i> at the usual studio time),
 I'm doing some last-minute studying, and figured I'd post some worked solutions here. These are by no means exhaustive; if you don't do your own studying,
@@ -30,7 +33,7 @@
 <p>
 The first step is to define some substitutions: `x = u`, `y = u'`, meaning `x' = u'`, and `y' = u''`. Note also that `x' = y`.
 Substituting these values into the equation gives us `y' - 2y + x = sin(t)`: because we have the constraint `x' = y`, this is
-a system of equations. We do some algebra to get `y' = 2y - x + sin(t), x' = y`, which can be written in matrix form as `X' = \[\[0, 1], \[-1, 2]] X + [0, sin(t)]`.
+a system of equations. We do some algebra to get `y' = 2y - x + sin(t), x' = y`, which can be written in matrix form as `X' = \[\[0, 1], \[-1, 2]] X + \[0, sin(t)]`.
 </p>
 <h2>WS3.3.1</h2>
 <p>
@@ -63,4 +66,5 @@
 [=title "Differential Quiz 2"]
 [=subject "Calculus"]
 [=author "Tyler Clarke"]
 [=date "2025-6-2"]
+[#post.html]