\( \newcommand\D{\mathrm{d}} \newcommand\E{\mathrm{e}} \newcommand\I{\mathrm{i}} \newcommand\bigOh{\mathcal{O}} \newcommand{\cat}[1]{\mathbf{#1}} \newcommand\curl{\vec{\nabla}\times} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\RR}{\mathbb{R}} \newcommand{\ZZ}{\mathbb{Z}} \)

Partial Differential Equations


1. Introduction

The fundamental difference between partial differential equations and ordinary differential equations is the number of independent variables. For ODEs, we usually work with functions of time (or some other single parameter). Partial differential equations involve multiple independent variables (e.g., functions of space and time).

One strategy to solve problems like

\begin{equation} \sum^{n}_{\mu=0}a^{\mu}\partial_{\mu}f = 0 \end{equation}

is to rewrite this using a "linear operator"

\begin{equation} \sum^{n}_{\mu=0}a^{\mu}\partial_{\mu} = L \end{equation}

then our problem becomes

\begin{equation} L(f) = 0. \end{equation}

This amounts to finding the kernel of \(L\).

Recall an operator \(L\) is linear if, for arbitrary constants \(c_{1}\) and \(c_{2}\), we have

\begin{equation} L(c_{1}\mu_{1} + c_{2}\mu_{2}) = c_{1}L(\mu_{1}) + c_{2}L(\mu_{2}). \end{equation}

The transport equation in 2-dimensions looks like

\begin{equation} (\partial_{x} + \partial_{y})\mu(x,y) = 0. \end{equation}

The operator \(\partial_{x} + \partial_{y}\) is indeed linear.

One example of a nonlinear PDE is something like the \(\phi^{4}\) model in quantum field theory

\begin{equation} (\partial_{t}^{2}-\partial_{x}^{2})\phi + \phi^{3} = 0. \end{equation}

This isn't linear, but how can we see this? Suppose we had two solutions \(\phi\) and \(\psi\); applying the operator to their sum gives

\begin{equation} (\partial_{t}^{2}-\partial_{x}^{2})(\phi + \psi) + (\phi + \psi)^{3} = (\psi + \phi)^{3} - \phi^{3} - \psi^{3}. \end{equation}

This is because \((\partial_{t}^{2}-\partial_{x}^{2})\phi = -\phi^{3}\) and similarly for \(\psi\), so after cancellation only the cross terms of the cubic remain: \((\phi+\psi)^{3}-\phi^{3}-\psi^{3} = 3\phi^{2}\psi + 3\phi\psi^{2}\), which does not vanish in general.
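
To make the cross-term computation concrete, here is a quick symbolic check (a sketch using SymPy, which is my choice of tool here, not the notes'):

```python
import sympy as sp

x, t = sp.symbols('x t')
phi = sp.Function('phi')(x, t)
psi = sp.Function('psi')(x, t)

# The phi^4 operator: N(u) = (d_t^2 - d_x^2) u + u^3.
def N(u):
    return sp.diff(u, t, 2) - sp.diff(u, x, 2) + u**3

# The linear parts are additive, so this difference isolates the
# failure of superposition: only the cubic cross terms survive.
print(sp.expand(N(phi + psi) - N(phi) - N(psi)))
# -> 3*phi(x, t)**2*psi(x, t) + 3*phi(x, t)*psi(x, t)**2
```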

We belabor this point about linearity because, if we have a linear PDE described by a linear differential operator \(L(-)\) and any two solutions \(\mu_{1}\) and \(\mu_{2}\), then we can construct infinitely many solutions as linear combinations \(c_{1}\mu_{1}+c_{2}\mu_{2}\) for any real numbers \(c_{1}\), \(c_{2}\).

1.1. Grocery List of Examples

We've seen the transport equation and the \(\phi^{4}\) field theory. Let's just enumerate some that we'd encounter in physics.

The shock wave equation

\begin{equation} \partial_{t}\mu + \mu\partial_{x}\mu = 0. \end{equation}

Laplace's equation

\begin{equation} \partial_{x}^{2}\mu + \partial_{y}^{2}\mu = 0. \end{equation}

The wave equation

\begin{equation} \partial_{t}^{2}\mu - \partial_{x}^{2}\mu = 0. \end{equation}

We could add some self-interaction contribution; for example, including a cubic self-interaction gives us:

\begin{equation} (\partial_{t}^{2} - \partial_{x}^{2})\mu + \mu^{3} = 0. \end{equation}

The dispersive wave equation

\begin{equation} \partial_{t}\mu + \mu\partial_{x}\mu + \partial_{x}^{3}\mu = 0. \end{equation}

The Schrödinger equation

\begin{equation} \partial_{t}\mu - \I\partial_{x}^{2}\mu = 0. \end{equation}

The order of a partial differential equation is the order of its highest derivative. For mixed derivatives like \(\partial_{x}^{m}\partial_{y}^{n}\), we sum the orders to get \(m+n\).

If \(L_{1}(-)\) is a differential operator of order \(m\), and \(L_{2}(-)\) a differential operator of order \(n\), then \(L_{1}L_{2}\) is a differential operator of order \(m+n\).

We call a partial differential equation Linear if any linear combination of its solutions is again a solution.

Let \(L(f)=g(x,y,\dots,t)\) be a partial differential equation. Then we call it Homogeneous if \(g=0\). Otherwise we call it Inhomogeneous.

Consider the partial differential equation

\begin{equation} \partial_{t}^{2}\mu - \partial_{x}^{2}\mu + x^{4} = 0. \end{equation}

It is linear and inhomogeneous.

Consider the shock wave equation. Using the chain rule, we can rewrite it as

\begin{equation} \partial_{t}\mu + \frac{1}{2}\partial_{x}(\mu^{2}) = 0. \end{equation}

If we introduce

\begin{equation} \vartheta(\mu) = \mu^{2} \end{equation}

then we can write

\begin{equation} L = \partial_{t} + \frac{1}{2}\partial_{x}\circ\vartheta \end{equation}

to render our equation

\begin{equation} L(\mu) = 0. \end{equation}

But \(\vartheta\) is nonlinear, so \(L\) is not a linear operator: the PDE is nonlinear.
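
A one-line symbolic check of this chain-rule identity (again a SymPy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t')
mu = sp.Function('mu')(x, t)

# mu * d_x(mu) - (1/2) d_x(mu^2) should vanish identically.
print(sp.simplify(mu*sp.diff(mu, x) - sp.Rational(1, 2)*sp.diff(mu**2, x)))
# -> 0
```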

2. ODEs and PDEs revisited

2.1. Solving an ODE

Suppose we had the ODE

\begin{equation} \frac{\D^{2}\mu(x)}{\D x^{2}} = 0. \end{equation}

We can integrate it once. The left-hand side becomes

\begin{equation} \int LHS\,\D x = \int\frac{\D}{\D x}\left(\frac{\D\mu(x)}{\D x}\right)\D x = \frac{\D\mu(x)}{\D x} + c_{1}. \end{equation}

The right-hand side

\begin{equation} \int RHS\,\D x = \int 0\,\D x = 0. \end{equation}

Setting the two equal gives us

\begin{equation} \frac{\D\mu(x)}{\D x} + c_{1} = 0. \end{equation}

Integrating again with respect to \(x\) gives us

\begin{equation} \int LHS\,\D x = \int \frac{\D}{\D x}(\mu(x))\,\D x + \int c_{1}\,\D x = \mu(x) + c_{0} + c_{1}x. \end{equation}

The right-hand side again vanishes. The constants of integration \(c_{0}\) and \(c_{1}\) encode the initial conditions:

\begin{equation} \mu(0) = -c_{0} \end{equation}

and

\begin{equation} \mu'(0) = -c_{1}. \end{equation}
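
For comparison, SymPy's dsolve recovers the same two-parameter family (the constants C1, C2 play the roles of \(-c_{0}\), \(-c_{1}\)):

```python
import sympy as sp

x = sp.symbols('x')
mu = sp.Function('mu')

# General solution of mu''(x) = 0 is a two-parameter family.
print(sp.dsolve(sp.Eq(mu(x).diff(x, 2), 0), mu(x)))
# -> Eq(mu(x), C1 + C2*x)
```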

2.2. For PDEs

If we try to solve a partial differential equation in this manner, let's make \(\mu\) a function of two variables and solve

\begin{equation} \partial_{x}^{2}\mu(x,y) = 0 \end{equation}

using the same strategy. The first integral with respect to \(x\) gives us

\begin{equation} \partial_{x}\mu(x,y) + c_{1}(y) = 0 \end{equation}

and the second integral gives us

\begin{equation} \mu(x,y) + c_{0}(y) + c_{1}(y)\cdot x = 0. \end{equation}

Then we have

\begin{equation} \mu(x,y) = -c_{0}(y) - c_{1}(y)\cdot x \end{equation}

where \(\mu(0,y) = \mu_{0}(y) = -c_{0}(y)\) is one initial condition, so

\begin{equation} \mu(x,y) = \mu_{0}(y) - c_{1}(y)\cdot x. \end{equation}

Compare this to the general solution of the second-order ODE: the constants of integration have been promoted to arbitrary functions of \(y\).

2.3. Example with mixed derivatives

Consider the PDE

\begin{equation} \partial_{x}\partial_{y}\mu(x,y) = 0. \end{equation}

We assume the mixed partial \(\partial_{x}\partial_{y}\mu(x,y)\) is continuous, so we may swap the order of differentiation. We integrate the left-hand side of our PDE with respect to \(x\) to get

\begin{equation} \int\partial_{x}\bigl(\partial_{y}\mu(x,y)\bigr)\D x = \partial_{y}\mu(x,y) + f(y). \end{equation}

Integrating the right-hand side of our PDE gives us zero, so the first integration yields:

\begin{equation} \partial_{y}\mu(x,y) + f(y) = 0. \end{equation}

We can integrate with respect to \(y\), rearranging terms, to find

\begin{equation} \mu(x,y) = g(x) - \int f(y)\,\D y. \end{equation}

Here \(g(x)\) is the "integration constant" with respect to \(y\). Writing \(h(y) = -\int f(y)\,\D y\), the general solution is \(\mu(x,y) = g(x) + h(y)\) for arbitrary differentiable \(g\) and \(h\).
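
A quick symbolic check that any separated sum \(g(x) + h(y)\) solves the PDE (SymPy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
g, h = sp.Function('g'), sp.Function('h')

# Any sum g(x) + h(y) is annihilated by the mixed derivative d_x d_y.
mu = g(x) + h(y)
print(sp.diff(mu, x, y))  # -> 0
```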

3. Geometric and Coordinate Perspectives

We will start solving simple partial differential equations geometrically. We begin with simple first-order PDEs to demonstrate the method, which generalizes to more complicated first-order PDEs.

Remember how we write vectors in calculus using the unit-vectors

\begin{equation} \vec{v} = v_{1}\mathbf{i} + v_{2}\mathbf{j} + v_{3}\mathbf{k}. \end{equation}

We can pretend that partial derivatives are unit vectors, writing instead

\begin{equation} \vec{v} = v_{1}\partial_{x} + v_{2}\partial_{y} + v_{3}\partial_{z}. \end{equation}

This is justified by appealing to differential geometry, where this is in fact a tangent vector.

In vector calculus, we have directional derivatives of scalar functions given by \(\vec{v}\cdot\vec{\nabla} f\). This is also written as \(\partial_{\vec{v}}f\) and is coordinate independent.

We frequently use Einstein notation to track sums for us. Instead of using subscripts, we could write

\begin{equation} \vec{v} = v^{1}\partial_{1} + v^{2}\partial_{2} + v^{3}\partial_{3}. \end{equation}

This permits us to contract upper indices with any matching lower indices:

\begin{equation} v^{\mu}\partial_{\mu} := \sum_{\mu=1}^{3}v^{\mu}\partial_{\mu}. \end{equation}

This comes in handy. But be careful: we sum over contracted indices only when an index "upstairs" matches an index "downstairs". If both indices are "downstairs", there is no summation in the Einstein convention.

One quirk of Einstein convention, when we sum over a contracted index, we refer to that variable as a "dummy index". We can relabel it to any fresh variable

\begin{equation} v^{\mu}\partial_{\mu} = v^{\alpha}\partial_{\alpha} \end{equation}

because both sides are the same sums. We just have to be careful to relabel all instances of the dummy variable.

The other quirk is, if \(x^{\mu}\) is a coordinate vector, then the partial derivative with respect to it (i.e., the gradient) is a lower-indexed quantity

\begin{equation} \frac{\partial}{\partial x^{\mu}} = \partial_{\mu}. \end{equation}

When we change coordinates to \(y^{\nu}=y^{\nu}(x^{1},\dots, x^{n})\), the chain rule requires

\begin{equation} \frac{\partial}{\partial y^{\nu}} = \frac{\partial x^{\mu}}{\partial y^{\nu}}\frac{\partial}{\partial x^{\mu}} \end{equation}

that is, we multiply by the Jacobian matrix. On the other hand, the coefficients of a vector transform as

\begin{equation} v^{\mu}\to v^{\mu}\frac{\partial y^{\nu}}{\partial x^{\mu}}, \end{equation}

i.e., by multiplying by the inverse matrix of the Jacobian.

Very rarely, the literature discusses a "Euclidean summation convention", where we sum over any repeated index regardless of whether the indices live upstairs, downstairs, or mixed. We use the Einstein convention.

The quantity

\begin{equation} \vec{v} = v^{\mu}\partial_{\mu} \end{equation}

is coordinate independent.

We can see this by changing coordinates \(x^{\mu}\to y^{\nu}\): the coefficients \(v^{\mu}\) transform as

\begin{equation} v^{\mu}\to\widetilde{v}^{\nu} = \frac{\partial y^{\nu}}{\partial x^{\mu}}v^{\mu}. \end{equation}

The partial derivatives transform as

\begin{equation} \partial_{\mu}\to\widetilde{\partial}_{\nu}=\frac{\partial x^{\mu}}{\partial y^{\nu}}\frac{\partial}{\partial x^{\mu}}. \end{equation}

Changing the dummy variables of the tilde quantities, we find

\begin{equation} v^{\mu}\partial_{\mu} \to \widetilde{v}^{\nu}\widetilde{\partial}_{\nu}=\left(\frac{\partial y^{\nu}}{\partial x^{\alpha}}v^{\alpha}\right)\left(\frac{\partial x^{\beta}}{\partial y^{\nu}}\frac{\partial}{\partial x^{\beta}}\right). \end{equation}

Contracting the \(\nu\) indices on the Jacobian and its inverse gives us the identity matrix (Kronecker delta)

\begin{equation} \left(\frac{\partial y^{\nu}}{\partial x^{\alpha}}v^{\alpha}\right)\left(\frac{\partial x^{\beta}}{\partial y^{\nu}}\frac{\partial}{\partial x^{\beta}}\right) = {\delta_{\alpha}}^{\beta}v^{\alpha}\partial_{\beta}. \end{equation}

This is precisely the same as summing over both \(\alpha\) and \(\beta\), but the Kronecker delta vanishes when \(\alpha\neq\beta\), so

\begin{equation} {\delta_{\alpha}}^{\beta}v^{\alpha}\partial_{\beta} = \sum_{\gamma}v^{\gamma}\partial_{\gamma} \end{equation}

precisely as desired.
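
We can also spot-check the invariance numerically: transform the components of \(\vec{v}\) by a random stand-in Jacobian and the gradient components by its inverse, and confirm the contraction is unchanged. A minimal NumPy sketch; the matrices are arbitrary stand-ins, not data from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

J = rng.normal(size=(3, 3))   # stand-in Jacobian dy^nu/dx^mu (invertible a.s.)
v = rng.normal(size=3)        # upper-index components v^mu
grad = rng.normal(size=3)     # lower-index components (a gradient)

v_new = J @ v                         # v^mu -> (dy^nu/dx^mu) v^mu
grad_new = np.linalg.inv(J).T @ grad  # d_mu -> (dx^mu/dy^nu) d_mu

# The contraction v^mu d_mu is invariant under the change of coordinates.
print(np.isclose(v @ grad, v_new @ grad_new))  # -> True
```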

3.1. Example: First-Order PDE with Constant Coefficients

Consider the partial differential equation

\begin{equation} a\partial_{x}\mu(x,y) + b\partial_{y}\mu(x,y) = 0 \end{equation}

where \(a\) and \(b\) are real constants. We could introduce

\begin{equation} v^{\alpha}\partial_{\alpha} = a\partial_{x} + b\partial_{y} \end{equation}

so our PDE is

\begin{equation} v^{\alpha}\partial_{\alpha}\mu = (\vec{v}\cdot\vec{\nabla})\mu = 0. \end{equation}

So far, so good, right?

Our first step is to consider a vector orthogonal to \(v^{\alpha}\), i.e., a \(\vec{w}\) such that \(\vec{v}\cdot\vec{w}=0\). One obvious candidate is

\begin{equation} \vec{w} = -b\mathbf{i} + a\mathbf{j}. \end{equation}

So what do we do now? If we write

\begin{equation} \mu(x,y) = f(-v^{2}x + v^{1}y) \end{equation}

for some differentiable \(f(-)\), then this satisfies our partial differential equation: by the chain rule, the two terms contribute \(-v^{1}v^{2}f'\) and \(+v^{2}v^{1}f'\), which cancel. The reader can verify

\begin{equation} (v^{1}\partial_{1} + v^{2}\partial_{2})f(-v^{2}x + v^{1}y) = 0. \end{equation}

More generally, for some \(\vec{w}\) orthogonal to \(\vec{v}\), we find that solutions to the PDE may be constructed as

\begin{equation} \mu(x^{1},\dots,x^{n}) = f(\vec{w}\cdot\vec{x}) = f(w^{1}x^{1} + \dots + w^{n}x^{n}) \end{equation}

for arbitrary \(f\), whenever we have \(\mu\colon\mathbb{R}^{n}\to\mathbb{R}\).

The lines described by

\begin{equation} \vec{w}\cdot\vec{x} = \mbox{constant} \end{equation}

are called Characteristic Lines.

So \(c = \vec{w}\cdot\vec{x}\) is constant along each characteristic line, and \(\mu\) takes the constant value \(f(c)\) there. And, ta-duh, remember from geometry that the graph of \(\mu(\vec{x})\) describes a surface swept out by these lines. This is where geometry comes up.
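
A symbolic verification that \(f(-bx + ay)\) is annihilated by the operator for arbitrary \(f\) (SymPy sketch):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
f = sp.Function('f')

# mu(x, y) = f(-b*x + a*y) is constant along the characteristic lines.
mu = f(-b*x + a*y)
print(sp.simplify(a*sp.diff(mu, x) + b*sp.diff(mu, y)))  # -> 0
```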

3.2. Coordinate Perspective

When we write

\begin{equation} a\partial_{x}\mu(x,y) + b\partial_{y}\mu(x,y) = \vec{v}\cdot\vec{\nabla}\mu(x,y) = 0 \end{equation}

then we just need to rotate (and rescale) coordinates so that \(\vec{v}\) points along the \(x'\)-axis. This corresponds to the change of coordinates

\begin{equation} x' = ax + by \end{equation}

and

\begin{equation} y' = -bx + ay. \end{equation}

Then our differential operator becomes

\begin{equation} a\partial_{x} + b\partial_{y} = a(a\partial_{x'} - b\partial_{y'}) + b(b\partial_{x'} + a\partial_{y'}) \end{equation}

which simplifies to

\begin{equation} a\partial_{x} + b\partial_{y} = (a^{2} + b^{2})\partial_{x'}. \end{equation}

Hence we obtain the PDE

\begin{equation} (a^{2} + b^{2})\partial_{x'}\mu(x',y') = 0 \end{equation}

whose general solution is independent of \(x'\):

\begin{equation} \mu(x',y') = f(y') = f(-bx + ay). \end{equation}

This coincides completely with the geometric approach.
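
The same computation can be checked symbolically, writing \(\mu\) in the primed coordinates (here \(F\) is a stand-in for \(\mu\) as a function of \((x', y')\)):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
F = sp.Function('F')

# mu in rotated coordinates: x' = a*x + b*y, y' = -b*x + a*y.
mu = F(a*x + b*y, -b*x + a*y)
lhs = sp.expand(a*sp.diff(mu, x) + b*sp.diff(mu, y))
print(sp.simplify(lhs))
# -> (a**2 + b**2) times dF/dx'; the d/dy' contributions cancel.
```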

4. Variable Coefficient First-Order PDEs

4.1. Motivating Example

Consider the PDE

\begin{equation} \partial_{x}\mu(x,y) + y\partial_{y}\mu(x,y) = 0. \end{equation}

This corresponds to the directional derivative in the direction of the vector field given by

\begin{equation} \vec{v} = (1, y). \end{equation}

The integral curves associated to this vector field satisfy

\begin{equation} \frac{\D y}{\D x} = y \implies y(x) = y_{0}e^{x}. \end{equation}

We recover our original PDE from the total derivative

\begin{equation} \frac{\D}{\D x}\mu(x, y(x)) = (\partial_{x} + \frac{\D y}{\D x}\partial_{y})\mu(x,y(x)) = 0. \end{equation}

Along an integral curve, \(\mu(x, y(x))\) is therefore constant, equal to its value \(\mu(0, y_{0})\) at \(x=0\). From \(y = y_{0}e^{x}\) we have \(y_{0} = ye^{-x}\), which is constant along each curve. Hence

\begin{equation} \frac{\D}{\D x}f(y_{0}) = 0, \end{equation}

and we just insist \(\mu(x,y) = f(y_{0}) = f(ye^{-x})\) for arbitrary \(f\).
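
A quick symbolic check (SymPy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# mu(x, y) = f(y*exp(-x)) should satisfy mu_x + y*mu_y = 0.
mu = f(y*sp.exp(-x))
print(sp.simplify(sp.diff(mu, x) + y*sp.diff(mu, y)))  # -> 0
```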

4.1.1. More generally

If we have

\begin{equation} a(x,y)\partial_{x}\mu(x,y) + b(x,y)\partial_{y}\mu(x,y) = c(x,y) \end{equation}

then we could look at the graph of the function \(\mu(x,y)\)

\begin{equation} S := \{(x,y,\mu(x,y))\} \end{equation}

as defining a surface. We really want to construct this surface, because it's the solution to our PDE.

Its normal vector is given by

\begin{equation} \vec{n} = (\partial_{x}\mu(x,y), \partial_{y}\mu(x,y), -1). \end{equation}

Hence our PDE is equivalent to asserting that the vector \((a(x,y), b(x,y), c(x,y))\) is orthogonal to the normal vector, i.e., it lies in the tangent plane at each point of the surface.

We start by looking for a curve \(\gamma\colon[0,1]\to S\) which fits the PDE's initial conditions. This is secretly the same as looking for a particular solution of the PDE. The tangent vector for \(\gamma(s)\) is tangent to the surface. So it satisfies the system of equations

\begin{align} \frac{\D x(s)}{\D s} &= a(x(s), y(s))\\ \frac{\D y(s)}{\D s} &= b(x(s), y(s))\\ \frac{\D z(s)}{\D s} &= c(x(s), y(s)) \end{align}

Once we have a solution, we reconstruct the surface \(S\) as a union of these curves. The resulting surface is called the Integral Surface for the vector field.

Integral curves for the vector field \((a(x,y), b(x,y), c(x,y))\) are called Characteristic Curves.

4.2. Steps involved in solving Homogeneous PDEs

To solve

\begin{equation} \vec{a}(\vec{x})\cdot\vec{\nabla}\mu = 0 \end{equation}

we enumerate the steps as follows:

  1. Parametrize the PDE: \(\D x^{i}(s)/\D s = a^{i}(\vec{x}(s))\) for \(i=1,\dots,n\), and \(\D\mu(s)/\D s = 0\)
  2. Parametrize the initial conditions
    • Introduce parameters \(\vec{\xi}\)
    • Set \(s=0\)
    • This gives us initial conditions for \(\vec{x}(0) = \vec{\xi}\) and \(\mu(0)=\mu_{0}(\vec{\xi})\).
  3. Find the characteristic curves
    • Solve the system of equations from step 1 to find \(\vec{x}(s)\)
    • Substitute in \(\vec\xi\) for \(\vec{x}(0)\), and \(\mu_{0}(\vec\xi)\) for \(\mu(0)\)
  4. Eliminate the parameters
    • We write the parameters \(\vec\xi\) introduced in step 2 in terms of the independent variables \(\vec{x}\) and dependent variable \(\mu\)
  5. Then we have the solution

For the PDE

\begin{equation} \partial_{t}\mu + e^{x}\partial_{x}\mu = 0 \end{equation}

subject to the initial condition \(\mu(x, 0) = \cosh(x)\), we follow the steps:

  1. Parametrize PDE: We have \(\D t/\D s = 1\), \(\D x/\D s = e^{x(s)}\), \(\D\mu/\D s = 0\)
  2. Parametrize initial conditions: at \(s=0\) we have \(t=0\), \(x=\xi\), and \(\mu=\cosh(\xi)\).
  3. Find the characteristic curves:
    • from \(\D t/\D s=1\), integrating both sides from \(s'=0\) to \(s'=s\) gives us \(t=s\)
    • \(\D x/\D s = e^{x(s)}\) becomes \(e^{-x}\D x/\D s = 1\); integrating both sides gives us \(-e^{-x(s)}+e^{-x(0)}=s\). Substituting in the initial condition \(x(0) = \xi\) and rearranging terms gives us \(e^{-x} = e^{-\xi}-s\)
    • \(\D\mu/\D s = 0\) gives us the constant value \(\mu = \cosh(\xi)\).
  4. Eliminate the parameters, i.e., solve for \(s\) and \(\xi\) in terms of \(x\) and \(t\):
    • \(s = t\)
    • From \(e^{-x} = e^{-\xi}-s\) and \(s=t\) we have \(e^{-\xi} = e^{-x}+s = e^{-x} + t\). Taking the logarithm of both sides gives \(\xi = \xi(x,t) = -\log(e^{-x} + t)\).
    • NB: the solutions to \(\xi = \mbox{constant}\) give us the characteristics
  5. Then we have the solution:
    • \(\mu = \cosh(\xi)\), and using our formula for \(\xi\) gives the solution \(\mu(x, t) = \cosh(\xi(x,t)) = \frac{1}{2}\left(t + e^{-x}\right) + \frac{1}{2\left(t + e^{-x}\right)}\); we verify this symbolically below.
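
A symbolic check of both the PDE and the initial condition (SymPy sketch; we declare the symbols real so the logarithms simplify):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Solution from the method of characteristics: mu = cosh(xi(x, t)).
xi = -sp.log(sp.exp(-x) + t)
mu = sp.cosh(xi)

print(sp.simplify(sp.diff(mu, t) + sp.exp(x)*sp.diff(mu, x)))  # -> 0
print(sp.simplify(mu.subs(t, 0)))                              # -> cosh(x)
```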

Consider the nonlinear wave equation \(\partial_{t}\mu(x,t) + \mu(x,t)\partial_{x}\mu(x,t)=0\). We impose initial conditions \(\mu(x,0) = f(x)\).

Let \(a(x,t,\mu) = 1\), \(b(x,t,\mu) = \mu\), \(c(x,t,\mu) = 0\).

  1. Parametrize the PDE
    • We have \(\D t(s)/\D s = a(x,t,\mu) = 1\),
    • \(\D x(s)/\D s = b(x,t,\mu) = \mu\), and
    • \(\D\mu(s)/\D s = c(x,t,\mu) = 0\).
  2. Parametrize the initial conditions
    • \(t(s_{0}) = 0\)
    • \(x(s_{0}) = \xi\)
    • \(\mu(s_{0}) = f(\xi)\)
  3. Find the characteristic curves
    • \(\D t/\D s = 1\) implies \(t = s - s_{0}\); taking \(s_{0} = 0\) gives \(t = s\)
    • \(\D\mu(s)/\D s = 0\) implies \(\mu(s) = \mu(s_{0}) = f(\xi)\)
    • since \(\mu\) is constant along each curve, \(\D x/\D s = \mu\) implies \(x(s) = (s - s_{0})\mu(s_{0}) + x(s_{0})\), i.e., \(x(s) = s\mu(s) + \xi\)
  4. Eliminate the parameters \(s\) and \(\xi\)
    • We find from \(t = s\) that \(s = t\)
    • From \(x(s) = s\mu(s) + \xi\) and \(\mu(s) = f(\xi)\) we have the implicit relation \(x = tf(\xi) + \xi\). In particular, this means that \(x - t\mu = \xi\).
  5. Solve for \(\mu\)
    • We find \(\mu = f(x - t\mu)\) as the implicit relation defining the solution.

Observe this relation can break down at times \(t>0\): implicit differentiation gives \(\partial_{x}\mu = f'/(1 + tf')\), which blows up when the denominator vanishes, \(1 + tf' = 0\). We can check both claims symbolically.
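
Both claims follow from implicit differentiation of \(F(x,t,\mu) = \mu - f(x - t\mu) = 0\), which we can carry out for an arbitrary profile \(f\) (SymPy sketch):

```python
import sympy as sp

x, t, u = sp.symbols('x t u')
f = sp.Function('f')

# Implicit relation F(x, t, u) = 0, with u playing the role of mu(x, t).
F = u - f(x - t*u)

# Implicit differentiation: mu_x = -F_x/F_u, mu_t = -F_t/F_u.
mu_x = -sp.diff(F, x) / sp.diff(F, u)
mu_t = -sp.diff(F, t) / sp.diff(F, u)

print(sp.simplify(mu_t + u*mu_x))  # -> 0  (the PDE holds)
print(sp.simplify(mu_x))           # -> f'/(1 + t*f'), singular when 1 + t*f' = 0
```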

4.3. Steps involved in solving Inhomogeneous PDEs

To solve

\begin{equation} \vec{a}(\vec{x}, \mu(\vec{x}))\cdot\vec{\nabla}\mu = f(\vec{x}) \end{equation}

we enumerate the steps as follows:

  1. Parametrize the PDE: \(\D x^{i}(s)/\D s = a^{i}(\vec{x}(s), \mu(s))\) for \(i=1,\dots,n\), and \(\D\mu(s)/\D s = f(\vec{x}(s))\)
  2. Parametrize the initial conditions
    • Introduce parameters \(\vec{\xi}\)
    • Set \(s=0\)
    • This gives us initial conditions for \(\vec{x}(0) = \vec{\xi}\) and \(\mu(0)=\mu_{0}(\vec{\xi})\).
  3. Find the characteristic curves
    • Solve the system of equations from step 1 to find \(\vec{x}(s)\)
    • Substitute in \(\vec\xi\) for \(\vec{x}(0)\), and \(\mu_{0}(\vec\xi)\) for \(\mu(0)\)
  4. Eliminate the parameters
    • We write the parameters \(\vec\xi\) introduced in step 2 in terms of the independent variables \(\vec{x}\) and dependent variable \(\mu\)
  5. Then we have the solution

Consider the inhomogeneous nonlinear partial differential equation \(\partial_{t}\mu + \mu\partial_{x}\mu = xe^{t}\) with \(\mu(x,0)=f(x)\).

  1. Parametrize the PDE
    • \(\D t/\D s = 1\)
    • \(\D x/\D s = \mu\)
    • \(\D\mu/\D s = xe^{t}\)
  2. Parametrize the initial conditions
    • Set \(t(0)=0\)
    • Introduce \(x(0)=\xi\)
    • \(\mu(0) = f(\xi)\)
  3. Find the characteristic curves
    • \(t = s\) from the parametrized ODEs and initial condition.
    • This renders our system a coupled pair of ODEs, \(\D x/\D s = \mu\) and \(\D\mu/\D s = xe^{s}\), which combine into \(\D^{2}x/\D s^{2} = xe^{s}\). The general solution is a linear combination of the modified Bessel function of the second kind \(K_{0}(2e^{s/2})\) and of the first kind \(I_{0}(2e^{s/2})\). As \(s\to0\) the argument tends to \(2\sqrt{e}\approx 3.2974425\), so both Bessel functions remain finite there, but the resulting expressions are unwieldy.
  4. Uh…wait, I don't think this is tractable anymore!

Actually, I think a change of variables is needed from \(x\) to either \(\log(x)\) or \(\exp(x)\). But I could be wrong.
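
Even without a clean closed form, the characteristic system integrates numerically without fuss. A minimal SciPy sketch, taking (hypothetically) the initial profile \(f = \cosh\) from the earlier example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic ODEs for  mu_t + mu mu_x = x e^t:
#   dt/ds = 1,  dx/ds = mu,  dmu/ds = x e^t.
def rhs(s, state):
    t, x, mu = state
    return [1.0, mu, x*np.exp(t)]

f = np.cosh  # hypothetical initial profile mu(x, 0) = f(x)

for xi in np.linspace(-1.0, 1.0, 5):
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, xi, f(xi)], rtol=1e-9)
    t, x, mu = sol.y[:, -1]
    print(f"xi = {xi:+.2f}: characteristic reaches (t, x, mu) = ({t:.3f}, {x:.3f}, {mu:.3f})")
```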


Last Updated 2021-06-01 Tue 10:00.