4  Partial Differential Equations

We now move on to our next topic: partial differential equations. First a definition, which you may recall from a few weeks ago: any equation involving a function of two or more variables and some of its partial derivatives is a partial differential equation.

🔑 Key idea
The order of a partial differential equation is the highest order of the partial derivatives that appear; for example, the order of \(f_{xyy}+f_{yy}+3f_{yx}=0\) is \(3\).

When we do not know initial or boundary conditions for a partial differential equation, we find its general solution, which contains integration “constants”. (These might be functions of a subset of the variables!)

When we do have enough information (boundary conditions such as “\(u(t,0)\) is always 0”, or initial conditions that tell us what’s going on at \(t=0\)) to fix all the constants completely, we find the particular solution of the equation.

Partial differential equations are both very important, and very difficult to solve. In this chapter, we’ll talk about a handful of useful techniques that will help us to solve equations in a bunch of special cases. These will get us quite a long way, but certainly not as far as we can get with ordinary differential equations; trying to find solutions to a broader class of PDEs is difficult enough to be a whole career for many, many mathematicians.

In a lot of cases, “solving” a partial differential equation means “guessing what the solution looks like and then proving that we’re right”. This is a perfectly valid approach to the world, and is called “applied mathematics”.

Before getting into how to solve them, let’s first look at a few examples:

4.1 Some important partial differential equations

These are a few examples of differential equations that we would want to solve.

Examples
Example: The wave equation

In one dimension1, the wave equation for a function \(u(t,x)\) is \[\begin{aligned} \pdv{^2 u}{x^2} = \frac{1}{c^2} \pdv{^2 u}{t^2}. \end{aligned}\]

The equation describes the movement of waves in one dimension: for example, the displacement of a string that has been plucked. Here, \(c\) is a parameter called the “wave propagation speed”. Later in this chapter, we’ll find a general solution to the wave equation. For now, let’s look at it in higher dimensions.

In two spatial dimensions, the wave equation for a field \(u(x,y,t)\) is \[\begin{aligned} \pdv{^2 u}{x^2} + \pdv{^2 u}{y^2} = \frac{1}{c^2} \pdv{^2 u}{t^2}. \end{aligned}\] This describes waves on a membrane, such as the surface of a drum.

By now you can guess what’s next: in three dimensions, the field \(u(x,y,z,t)\) must satisfy \[\begin{aligned} \pdv{^2 u}{x^2} + \pdv{^2 u}{y^2} + \pdv{^2 u}{z^2} = \frac{1}{c^2} \pdv{^2 u}{t^2}. \end{aligned}\] This equation describes wave propagation in 3D – for example, the movement of sound across a room, or the propagation of electromagnetic waves (e.g., light).

Some of my colleagues at Durham have built a brilliant website that helps you visualise how the solutions to different PDEs work. Here’s their page on the wave equation: Try it out!

Thinking about the wave equation is a good opportunity to revisit the idea of the gradient operator. In Chapter 3, we thought about \[\begin{aligned} \boldsymbol{\nabla} = \boldsymbol{i} \pdv{}{x} + \boldsymbol{j} \pdv{}{y}, \end{aligned}\] and said that when we calculate \(\boldsymbol{\nabla}f\) we can “multiply through” to put the \(f\) inside the vector.

Let’s boldly take the dot product of \(\boldsymbol{\nabla}\) with itself2. \[\begin{aligned} \nabla^2 &= \boldsymbol{\nabla} \cdot \boldsymbol{\nabla} \\ & = \left( \boldsymbol{i} \pdv{}{x} + \boldsymbol{j} \pdv{}{y} \right) \cdot \left( \boldsymbol{i} \pdv{}{x} + \boldsymbol{j} \pdv{}{y} \right) \\ & = \pdv{^2 }{x^2} + \pdv{^2 }{y^2}. \end{aligned}\]

This differential operator is called the Laplacian. In three (spatial) dimensions, we have \[\begin{aligned} \nabla^2 = \pdv{^2 }{x^2} + \pdv{^2 }{y^2} + \pdv{^2 }{z^2}; \end{aligned}\] as we go higher, we get more and more terms.

Using the Laplacian, we can write the wave equation in any number of spatial dimensions as \[\begin{aligned} \nabla^2 u = \frac{1}{c^2} \pdv{^2 u}{t^2}. \end{aligned}\]

Note that the Laplacian only involves derivatives with respect to spatial variables - so the Laplacian of \(u(t,x,y)\) is \[\nabla^2 u = \pdv{^2 u}{x^2} + \pdv{^2 u}{y^2},\] and we don’t include the second partial derivative corresponding to time.

Examples
Example: Laplace’s equation

Somewhat simpler than the wave equation, Laplace’s equation has no time-dependence: \[\begin{aligned} \nabla^2 u = 0. \end{aligned}\] It describes steady-state situations, or equilibria: for example, the equilibrium temperature distribution in a heat-flow problem, or the electrostatic potential in uncharged regions.

When the right-hand side is a density function that does not depend on \(u\), \[\begin{aligned} \nabla^2 u = \rho(x,y,z), \end{aligned}\] we have Poisson’s equation. It describes, for example, the gravitational potential within the material of a star or planet, when \(\rho(x,y,z)\) describes how the density of the matter changes within it.

A common factor in these examples is that they are linear: the variable in question, \(u\), appears at most linearly in the problem. This is not a coincidence: they’re all deliberately chosen to be examples that we can solve. Solving nonlinear PDEs is much harder, and in general there is no very simple way to do it.

4.2 Partial differential equations: dealing with special cases

In this section, we’ll start to develop some tips and tricks for solving PDEs. We’ll start with a couple of simpler techniques, and build up to more powerful ones. The first couple come under the category of “educated guessing”.

Technique 1: Just spot the solution

Sometimes, the PDE is simple enough that we can just spot its solution. For example, if all the derivatives are with respect to the same variable, we can treat the PDE “like an ODE”, as long as we remember that “constants of integration” now means “functions that don’t depend on (e.g.) \(x\)”.

💪 Try it out
Find the general solution of \(\pdv{u}{x} = 0\), where \(u = u(x,y)\).

Answer:

If this were an ODE, the general solution would be \(u=c\), where \(c\) is a constant. When we’re working with \(u= u(x,y)\), any function that depends only on \(y\) will give \(\pdv{u}{x} = 0\). So the general solution is \[\begin{aligned} u(x,y) = f(y). \end{aligned}\]

💪 Try it out
Find the general solution \(u(x,t)\) of \[\begin{aligned} \pdv{^2 u}{x^2} = 0 \end{aligned}\] and the particular solution when \(u(0,t) = 1\), \(u(1,t) = \sin(t)\).

Answer:

Again, if we had an ODE we’d know the general solution was \(u(x) = Ax + B\). Here our “constants” are functions of \(t\), so the general solution is \[\begin{aligned} u(x,t) = x f(t) + g(t). \end{aligned}\]

Substituting in \(x=0\) we obtain \(1 = g(t)\), and when \(x=1\), \(f(t)+ 1 = \sin(t)\).

So the particular solution is \[\begin{aligned} u(x,t) = x \sin(t) - x + 1. \end{aligned}\]
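These notes don’t use any programming, but if you have Python’s sympy library to hand, a quick symbolic check is a nice way to confirm an answer like this one:

```python
import sympy as sp

x, t = sp.symbols('x t')

# The particular solution we found
u = x*sp.sin(t) - x + 1

# Check the PDE: u_xx = 0
assert sp.diff(u, x, 2) == 0

# Check the boundary conditions u(0, t) = 1 and u(1, t) = sin(t)
assert u.subs(x, 0) == 1
assert sp.simplify(u.subs(x, 1) - sp.sin(t)) == 0
```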

Technique 2: Direct integration

In some cases, we can just integrate the right-hand side of the PDE. This is really an extension of Technique 1, applied to slightly less straightforward examples. The key thing to remember here is that instead of constants of integration, we always get functions of the “other” variables.

💪 Try it out
Find the general solution \(z(x,y)\) of \[\begin{aligned} \pdv{^2 z}{x \partial y} = 0. \end{aligned}\]

Answer:

Let’s integrate with respect to \(x\) first; the integral of zero is a constant, so \[\begin{aligned} \pdv{z}{y} = F(y) \end{aligned}\] where \(F(y)\) is our “constant of integration”, in this case an arbitrary function of \(y\).

Now when we integrate with respect to \(y\), we get \[\begin{aligned} z(x,y) & = \int^{y}F(y')dy'+g(x)\\ & = f(y)+g(x), \end{aligned}\] where \(f(y)\) is the integral of our arbitrary function \(F\), and \(g(x)\) is another arbitrary function. Since we didn’t know what \(F\) was, \(f\) can be any differentiable function. We need \(g\) to be differentiable too, or the second derivative \(\pdv{^2 z}{x \partial y}\) wouldn’t exist.

💪 Try it out
Find the general solution \(z(x,y)\) of \[\begin{aligned} \pdv{^2 z}{x \partial y} = x-y. \end{aligned}\]

Answer:

Integration with respect to \(x\), keeping \(y\) constant, gives \[\begin{aligned} \pdv{z}{y} = \frac{x^2}{2} - xy + G(y). \end{aligned}\]

Now integrating with respect to \(y\), we have \[\begin{aligned} z = \frac{1}{2} (x^2y - y^2x) + f(x) + g(y), \end{aligned}\] where \(f\) and \(g\) are again arbitrary differentiable functions.
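As before, sympy can verify this general solution, even with the arbitrary functions \(f\) and \(g\) left unspecified (sympy’s `Function` objects stand in for them):

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = sp.Function('f'), sp.Function('g')

# General solution with arbitrary differentiable functions f and g
z = sp.Rational(1, 2)*(x**2*y - y**2*x) + f(x) + g(y)

# The mixed second derivative should reproduce the right-hand side x - y
assert sp.simplify(sp.diff(z, x, y) - (x - y)) == 0
```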

Technique 3: When all the derivatives are in only one variable

If all the derivatives in the PDE are taken with respect to the same variable (say, \(x\)), then we can treat it “as though it were an ODE” in \(x\).

💪 Try it out
Find the general solution \(u(x,y)\) of \[\begin{aligned} \pdv{u}{x} + u = 2. \end{aligned}\]

Answer:

Let’s think about solving the ODE \[\begin{aligned} \odv{u}{x} + u = 2. \end{aligned}\] We’d see that the homogeneous version of the equation has solution \(u =e^{-x}\), and multiply through by the integrating factor \(e^x\), to get \[\begin{aligned} e^x \odv{u}{x} + e^x u &= 2 e^x \\ \odv{}{x} (u e^x) &= 2 e^x \\ u e^x &= 2e^x + C. \end{aligned}\]

We use the exact same logic here, but remember that our integration constants are now functions (in this case, of \(y\)): \[\begin{aligned} e^x \pdv{u}{x} + e^x u &= 2 e^x \\ \pdv{}{x} (u e^x) &= 2 e^x \\ u e^x &= 2e^x + f(y). \end{aligned}\]

So the general solution is \[\begin{aligned} u(x,y) = 2 + e^{-x} f(y). \end{aligned}\]
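A one-line sympy check confirms that this works for any function \(f\):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# General solution: u = 2 + e^{-x} f(y), with f an arbitrary function of y
u = 2 + sp.exp(-x)*f(y)

# Check that u_x + u = 2
assert sp.simplify(sp.diff(u, x) + u - 2) == 0
```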

In some cases, we can use a substitution such as \(p(x,y) = \pdv{u}{x}\) to turn a PDE into an ODE; then, once we’ve solved the ODE for \(p\), we just integrate it to find \(u\).

💪 Try it out
Find the general solution \(u(x,y)\) of \[\begin{aligned} u_{xx}+\frac{u_{x}}{x}=3x+4. \end{aligned}\]

Answer:

Let \(p(x,y) = u_x(x,y)\); then our equation is \(p_x + \frac{p}{x} = 3x + 4\), or, multiplying through by \(x\), \(x p_x + p = 3 x^2 + 4x\). Since \(x p_x + p = (xp)_x\), we can integrate straight away: \[\begin{aligned} \int (x p_x + p)\, dx &= \int (3 x^2 + 4x)\, dx + f(y)\\ x p & = x^3 + 2x^2 + f(y) \end{aligned}\] (remember that we’re using \(f(y)\) as a constant of integration here) so \[\begin{aligned} u_x = x^2 + 2x + \frac{1}{x} f(y). \end{aligned}\]

Now we integrate again: \[\begin{aligned} u(x,y) = \frac{x^3}{3} + x^2 + \ln(x) f(y) + g(y). \end{aligned}\]
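If you’d like to double-check this two-step integration, sympy will happily differentiate the answer back:

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # positive so that log(x) is real
y = sp.symbols('y')
f, g = sp.Function('f'), sp.Function('g')

# General solution from the substitution p = u_x
u = x**3/3 + x**2 + sp.log(x)*f(y) + g(y)

# Check u_xx + u_x / x = 3x + 4
residual = sp.diff(u, x, 2) + sp.diff(u, x)/x - (3*x + 4)
assert sp.simplify(residual) == 0
```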

💪 Try it out
Find the general solution \(u(x,y)\) of \[\begin{aligned} a(y)u_{xx}+b(y)u_{x}+c(y)u=0. \end{aligned}\]

Answer:

As long as there are no \(y\)-derivatives, we can simply extend our techniques for ODEs. That is, we will get an auxiliary equation, \[\begin{aligned} a\lambda^{2}+b\lambda+c=0, \end{aligned}\] where \(\lambda\) is now a function of \(y\). For example, \[\begin{aligned} u_{xx}-2yu_{x}+y^{2}u=0 \end{aligned}\] gives \[\begin{aligned} \lambda^{2}-2y\lambda+y^{2}=(\lambda-y)^{2}=0 \end{aligned}\] so \(\lambda = y\) and the general solution is \(u=(A+Bx)e^{yx}\).
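Differentiating the repeated-root solution by hand is a little fiddly, so here is a sympy sketch confirming it:

```python
import sympy as sp

x, y, A, B = sp.symbols('x y A B')

# Repeated root lambda = y gives u = (A + Bx) e^{yx}
u = (A + B*x)*sp.exp(y*x)

# Check u_xx - 2y u_x + y^2 u = 0
assert sp.simplify(sp.diff(u, x, 2) - 2*y*sp.diff(u, x) + y**2*u) == 0
```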

There are not many hard rules here: the standard ODE techniques work, until the moment that they don’t. Usually, if there are no \(y\)-derivatives, they will work.

Technique 4: Using a change of coordinates

Sometimes a change of coordinates can turn a PDE that looks difficult into one that looks much simpler. You can usually spot these when your equation looks like \[\begin{aligned} \pdv{u}{x} = c \pdv{u}{t}. \end{aligned}\] This will also work if you’ve got higher-order partial derivatives, as long as they’re not mixed, for example, \[\begin{aligned} \pdv{^3 u}{x^3} = c \pdv{^3 u}{t^3}. \end{aligned}\]

The idea here is to try writing \[\begin{aligned} u(x,t) = f(x + \alpha t). \end{aligned}\]

If you like, you can think of this as the composition of \(f\) with \(s(x,t) = x+\alpha t\). Then the chain rule tells us that \[\begin{aligned} \pdv{u}{x} &= \odv{f}{s} \pdv{s}{x} = f'(x+\alpha t) \\ \pdv{u}{t} &= \odv{f}{s} \pdv{s}{t} = \alpha f'(x+\alpha t). \end{aligned}\] When we look into the conditions we’d need to impose on \(f\) to make the equation work, we usually only find one or two (usually, we can work out the value \(\alpha\) needs to take, and we need \(f\) to be differentiable).

💪 Try it out
Find the general solution \(u(x,t)\) of \(3u_{x}-u_{t}=0.\)

Answer:

Let’s try the solution \(u(x,t) = f(x + \alpha t)\). Then \[\begin{aligned} \pdv{u}{x} &= f'(x+\alpha t) \\ \pdv{u}{t} &= \alpha f'(x+\alpha t). \end{aligned}\] Substituting these values into our PDE, we get \[\begin{aligned} (3-\alpha) f'(x+\alpha t) = 0. \end{aligned}\]

When \(\alpha=3\), \(u(x,t) = f(x+3t)\) is a solution to the PDE, whatever the choice of function \(f\).
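The nice thing about this technique is that the claim is easy to verify even with \(f\) left completely arbitrary; sympy’s chain rule does the work:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')

# Any differentiable f gives a solution u = f(x + 3t)
u = f(x + 3*t)

# Check 3 u_x - u_t = 0
assert sp.simplify(3*sp.diff(u, x) - sp.diff(u, t)) == 0
```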

Examples: The wave equation
The wave equation, as we met earlier in the chapter, is \[\begin{aligned} \pdv{^2 u}{x^2} = \frac{1}{c^2} \pdv{^2 u}{t^2}. \end{aligned}\] It describes, among other things, the movement of waves along a string.

Let’s try \(u(x,t) = f(x + \alpha t)\) as a solution: we have \[\begin{aligned} \pdv{u}{x} &= f'(x + \alpha t) \\ \pdv{u}{t} &= \alpha f'(x + \alpha t), \end{aligned}\] so \[\begin{aligned} \pdv{^2 u}{x^2} = f''(x + \alpha t), \\ \pdv{^2 u}{t^2} = \alpha^2 f''(x+\alpha t). \end{aligned}\] Substituting into the wave equation gives \(f'' = \frac{\alpha^2}{c^2} f''\), so we need \(\alpha^2 = c^2\): there are two different possible values for \(\alpha\), namely \(\alpha = c\) and \(\alpha = -c\). We get the general solution \[\begin{aligned} u(x,t) = f(x+ct) + g(x-ct), \end{aligned}\] where \(f\) and \(g\) are any twice-differentiable functions of one variable.

The solution to the wave equation in this form was first derived by d’Alembert, and it’s often called d’Alembert’s solution to the wave equation.
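We can check d’Alembert’s solution symbolically too, keeping both \(f\) and \(g\) arbitrary:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
f, g = sp.Function('f'), sp.Function('g')

# d'Alembert's solution: arbitrary twice-differentiable f and g
u = f(x + c*t) + g(x - c*t)

# Check the wave equation u_xx = (1/c^2) u_tt
assert sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2)/c**2) == 0
```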

What is actually happening in this solution? If we take \(f\) to be a function with a single “bump” in it, and look at \(u(x,t) = f(x+ct)\), we will see the profile of the “bump” moving to the left as \(t\) increases. Similarly, the contribution from \(g\) represents a bump moving to the right as time goes on.

You can see a nice visualisation of this here.

It’s pretty common to consider a solution to the wave equation on an interval of length \(L\): \[x \in [0,L].\] If we fix the ends, so that \(u(0,t) = u(L,t)=0\), then we get solutions of the form \[u=u_{0}\,\left[\sin\big(\frac{\pi}{L}(x+ct)\big)+\sin\big(\frac{\pi}{L}(x-ct)\big)\right].\] Using \(\sin(A+B)+\sin(A-B)=2\cos B\,\sin A\), we can even write this as \[u=2u_{0}\cos\left(\frac{\pi c}{L}t\right)\,\sin\left(\frac{\pi}{L}x\right),\] i.e. it is indeed a string oscillating in time and in space.
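The sum-to-product identity is the only step above that’s easy to get wrong; here is a sympy check of the rewriting:

```python
import sympy as sp

x, t, c, L, u0 = sp.symbols('x t c L u_0', positive=True)

# Sum of a left-moving and a right-moving sine wave...
lhs = u0*(sp.sin(sp.pi/L*(x + c*t)) + sp.sin(sp.pi/L*(x - c*t)))
# ...equals a standing wave, by sin(A+B) + sin(A-B) = 2 cos(B) sin(A)
rhs = 2*u0*sp.cos(sp.pi*c*t/L)*sp.sin(sp.pi*x/L)

assert sp.simplify(sp.expand_trig(sp.expand(lhs - rhs))) == 0
```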

4.3 Separation of variables: solving more general linear PDEs

The final technique we will learn is the method of separation of variables, which can be used to solve a more general class of linear PDEs. The basic idea is to convert the PDE into an ODE, which we can figure out how to solve.

The big idea here is to ask ourselves: if we can write \(u(x,t)\) as a product of a function depending only on \(x\) with a function depending only on \(t\), will that make this equation easier to solve? The answer, in many cases, is yes.

To explain the method of separation of variables, we use our old friend the wave equation3.

Examples: The wave equation again
Remember that the wave equation is \[\begin{aligned} \pdv{^2 u}{x^2} = \frac{1}{c^2} \pdv{^2 u}{t^2}. \end{aligned}\]

Before starting, let’s note that the equation is linear in \(u\); and thus if we have two solutions \(u_1(x,t)\), \(u_2(x,t)\) to the PDE then the sum of the two \[u(x,t) = u_1(x,t) + u_2(x,t)\] will also be a solution.

Let’s forget all about the “change of coordinates” solution for a moment, and make the radical assumption that we can write \(u\) in the form \[\begin{aligned} u(x,t) = X(x) T(t). \end{aligned}\]

Then \[\begin{aligned} \pdv{u}{x} = X'(x) T(t) && \text{so} && \pdv{^2 u}{x^2} = X''(x)T(t) \\ \pdv{u}{t} = X(x) T'(t) && \text{so} && \pdv{^2 u}{t^2} = X(x) T''(t). \end{aligned}\]

Then we can write the wave equation, \[\begin{aligned} \pdv{^2 u}{x^2} = \frac{1}{c^2} \pdv{^2 u}{t^2}, \end{aligned}\] in the form \[\begin{aligned} X''(x) T(t) = \frac{1}{c^2} X(x) T''(t), \end{aligned}\] or, even better, \[\begin{aligned} \frac{X''(x)}{X(x)} = \frac{1}{c^2} \frac{T''(t)}{T(t)}. \end{aligned}\] Notice that what we’ve got here is an equation in which the left-hand side depends only on \(x\), and the right-hand side depends only on \(t\). The only way this can possibly work is if they’re both equal to a constant! Let’s call it \(p\). Now we have two independent ODEs: \[\begin{aligned} X''(x) & = p X(x) \\ T''(t) & = c^2 p T(t). \end{aligned}\] (to be continued…)

ODE recap

Let’s remind ourselves how to solve \[X_{xx}(x)-pX(x)=0,\] where \(X=X(x)\) and \(p\) is a real number. The general solution depends on \(p\).

  1. If \(p=0\), we integrate as usual to find \[X=Ax+B\] where \(A,B\) are constant.

  2. If \(p=\alpha^{2}>0\), i.e. \(p\) is positive, the equation is linear with constant coefficients. So we have the auxiliary equation \[\lambda^2 - p = 0,\] and we find that \(\lambda=\pm\alpha\). The general solution is \[X=Ae^{\alpha x}+Be^{-\alpha x}\] where \(A,B\) are constant.

  3. If \(p=-k^{2}<0\), i.e. \(p\) is negative, again we can form the auxiliary equation, which is the same as above, except that now we find \(\lambda=\pm ik\). So the general solution is \[X=Ce^{ikx}+De^{-ikx}\] where \(C,D\) are constant. Alternatively we can write \[X=A\sin(kx)+B\cos(kx). \tag{4.1}\]

Note the difference in character between the case where \(p\) is positive and negative; in one case we get oscillatory solutions and in the other case we get exponentially growing and falling solutions.
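Both characters of solution are easy to verify directly (here with sympy, as a sanity check rather than part of the derivation):

```python
import sympy as sp

x, k, alpha = sp.symbols('x k alpha', positive=True)
A, B = sp.symbols('A B')

# Oscillatory case: p = -k^2 < 0
X_osc = A*sp.sin(k*x) + B*sp.cos(k*x)
assert sp.simplify(sp.diff(X_osc, x, 2) + k**2*X_osc) == 0

# Exponential case: p = alpha^2 > 0
X_exp = A*sp.exp(alpha*x) + B*sp.exp(-alpha*x)
assert sp.simplify(sp.diff(X_exp, x, 2) - alpha**2*X_exp) == 0
```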

💪 Try it out
Discuss possible solutions if we impose the boundary condition \(X(0) = X(L) = 0\).

Answer:

  1. When \(p=0\) we get \[\begin{aligned} A \cdot 0 + B &=0 \\ A \cdot L + B &=0, \end{aligned}\] so we must have \(A = B = 0\): the only solution satisfying the boundary condition is therefore \(X=0\).

  2. When \(p=\alpha^{2}>0\), now \(X=Ae^{\alpha x}+Be^{-\alpha x}\) and imposing the boundary condition gives \[\begin{aligned} A+B & = 0\\ Ae^{\alpha L}+Be^{-\alpha L} & = 0. \end{aligned}\] So \(A=-B\), and the second condition becomes \(2A\sinh(\alpha L)=0\). Since \(\sinh\) vanishes only at zero and \(\alpha L>0\), we must have \(A=B=0\). This means that the only solution is once again \(X=0\).

  3. When \(p=-k^{2}<0\), now \(X=A\cos(kx)+B\sin(kx)\) and imposing the boundary condition gives \[\begin{aligned} A & = 0\\ B\sin(kL) & = 0. \end{aligned}\] Now the second condition gives \(\sin(kL)=0\), which does have a nontrivial solution if \(k=\frac{n\pi}{L}\) for a positive integer \(n\). The solution is then \[\begin{aligned} X(x) = B\sin\left(\frac{n\pi}{L}x\right). \end{aligned}\]

So the point here is that only the third class of solution – where \(p\) is negative – gives us oscillatory solutions. This turns out to be important in what follows.

Let’s go back to the wave equation.

Examples: The wave equation, part 3
If we’re expecting solutions that describe the oscillations of a string, let’s set \(p=-k^2\). We then find \[X_{xx} = -k^2 X \qquad X(x) = A \sin(kx) + B \cos(kx),\] and \[T_{tt} = -k^2c^2 T \qquad T(t) = C \sin(ckt) + D \cos(ckt).\] Now we can assemble these to make a solution \(u(x,t) = X(x) T(t)\). We can adjust \(A,B,C,D\) – and take other linear combinations – depending on the boundary conditions and initial conditions of interest. It should be clear that you can get every possible combination of sines and cosines in \(x\) and \(t\): e.g. one possibility is \[u = \cos(c k t) \cos(kx) - \sin(c k t) \sin(k x) = \cos(k(x + ct))\] i.e. a traveling wave to the left! So this is the same sort of solution as from d’Alembert’s solution, here derived slightly differently.

Another example of a PDE where the method of separation of variables can help us is the diffusion equation.

Examples: The diffusion equation
The diffusion equation takes the form \[u_{xx} = \frac{1}{c} u_{t} \qquad u = u(x,t). \tag{4.2}\] It describes – for example – the diffusion of heat in one dimension, say along a metal rod occupying \(0 \le x \le 3\). To make this problem well-posed (i.e. so that it has a unique solution), we need to set boundary conditions; we will pick these to be \[u(x = 0, t) = 0 \qquad u(x = 3, t) = 0. \tag{4.3}\] (As an aside, these are called Dirichlet boundary conditions.) If you want to keep the heat interpretation, you could imagine that we are attaching both ends of the rod to giant blocks of ice that keep the temperature fixed and cold. We also need to specify initial conditions; in this case, we will take the following initial condition: \[u(x,t = 0) = 5 \sin\left(4\pi x\right).\] Now the problem is completely well-posed and we can solve it, i.e. we can find \(u(x,t)\) for all \(t > 0\) given the initial conditions above.

Once again, let’s assume that we can write \[u(x,t) = X(x) T(t),\] and insert this into the PDE Equation 4.2.

We find \[X_{xx}(x) T(t) = \frac{1}{c} X(x) T_{t}(t),\] or \[\frac{X_{xx}(x)}{X(x)} = \frac{1}{c} \frac{T_{t}(t)}{T(t)} = p.\]

Now our two independent ODEs are \[X_{xx}(x) = p X(x) \tag{4.4}\] and \[T_{t}(t) = c p T(t). \tag{4.5}\]

The three possible scenarios are:

  • \(p=0\). Then our solutions are of the form \(X(x) = Ax+B\) and \(T(t) = 1\).

  • \(p = \alpha^2>0\). Then our solutions are of the form \(X(x) = Ae^{\alpha x} + B e^{-\alpha x}\) and \(T(t) = e^{c\alpha^2 t}\).

  • \(p = -k^2 < 0\). Now our solutions are of the form \(X(x) = A \sin(kx) + B \cos(kx)\) and \(T(t) = e^{-ck^2 t}\).

Because this equation is describing the diffusion of heat through a solid, we do not expect \(u\) to go off to \(+\infty\) or \(-\infty\) as \(t \to \infty\). We should therefore not look at solutions with \(p>0\).

On the other hand, the boundary conditions, which require \(u\) to vanish at \(x=0\) and \(x=3\), imply that if \(p=0\), we must have \(B=0\) (from the \(x=0\) condition) and so \(A=0\) too (from the \(x=3\) condition). This just gives us the trivial solution \(u(x,t) = 0\).

To find interesting solutions, we must have \(p=-k^2 <0\), and so Equation 4.4 tells us that \[X(x) = A \sin (kx) + B \cos (kx).\] The boundary conditions imply that \(X(0) = X(3) = 0\), so we must have \[\begin{aligned} k = \frac{n \pi}{3}, \qquad B = 0, \qquad X(x) = A \sin \left(\frac{n \pi}{3} x\right). \end{aligned}\] Meanwhile, Equation 4.5 tells us that \[T_{t} = -c k^2 T(t),\] which implies that \[\begin{aligned} T(t) = T_0 e^{-c k^2 t}. \end{aligned}\]

Combining these parts, we find that the general solution to the diffusion equation, with boundary conditions \(u(0,t) = u(3,t)=0\), must have the form \[u_{n}(x,t) = X(x) T(t) = C \sin \left(\frac{n \pi}{3} x\right) e^{- c \left(\frac{\pi n}{3}\right)^2 t}.\] (Here the constant \(C\) has replaced \(A\) and \(T_0\); because we were multiplying two arbitrary constants together, we don’t need to write both of them.)
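With \(n\) kept as a symbolic integer, sympy confirms that every member of this family satisfies both the PDE and the boundary conditions:

```python
import sympy as sp

x, t = sp.symbols('x t')
c, C = sp.symbols('c C', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# The family of separated solutions u_n(x, t)
u_n = C*sp.sin(n*sp.pi*x/3)*sp.exp(-c*(n*sp.pi/3)**2*t)

# Check the diffusion equation u_xx = (1/c) u_t
assert sp.simplify(sp.diff(u_n, x, 2) - sp.diff(u_n, t)/c) == 0

# Check the boundary conditions u(0, t) = u(3, t) = 0
# (sin(n*pi) vanishes because n is declared an integer)
assert u_n.subs(x, 0) == 0
assert sp.simplify(u_n.subs(x, 3)) == 0
```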

Let’s recall what we have done: we have found a family of solutions to the diffusion equation, all of which satisfy the boundary conditions Equation 4.3. The solutions are each labeled by a single integer \(n\); they have oscillatory spatial profiles, and they each decay exponentially in time with a rate that is faster if the spatial profile is more wiggly. This is some solid analysis.

Now remember that we saw earlier that the sum of any two (or three, or four) solutions is also a solution. Thus the general solution to the PDE will take the form: \[u(x,t) = \sum_{n} a_n u_{n}(x,t) = \sum_{n} a_n \sin \left(\frac{n \pi}{3} x\right) e^{-c \left(\frac{\pi n }{3}\right)^2 t} \tag{4.6}\] where the \(a_n\)s are numbers that remain to be adjusted by the initial conditions.

Let’s revisit the initial condition: \[\begin{aligned} u(x,0) = 5 \sin \left( 4 \pi x \right). \end{aligned}\] Looking at this, we see that \(\sin\left(\frac{n\pi}{3}x\right) = \sin(4\pi x)\) exactly when \(n = 12\), so we should choose \(a_{12} = 5\), and \(a_i = 0\) for all the other \(i\)s.

Then the final version of the solution is \[\begin{aligned} u(x,t) = 5 \sin \left(4\pi x\right) e^{- 16 c \pi^2 t}. \end{aligned}\]
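As a final sanity check, the solution we’ve built satisfies the PDE, both boundary conditions, and the initial condition:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)

# The particular solution with n = 12, a_12 = 5
u = 5*sp.sin(4*sp.pi*x)*sp.exp(-16*c*sp.pi**2*t)

# PDE: u_xx = (1/c) u_t
assert sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t)/c) == 0

# Boundary conditions at x = 0 and x = 3
assert u.subs(x, 0) == 0
assert u.subs(x, 3) == 0

# Initial condition u(x, 0) = 5 sin(4 pi x)
assert sp.simplify(u.subs(t, 0) - 5*sp.sin(4*sp.pi*x)) == 0
```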

Now you may ask: how would we have done this if we did not have such a simple initial condition; for example, what if we had something like \(u(x,t=0) = x(x-3)\)? Then we wouldn’t have been able to just pick one of the \(a_i\)’s to be nonzero; instead, evaluating Equation 4.6 at \(t = 0\) we see that we would be searching for something like \[\sum_{n} a_n \sin\left(\frac{n \pi}{3} x\right) = x(x-3).\] This is exactly the sort of problem that Fourier series are meant to solve! I leave it to you to fill in the gaps here.
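If you want to check your own Fourier-series calculation for this initial condition, sympy can compute the sine coefficients \(a_n = \frac{2}{3}\int_0^3 x(x-3)\sin\left(\frac{n\pi}{3}x\right)\,dx\) exactly (the formula for \(a_n\) here is the standard Fourier sine coefficient, not something derived in these notes):

```python
import sympy as sp

x = sp.symbols('x')
L = 3

# Fourier sine coefficients of x(x - 3) on [0, 3]:
# a_n = (2/L) * integral_0^L x(x - L) sin(n pi x / L) dx
coeffs = [sp.Rational(2, L)*sp.integrate(x*(x - L)*sp.sin(n*sp.pi*x/L), (x, 0, L))
          for n in range(1, 5)]

# Even-n coefficients vanish; odd-n coefficients are -72 / (n^3 pi^3)
assert sp.simplify(coeffs[1]) == 0                        # n = 2
assert sp.simplify(coeffs[0] + 72/sp.pi**3) == 0          # n = 1
assert sp.simplify(coeffs[2] + 72/(27*sp.pi**3)) == 0     # n = 3
```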


  1. Ok, \(u(t,x)\) clearly has two variables, not one; to be completely precise, this is the wave equation in one spatial dimension.↩︎

  2. Ignore the screams of mathematicians and just press on.↩︎

  3. This is the last time we’ll talk about it, I promise↩︎