Examples of PDEs.
The Schrödinger equation: \[i \hbar \Psi_t = - \frac{\hbar^2}{2m} \Delta \Psi + V \Psi.\] This is a linear second-order PDE.
The \(p\)–Laplace equation: \[\mathrm{div} (|\nabla u|^{p-2} \nabla u) = 0.\] If \(p=2\) this is just Laplace’s equation and therefore a linear second-order PDE. If \(p \in (2,\infty)\), then this is a quasilinear second-order PDE. One way of seeing this is to expand the divergence on the left-hand side. By using the vector identity \[\mathrm{div} (\varphi \boldsymbol{f}) = \nabla \varphi \cdot \boldsymbol{f}+ \varphi \, \mathrm{div} \boldsymbol{f}\] we obtain \[\mathrm{div} (|\nabla u|^{p-2} \nabla u) = \nabla (|\nabla u|^{p-2}) \cdot \nabla u + |\nabla u|^{p-2} \mathrm{div} \nabla u.\] Recall that \(\nabla |\boldsymbol{x}| = \frac{\boldsymbol{x}}{|\boldsymbol{x}|}\). By the Chain Rule, the \(i\)–th component of \(\nabla (|\nabla u|^{p-2})\) is \[\frac{\partial}{\partial x_i} |\nabla u|^{p-2} = (p-2) |\nabla u|^{p-3} \sum_{j=1}^n \frac{u_{x_j}}{|\nabla u|} \frac{\partial u_{x_j}}{\partial x_i} = (p-2) |\nabla u|^{p-4} \sum_{j=1}^n u_{x_i x_j} u_{x_j}.\] In vector notation: \[\nabla (|\nabla u|^{p-2}) = (p-2) |\nabla u|^{p-4} D^2u \, \nabla u\] where \(D^2 u\) is the matrix of second partial derivatives of \(u\), which has components \([D^2 u]_{ij} = u_{x_i x_j}\). Therefore the \(p\)–Laplace equation is \[(p-2) |\nabla u|^{p-4} \nabla u \cdot D^2u \, \nabla u + |\nabla u|^{p-2} \Delta u = 0,\] which is clearly quasilinear and second-order.
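The expanded form of the \(p\)–Laplace equation can be sanity-checked symbolically. The following Python/SymPy sketch verifies the identity in two dimensions for \(p=4\) and a polynomial test function; both choices are made purely for illustration.

```python
import sympy as sp

# Check div(|grad u|^(p-2) grad u) = (p-2)|grad u|^(p-4) grad u . D2u grad u
#                                     + |grad u|^(p-2) Lap u
# in 2D, for p = 4 and an arbitrarily chosen polynomial test function.
x, y = sp.symbols('x y')
p = 4
u = x**3 + x*y**2

ux, uy = sp.diff(u, x), sp.diff(u, y)
uxx, uxy, uyy = sp.diff(u, x, 2), sp.diff(u, x, y), sp.diff(u, y, 2)
grad2 = ux**2 + uy**2                                 # |grad u|^2

# Left-hand side: div(|grad u|^(p-2) grad u)
lhs = (sp.diff(grad2**sp.Rational(p - 2, 2)*ux, x)
       + sp.diff(grad2**sp.Rational(p - 2, 2)*uy, y))

# Right-hand side: the expanded quasilinear form
quad = ux*(uxx*ux + uxy*uy) + uy*(uxy*ux + uyy*uy)    # grad u . D2u grad u
rhs = ((p - 2)*grad2**sp.Rational(p - 4, 2)*quad
       + grad2**sp.Rational(p - 2, 2)*(uxx + uyy))

print(sp.simplify(lhs - rhs))  # 0
```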
The Cahn–Hilliard equation: \[u_t = k \Delta (u^3 - u - \varepsilon^2 \Delta u).\] This is a semilinear fourth-order PDE. If you can’t see this immediately, then expand the right-hand side of the PDE to obtain \[u_t = k \Delta(u^3) - k \Delta u - k\varepsilon^2 \Delta^2 u\] where \(\Delta^2 u = \Delta (\Delta u)\) is the bilaplacian of \(u\). You should already be able to see that this is a semilinear fourth-order PDE but, if not, then expand the right-hand side further to obtain \[u_t = 6 ku |\nabla u|^2 + 3 ku^2 \Delta u - k \Delta u - k\varepsilon^2 \left( \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^4 u}{\partial x_i^2 \partial x_j^2} \right)\] where we have used that \(\nabla (u^3) = 3 u^2 \nabla u\) and so \[\Delta(u^3)=\mathrm{div} \, \nabla (u^3) = \mathrm{div}(3 u^2 \nabla u) = \nabla(3 u^2) \cdot \nabla u + 3 u^2 \mathrm{div} \, \nabla u = 6 u |\nabla u|^2 + 3 u^2 \Delta u.\]
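The identity \(\Delta(u^3) = 6u|\nabla u|^2 + 3u^2 \Delta u\) used above holds for any smooth \(u\); a quick symbolic check in SymPy (with an arbitrarily chosen two-dimensional test function) confirms it:

```python
import sympy as sp

# Check Lap(u^3) = 6 u |grad u|^2 + 3 u^2 Lap(u) for a sample u(x, y).
x, y = sp.symbols('x y')
u = x**2*y + y**3                      # test function chosen for illustration only

lap = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)   # 2D Laplacian
grad2 = sp.diff(u, x)**2 + sp.diff(u, y)**2           # |grad u|^2

lhs = lap(u**3)
rhs = 6*u*grad2 + 3*u**2*lap(u)
print(sp.expand(lhs - rhs))  # 0
```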
The nonlinear wave equation: \[\rho \, \boldsymbol{r}_{tt} = \frac{\partial}{\partial x} \left( N(|\boldsymbol{r}_x|) \frac{\boldsymbol{r}_x}{|\boldsymbol{r}_x|} \right) + \boldsymbol{f}.\] Expanding the right-hand side gives \[\begin{aligned} \rho \, \boldsymbol{r}_{tt} & = N'(|\boldsymbol{r}_x|) \frac{\partial |\boldsymbol{r}_x|}{\partial x} \frac{\boldsymbol{r}_x}{|\boldsymbol{r}_x|} + N(|\boldsymbol{r}_x|) \frac{\boldsymbol{r}_{xx}}{|\boldsymbol{r}_x|} - N(|\boldsymbol{r}_x|)\frac{\boldsymbol{r}_{x}}{|\boldsymbol{r}_x|^2} \frac{\partial |\boldsymbol{r}_x|}{\partial x} + \boldsymbol{f} \\ & = N'(|\boldsymbol{r}_x|) \left( \frac{\boldsymbol{r}_x}{|\boldsymbol{r}_x|} \cdot \boldsymbol{r}_{xx} \right) \frac{\boldsymbol{r}_x}{|\boldsymbol{r}_x|} + N(|\boldsymbol{r}_x|) \frac{\boldsymbol{r}_{xx}}{|\boldsymbol{r}_x|} - N(|\boldsymbol{r}_x|)\frac{\boldsymbol{r}_{x}}{|\boldsymbol{r}_x|^2} \left( \frac{\boldsymbol{r}_x}{|\boldsymbol{r}_x|} \cdot \boldsymbol{r}_{xx} \right) + \boldsymbol{f}. \end{aligned}\] The highest derivatives \(\boldsymbol{r}_{tt}\) and \(\boldsymbol{r}_{xx}\) appear linearly and their coefficients only depend on lower order derivatives. Therefore the nonlinear wave equation is a quasilinear second-order system of PDEs.
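The expansion above uses the identity \(\frac{\partial}{\partial x}|\boldsymbol{r}_x| = \frac{\boldsymbol{r}_x}{|\boldsymbol{r}_x|} \cdot \boldsymbol{r}_{xx}\). This can be verified symbolically; the curve \(\boldsymbol{r}\) below is an arbitrary smooth choice for illustration:

```python
import sympy as sp

# Check d/dx |r_x| = (r_x . r_xx) / |r_x| for a sample curve r(x, t).
x, t = sp.symbols('x t')
r = sp.Matrix([x + t*sp.sin(x), t*x**2])   # arbitrary smooth test curve

rx, rxx = sp.diff(r, x), sp.diff(r, x, 2)
norm = sp.sqrt(rx.dot(rx))                 # |r_x|
print(sp.simplify(sp.diff(norm, x) - rx.dot(rxx)/norm))  # 0
```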
Characterisation of PDEs.
General form of a second-order scalar PDE in two independent variables: \[F(x,y,u,u_x,u_y,u_{xx},u_{xy},u_{yy})=0\] or in vector notation \[F(\boldsymbol{x},u,\nabla u,D^2 u) = 0\] where \(F:\Omega\times\mathbb{R}\times\mathbb{R}^2\times\mathbb{R}^4\to\mathbb{R}\) is a given function, \(\Omega\subseteq\mathbb{R}^2\) is a given domain and \(u:\Omega\to\mathbb{R}\) stands for the unknown function.
General form of a linear, second-order scalar PDE in two independent variables: \[a_{11}(x,y)u_{xx} + 2a_{12}(x,y)u_{xy} + a_{22}(x,y)u_{yy} + b_1(x,y) u_x + b_2(x,y) u_y + c(x,y)u + d(x,y) = 0.\] The factor of \(2\) is just so that we can write this equation more compactly in vector notation as \[A(x,y) : D^2 u + \boldsymbol{b}(x,y) \cdot \nabla u + c(x,y)u + d(x,y)=0\] where \(A:\Omega\to\mathbb{R}^{2\times 2}\) (a symmetric matrix valued function), \(\boldsymbol{b}:\Omega\to\mathbb{R}^2\) and \(c,d:\Omega\to\mathbb{R}\) are given, \(\Omega\subseteq\mathbb{R}^2\) is a given domain and \(u:\Omega\to\mathbb{R}\) stands for the unknown function.
General form of a semilinear, second-order scalar PDE in two independent variables: \[a_{11}(x,y)u_{xx} + 2a_{12}(x,y)u_{xy} + a_{22}(x,y)u_{yy} + b(x,y,u,u_x,u_y) = 0\] or in vector notation \[A(x,y) : D^2 u + b(x,y,u,\nabla u) = 0\] where \(A:\Omega\to\mathbb{R}^{2\times 2}\) (a symmetric matrix valued function) and \(b:\Omega\times\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}\) are given, \(\Omega\subseteq\mathbb{R}^2\) is a given domain and \(u:\Omega\to\mathbb{R}\) stands for the unknown function.
General form of a quasilinear, second-order scalar PDE in two independent variables: \[a_{11}(x,y,u,u_x,u_y)u_{xx} + 2a_{12}(x,y,u,u_x,u_y)u_{xy} + a_{22}(x,y,u,u_x,u_y)u_{yy} +b(x,y,u,u_x,u_y) = 0\] or in vector notation \[A(x,y,u,\nabla u) : D^2 u + b(x,y,u,\nabla u) = 0\] where \(A:\Omega\times\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}^{2\times 2}\) (a symmetric matrix valued function) and \(b:\Omega\times\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}\) are given, \(\Omega\subseteq\mathbb{R}^2\) is a given domain and \(u:\Omega\to\mathbb{R}\) stands for the unknown function.
The transport equation: Derivation of the travelling wave solution.
Observe that \[\nabla u \cdot \begin{pmatrix} c \\ 1 \end{pmatrix} = \begin{pmatrix} u_x \\ u_t \end{pmatrix} \cdot \begin{pmatrix} c \\ 1 \end{pmatrix} = c u_x + u_t.\] Therefore \[u_t + c u_x = 0 \quad \Longleftrightarrow \quad \nabla u \cdot \begin{pmatrix} c \\ 1 \end{pmatrix} = 0.\]
It follows that \(u\) is constant in the direction \((c,1)^\mathrm{T}\). In particular, for each \(x_0 \in \mathbb{R}\), \(u\) is constant on the line that passes through \((x,t)=(x_0,0)\) and has direction \((c,1)^\mathrm{T}\). Points on this line have the form \[\begin{pmatrix} x \\ t \end{pmatrix} = \begin{pmatrix} x_0 \\ 0 \end{pmatrix} + \lambda \begin{pmatrix} c \\ 1 \end{pmatrix} = \begin{pmatrix} x_0 + c \lambda \\ \lambda \end{pmatrix} , \quad \lambda \in \mathbb{R}.\] Since \(u\) is constant on this line, \(u(x_0 + c \lambda, \lambda)\) is independent of \(\lambda\). In particular, we obtain the same value if we take \(\lambda=t\) and \(\lambda = 0\): \[u(x_0 + c t, t) = u (x_0 + c \cdot 0,0) = u(x_0,0) = g(x_0)\] by the initial condition. Therefore \[u(x_0 + c t, t) = g(x_0).\] Making the change of variables \(x = x_0 + c t\) gives \[u(x,t) = g(x-ct)\] as required.
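The conclusion can be checked directly: for any differentiable profile \(g\), the travelling wave \(u(x,t)=g(x-ct)\) satisfies the transport equation. SymPy verifies this with a generic unevaluated function:

```python
import sympy as sp

# Verify that u(x, t) = g(x - c t) satisfies u_t + c u_x = 0 for generic g.
x, t, c = sp.symbols('x t c')
g = sp.Function('g')                   # generic differentiable profile
u = g(x - c*t)

expr = sp.diff(u, t) + c*sp.diff(u, x)
print(sp.simplify(expr))  # 0
```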
The transport equation on \(\mathbb{R}^n\). The function \[u(\boldsymbol{x},t) = g(\boldsymbol{x}- \boldsymbol{c}t)\] satisfies the transport equation \[\begin{aligned} u_t + \boldsymbol{c}\cdot \nabla u = 0 \quad & \textrm{for } (\boldsymbol{x},t) \in \mathbb{R}^n \times (0,\infty), \\ u(\boldsymbol{x},0) = g(\boldsymbol{x}) \quad & \textrm{for } \boldsymbol{x} \in \mathbb{R}^n. \end{aligned}\]
The transport equation with boundary conditions.
We want to solve the initial-boundary value problem \[\begin{aligned} u_t + 4 u_x = 0 \quad & \textrm{for } (x,t) \in (0,\infty) \times (0,\infty), \\ u(x,0) = 0 \quad & \textrm{for } x \in (0,\infty), \\ u(0,t) = t^2 e^{-t} \quad & \textrm{for } t \in [0,\infty). \end{aligned}\] Method 1: Use physical intuition to guess the solution. We know that the transport equation has the property of transporting information with velocity \(c\). In this case \(c=4\) and so information is transported to the right with speed \(4\). Initially \(u\) is zero everywhere. Fix a point \(x>0\). The boundary data \(u(0,t)=t^2 e^{-t}\) is transported to the right with speed \(4\) and so will take \(x/4\) amount of time to reach the point \(x\) from the boundary point \(0\) (recall that time = distance/speed). Therefore \(u(x,t)=0\) up until time \(t=x/4\).
From this point in time onwards, the value of \(u\) will be determined by the boundary data. The value of \(u\) at point \(x\) at time \(t\) will be the same as the value of \(u\) at the boundary point \(0\) at time \(t-x/4\), i.e., \(x/4\) amount of time in the past. Therefore \(u(x,t)=u(0,t-x/4) = (t-x/4)^2 \exp(-(t-x/4))\). We have argued that \[u(x,t) = \left\{ \begin{array}{cl} 0 & t < \frac{x}{4}, \\ (t-\frac{x}{4})^2 e^{-(t - \frac{x}{4})} & t \ge \frac{x}{4}. \end{array} \right.\] We can rewrite this as \[\begin{aligned} u(x,t) & = \left\{ \begin{array}{cl} 0 & x-4t > 0, \\ \left(\frac{x-4t}{4} \right)^2 e^{\frac14 (x-4t)} & x - 4t \le 0, \end{array} \right. \\ & = f(x-4t) \end{aligned}\]
where \[f(y) = \left\{ \begin{array}{cl} 0 & y > 0, \\ \left( \frac{y}{4} \right)^2 e^{\frac y4 } & y \le 0. \end{array} \right.\] Note that \(f\) is continuously differentiable since the left and right limits of \(f\) and \(f'\) at 0 agree: \(f(0^+)=f(0^-)=0\), \(f'(0^+)=f'(0^-)=0\). (This is not the case for the original problem in Shearer and Levy (2015).) Therefore \(u\) is also continuously differentiable. It is easy to check that \(u\) satisfies the PDE.
Method 2: Seek a travelling wave solution of the form \[u(x,t) = f(x-4t)\] and use the boundary and initial conditions to find \(f\). Clearly \(u\) satisfies \(u_t + 4 u_x = 0\). The initial condition \(u(x,0)=0\) for \(x >0\) implies that \[f(x) = 0 \, \textrm{ for } x > 0.\] The boundary condition \(u(0,t) = t^2 e^{-t}\) for \(t \ge 0\) implies that \[f(-4t) = t^2 e^{-t} \, \textrm{ for } t \ge 0 \quad \Longleftrightarrow \quad f(x) = \left(\frac{x}{4} \right)^2 e^{\frac x4} \, \textrm{ for } x \le 0.\] Therefore \[f(x) = \left\{ \begin{array}{cl} 0 & x > 0, \\ \left( \frac{x}{4} \right)^2 e^{\frac x4 } & x \le 0. \end{array} \right.\] as we found using Method 1.
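The claims above can be confirmed symbolically: on the nontrivial branch \(x - 4t \le 0\) the function satisfies the PDE, and the two branches of \(f\) match to first order at \(0\), so \(f\) is continuously differentiable:

```python
import sympy as sp

# Nontrivial branch of the travelling-wave solution (where x - 4t <= 0).
x, t, y = sp.symbols('x t y')
u = ((x - 4*t)/4)**2 * sp.exp((x - 4*t)/4)
pde = sp.diff(u, t) + 4*sp.diff(u, x)
print(sp.simplify(pde))  # 0

# C^1 matching of the profile f at y = 0: f(0) = f'(0) = 0.
f = (y/4)**2 * sp.exp(y/4)
print(f.subs(y, 0), sp.diff(f, y).subs(y, 0))  # 0 0
```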
Now we seek a solution of the form \(u(x,t)=f(x+4t)\) of \[\begin{aligned} u_t - 4 u_x = 0 \quad & \textrm{for } (x,t) \in (0,\infty) \times (0,\infty), \\ u(x,0) = 0 \quad & \textrm{for } x \in (0,\infty), \\ u(0,t) = t^2 e^{-t} \quad & \textrm{for } t \in [0,\infty). \end{aligned}\] Clearly \(u\) satisfies \(u_t - 4 u_x = 0\). The initial condition \(u(x,0)=0\) for \(x >0\) implies that \[f(x) = 0 \, \textrm{ for } x > 0.\] The boundary condition \(u(0,t) = t^2 e^{-t}\) for \(t \ge 0\) implies that \[f(4t) = t^2 e^{-t} \, \textrm{ for } t \ge 0 \quad \Longleftrightarrow \quad f(x) = \left( \frac{x}{4} \right)^2 e^{-\frac x4} \, \textrm{ for } x \ge 0.\] We have arrived at the contradiction \(f(x)=0\) and \(f(x) = (x/4)^2 \exp(-x/4)\) for \(x \ge 0\). This also has a physical explanation: In this case, since \(c=-4\), information is transported to the left with speed \(4\), i.e., towards the boundary. The boundary \(x=0\) is an outflow boundary, whereas in part (i) the boundary \(x=0\) was an inflow boundary. This means that the initial data \(u=0\) is transported to the boundary. But this is not compatible with the boundary condition \(u(0,t) = t^2 e^{-t} \ne 0\) for \(t>0\).
The heat equation on the real line. Define \(u:\mathbb{R} \times (0,\infty) \to \mathbb{R}\) by \[u(x,t) = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy.\]
By bringing the partial derivatives inside the integral and using the product rule and chain rule, we compute
\[\begin{aligned} u_t & = - \frac{1}{2t} \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy + \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} \tfrac{(x-y)^2}{4kt^2} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy, \\ u_x & = - \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} \tfrac{2(x-y)}{4kt} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy, \\ u_{xx} & = - \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} \tfrac{1}{2kt} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy + \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} \left( \tfrac{2(x-y)}{4kt} \right)^2 e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy \\ & = - \frac{1}{2kt} \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy + \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} \tfrac{(x-y)^2}{4k^2t^2} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy \\ & = \frac{1}{k} u_t. \end{aligned}\] Therefore \(u\) satisfies \(u_t = k u_{xx}\) for \(x \in \mathbb{R}\), \(t>0\). To make this proof completely rigorous, you should prove that the partial derivatives can be brought inside the integral, but this goes beyond the scope of this course.
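An equivalent (and quicker) way to organise this computation is to check that the heat kernel itself satisfies the heat equation; differentiating under the integral then transfers this to \(u\). SymPy confirms it:

```python
import sympy as sp

# Verify that the heat kernel Phi(x, t) satisfies Phi_t = k Phi_xx.
x, t, k = sp.symbols('x t k', positive=True)
Phi = sp.exp(-x**2/(4*k*t)) / sp.sqrt(4*sp.pi*k*t)

print(sp.simplify(sp.diff(Phi, t) - k*sp.diff(Phi, x, 2)))  # 0
```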
Substituting \(g(x)=u_0\) into the integral formula for \(u\) gives \[\begin{aligned} u(x,t) & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{- \frac{(x-y)^2}{4kt}} u_0 \, dy \\ & = \frac{u_0}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{-z^2} \sqrt{4 k t} \, dz & \textrm{(by substituting } z = (y-x)/\sqrt{4kt} \textrm{)} \\ & = \frac{u_0}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-z^2} \, dz \\ & = u_0 \end{aligned}\] since the integral on the right-hand side is the Gaussian integral. Therefore \(u(x,t)=u_0\).
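The substitution computation amounts to the fact that the heat kernel has unit mass in \(y\) for every \(x\) and \(t>0\). SymPy can evaluate this integral directly (it should return \(1\)):

```python
import sympy as sp

# The heat kernel integrates to 1 over y, for every x and t > 0.
x = sp.symbols('x', real=True)
y, t, k = sp.symbols('y t k', positive=True)
Phi = sp.exp(-(x - y)**2/(4*k*t)) / sp.sqrt(4*sp.pi*k*t)

total = sp.integrate(Phi, (y, -sp.oo, sp.oo))
print(sp.simplify(total))
```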
This answer should not come as a surprise since clearly \(u=u_0\) satisfies \(u_t = k u_{xx}\) and \(u(x,0)=u_0\). Also, thinking in physical terms, a metal bar at constant initial temperature \(g(x)=u_0\) remains at this temperature if there are no external heat sources or sinks; every point is already at the same temperature and so there is no diffusion of heat.
The fact that the integral of the Gaussian over \(\mathbb{R}\) is \(\sqrt{\pi}\) can be proved using polar coordinates as follows: \[\begin{aligned} \left( \int_{-\infty}^{\infty} e^{-z^2} \, dz \right)^2 & = \left( \int_{-\infty}^{\infty} e^{-x^2} \, dx \right) \left( \int_{-\infty}^{\infty} e^{-y^2} \, dy \right) \\ & = \int_{-\infty}^\infty \! \int_{-\infty}^\infty e^{-(x^2+y^2)} \, dxdy \\ & = \int_{\mathbb{R}^2} e^{-|\boldsymbol{x}|^2} \, d \boldsymbol{x} \\ & = \int_0^{2 \pi} \int_0^{\infty} e^{-r^2} r \, dr d \theta & \textrm{(using polar coordinates)} \\ & = 2 \pi \left. \left(-\tfrac 12 \right) e^{-r^2} \right|_{0}^{\infty} \\ & = \pi. \end{aligned}\]
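The value of the Gaussian integral can also be confirmed with a computer algebra system:

```python
import sympy as sp

# The Gaussian integral over the real line equals sqrt(pi).
z = sp.symbols('z')
gauss = sp.integrate(sp.exp(-z**2), (z, -sp.oo, sp.oo))
print(gauss)  # sqrt(pi)
```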
We estimate \[\begin{aligned} | u(x,t) | & = \left| \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{- \frac{(x-y)^2}{4kt}} g(y) \, dy \right| \\ & \le \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} e^{- \frac{(x-y)^2}{4kt}} |g(y)| \, dy \\ & \le \frac{1}{\sqrt{4 \pi k t}} \, \sup_{y \in \mathbb{R}} \, e^{- \frac{(x-y)^2}{4kt}} \, \int_{-\infty}^{\infty} |g(y)| \, dy & (\textrm{since } g \in L^1(\mathbb{R})) \\ & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^{\infty} |g(y)| \, dy & \Big( \textrm{since } \sup_{y \in \mathbb{R}} \, e^{- \frac{(x-y)^2}{4kt}} = \sup_{z \in \mathbb{R}} e^{-z^2} = 1 \Big) \\ & = \frac{1}{\sqrt{4 \pi k t}} \, \| g \|_{L^1(\mathbb{R})}. \end{aligned}\] We have shown that \[|u(x,t)| \le \frac{1}{\sqrt{4 \pi k t}} \, \| g \|_{L^1(\mathbb{R})} \to 0 \quad \textrm{as } t \to \infty.\] Therefore, for each \(x \in \mathbb{R}\), \(u(x,t) \to 0\) as \(t \to \infty\), as desired. This does not contradict part (ii) since \(g \notin L^1(\mathbb{R})\) if \(g(x) = u_0\) and \(u_0\) is a nonzero constant.
Interchanging the order of integration (which is allowed by Tonelli’s Theorem) gives \[\begin{aligned} \int_{-\infty}^\infty u(x,t) \, dx & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-\frac{(x-y)^2}{4kt}} g(y) \, dy \right) \, dx \\ & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-\frac{(x-y)^2}{4kt}} g(y) \, dx \right) \, dy \\ & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-\frac{(x-y)^2}{4kt}} \, dx \right) g(y) \, dy \\ & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-\frac{z^2}{4kt}} \, dz \right) g(y) dy & (z=x-y) \\ & = \frac{1}{\sqrt{4 \pi k t}} \int_{-\infty}^\infty \sqrt{4 \pi k t} \, g(y) \, dy \\ & = \int_{-\infty}^\infty g(y) \, dy \end{aligned}\] as required. We used the following formula for the integral of the Gaussian: \[\int_{-\infty}^{\infty} e^{-a z^2} \, dz = \sqrt{\frac{\pi}{a}}.\]
Revision of vector calculus and integration by parts in many variables.
By the definition of the divergence operator and the product rule \[\begin{aligned} \mathrm{div} (\varphi \boldsymbol{f}) & = \sum_{i=1}^n \frac{\partial}{\partial x_i} (\varphi \boldsymbol{f})_i \\ & = \sum_{i=1}^n \frac{\partial}{\partial x_i} (\varphi f_i) \\ & = \sum_{i=1}^n \left( \frac{\partial \varphi}{\partial x_i} f_i + \varphi \frac{\partial f_i}{\partial x_i} \right) \\ & = \nabla \varphi \cdot \boldsymbol{f}+ \varphi \, \mathrm{div} \boldsymbol{f}. \end{aligned}\]
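The product rule for the divergence can be checked symbolically in, say, three dimensions; \(\varphi\) and \(\boldsymbol{f}\) below are arbitrary smooth test choices:

```python
import sympy as sp

# Check div(phi f) = grad(phi) . f + phi div(f) in 3D for sample phi, f.
x, y, z = sp.symbols('x y z')
phi = x*y*z
F = sp.Matrix([x**2, sp.sin(y), x*z])      # arbitrary smooth vector field

div = lambda v: sum(sp.diff(v[i], s) for i, s in enumerate((x, y, z)))
grad = lambda w: sp.Matrix([sp.diff(w, s) for s in (x, y, z)])

lhs = div(phi*F)
rhs = grad(phi).dot(F) + phi*div(F)
print(sp.simplify(lhs - rhs))  # 0
```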
Integrate the identity \(\nabla \varphi \cdot \boldsymbol{f}+ \varphi \, \mathrm{div} \boldsymbol{f}= \mathrm{div} (\varphi \boldsymbol{f})\) over \(\Omega\) to obtain \[\int_{\Omega} \left( \nabla \varphi \cdot \boldsymbol{f}+ \varphi \, \mathrm{div} \boldsymbol{f} \right) \, d \boldsymbol{x}= \int_{\Omega} \mathrm{div} (\varphi \boldsymbol{f})\, d \boldsymbol{x} = \int_{\partial \Omega} \varphi \, \boldsymbol{f}\cdot \boldsymbol{n}\, dS\] by the Divergence Theorem. Rearranging this equation gives the desired result: \[\begin{equation} \label{eq:IP} \int_{\Omega} \varphi \, \mathrm{div} \boldsymbol{f}\, d \boldsymbol{x} = \int_{\partial \Omega} \varphi \, \boldsymbol{f}\cdot \boldsymbol{n}\, dS - \int_{\Omega} \nabla \varphi \cdot \boldsymbol{f}\, d \boldsymbol{x}. \end{equation}\]
By definition \[\mathrm{div} \, \nabla u = \sum_{i=1}^n \frac{\partial}{\partial x_i} (\nabla u)_i = \sum_{i=1}^n \frac{\partial}{\partial x_i} \frac{\partial u}{\partial x_i} = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2} = \Delta u.\]
Substituting \(\boldsymbol{f}=\nabla u\) into equation \(\eqref{eq:IP}\) and using that div\(\, \nabla u=\Delta u\) gives \[\int_{\Omega} \varphi \Delta u \, d \boldsymbol{x}= \int_{\partial \Omega} \varphi \, \nabla u \cdot \boldsymbol{n}\, dS - \int_{\Omega} \nabla u \cdot \nabla \varphi \, d \boldsymbol{x}\] as required.
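Green's identity can be tested on a concrete domain. The sketch below verifies it on the unit square \(\Omega=(0,1)^2\) with polynomial choices of \(u\) and \(\varphi\) (chosen for illustration only); the boundary integral is assembled edge by edge using the outward normals \((\pm 1,0)\), \((0,\pm 1)\):

```python
import sympy as sp

# Verify int_O phi Lap(u) = int_dO phi du/dn - int_O grad(u).grad(phi)
# on the unit square, for sample polynomial u and phi.
x, y = sp.symbols('x y')
u = x**2 + y**2
phi = x*y

lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
lhs = sp.integrate(phi*lap_u, (x, 0, 1), (y, 0, 1))

interior = sp.integrate(sp.diff(u, x)*sp.diff(phi, x)
                        + sp.diff(u, y)*sp.diff(phi, y), (x, 0, 1), (y, 0, 1))

# Boundary integral over the four edges (outward normals (1,0),(-1,0),(0,1),(0,-1)).
bdry = (sp.integrate((phi*sp.diff(u, x)).subs(x, 1), (y, 0, 1))
        - sp.integrate((phi*sp.diff(u, x)).subs(x, 0), (y, 0, 1))
        + sp.integrate((phi*sp.diff(u, y)).subs(y, 1), (x, 0, 1))
        - sp.integrate((phi*sp.diff(u, y)).subs(y, 0), (x, 0, 1)))

print(lhs, bdry - interior)  # equal
```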
Poisson’s equation with Neumann boundary conditions. Consider Poisson’s equation with Neumann boundary conditions: \[\begin{align} \label{eq:PDE} - \Delta u = f \quad & \textrm{in } \Omega, \\ \label{eq:BC} \nabla u \cdot \boldsymbol{n}= g \quad & \textrm{on } \partial \Omega. \end{align}\]
Suppose that there exists a function \(u\) satisfying \(\eqref{eq:PDE}\), \(\eqref{eq:BC}\). Then we can integrate \(\eqref{eq:PDE}\) to obtain \[\begin{aligned} - \int_{\Omega} \Delta u \, d \boldsymbol{x}= \int_{\Omega} f \, d \boldsymbol{x}\quad & \Longleftrightarrow \quad - \int_{\Omega} \textrm{div} \nabla u \, d \boldsymbol{x}= \int_{\Omega} f \, d \boldsymbol{x} \quad & \textrm{(since } \Delta u = \textrm{div} \nabla u\textrm{)} \\ \quad & \Longleftrightarrow \quad -\int_{\partial \Omega} \nabla u \cdot \boldsymbol{n}\, dS = \int_{\Omega} f \, d \boldsymbol{x} \quad & \textrm{(by the Divergence Theorem)} \\ \quad & \Longleftrightarrow \quad - \int_{\partial \Omega} g \, dS = \int_{\Omega} f \, d \boldsymbol{x} \quad & \textrm{(by equation \eqref{eq:BC})}. \end{aligned}\] Rearranging gives \[\begin{equation} \label{eq:cond} \int_{\Omega} f \, d \boldsymbol{x}+ \int_{\partial \Omega} g \, dS = 0 \end{equation}\] as required.
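The compatibility condition can be illustrated on a concrete example: take \(\Omega\) the unit disc and \(u = x^2 + y^2\), so that \(f = -\Delta u = -4\) and \(g = \nabla u \cdot \boldsymbol{n} = 2\) on \(\partial\Omega\). The two integrals then cancel, as the following sketch checks in polar coordinates:

```python
import sympy as sp

# Check int_O f dx + int_dO g dS = 0 for u = x^2 + y^2 on the unit disc.
x, y, r, th = sp.symbols('x y r theta')
u = x**2 + y**2

f = -(sp.diff(u, x, 2) + sp.diff(u, y, 2))          # f = -Lap(u) = -4 (constant)
int_f = sp.integrate(f*r, (r, 0, 1), (th, 0, 2*sp.pi))   # integral over the disc

# On the unit circle the outward normal is (x, y), so g = grad(u).n = 2x^2 + 2y^2 = 2.
g = (sp.diff(u, x)*x + sp.diff(u, y)*y).subs({x: sp.cos(th), y: sp.sin(th)})
int_g = sp.integrate(g, (th, 0, 2*sp.pi))           # boundary integral

print(sp.simplify(int_f + int_g))  # 0
```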
If \(u\) is a solution of \(\eqref{eq:PDE}\), \(\eqref{eq:BC}\), then so is \(u+c\) for any constant \(c \in \mathbb{R}\). Therefore if the PDE has a solution, it has infinitely many. On the other hand, if the data \(f\) and \(g\) are such that the necessary condition \(\eqref{eq:cond}\) is violated (take for instance \(f\equiv 1\) and \(g\equiv 0\)), then the problem does not have any solutions.
Implicit form of Burgers’ equation. Clearly \[\begin{equation} \label{eq:u} u(x,t) = u_0(x-u(x,t)t) \end{equation}\] satisfies the initial condition \(u(x,0) = u_0(x)\) for all \(x \in \mathbb{R}\). We need to show that \(u_t + u u_x = 0\) for all \((x,t) \in \mathbb{R} \times (0,t_{\mathrm{c}})\). Differentiating equation \(\eqref{eq:u}\) yields \[\begin{align} \label{eq:ut} u_t & = -u_0'(u_t t + u), \\ \label{eq:ux} u_x & = \phantom{-} u_0'(1 - u_x t), \end{align}\] where we have omitted the argument \(x-u(x,t)t\) of \(u_0\). Therefore \[u_t + u u_x = -u_0'(u_t t + u) + u u_0'(1 - u_x t) = - u_0' t (u_t + u u_x).\] Rearranging gives \[\begin{equation} \label{eq:B} (u_t + u u_x)(1 + u_0' t) = 0. \end{equation}\] Either \(u_0'(x-u(x,t)t) \ge 0\), in which case \(1+u_0'(x-u(x,t)t)t \ge 1\) (since \(t>0\)). Or \(u_0'(x-u(x,t)t) < 0\), in which case \[1 + u_0'(x-u(x,t)t) t > 1 + u_0'(x-u(x,t)t) t_{\mathrm{c}} \ge 1 + u_0'(x-u(x,t)t) \frac{-1}{u_0'(x-u(x,t)t)} = 0\] by definition of \(t_{\mathrm{c}}\). In either case, \(1 + u_0' t > 0\), and hence equation \(\eqref{eq:B}\) implies that \(u_t + uu_x = 0\), as required.
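For particular initial data the implicit equation can be solved explicitly and the result checked. For instance, with the (illustrative) choice \(u_0(x) = x\), the relation \(u = u_0(x - ut) = x - ut\) gives \(u(x,t) = x/(1+t)\), and since \(u_0' = 1 \ge 0\) there is no finite blow-up time:

```python
import sympy as sp

# Explicit solution of u = u_0(x - u t) for u_0(x) = x, namely u = x/(1 + t).
x, t = sp.symbols('x t')
u = x/(1 + t)

print(sp.simplify(u - (x - u*t)))                   # implicit relation holds: 0
print(sp.simplify(sp.diff(u, t) + u*sp.diff(u, x)))  # Burgers' equation holds: 0
```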