Remembering the Laplacian in different coordinate systems

When one is learning about the Laplace and Poisson equations, it can be frustrating to remember the Laplacian's form in different coordinate systems. But once one has been introduced to four-vectors, special relativity and so on, here is a simple way to remember the Laplacian in any coordinate system:

\nabla^2 \phi = \frac{1}{\sqrt{|g|}} \frac{\partial}{\partial x^{i}} \left(\sqrt{|g|}\, g^{ij}  \frac{\partial \phi}{\partial x^{j}} \right)

where g^{ij} is the inverse of the metric g_{ij}, |g| is the determinant of the metric, and repeated indices are summed over. One specifies the coordinate system by giving the form of the metric. Let's look at how this works out:
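
If you would like to check this formula symbolically, here is a minimal sketch using sympy (the helper name laplace_beltrami and the variable names are my own choices, not anything standard) that implements the expression above for a given metric and reproduces the Cartesian result below:

```python
import sympy as sp

def laplace_beltrami(phi, coords, g):
    """(1/sqrt|g|) * d_i( sqrt|g| * g^{ij} * d_j phi ), summed over i and j."""
    g = sp.Matrix(g)
    g_inv = g.inv()
    sqrt_g = sp.sqrt(g.det())   # assumes det(g) > 0, true for the metrics used here
    n = len(coords)
    result = 0
    for i in range(n):
        inner = sum(sqrt_g * g_inv[i, j] * sp.diff(phi, coords[j]) for j in range(n))
        result += sp.diff(inner, coords[i])
    return sp.simplify(result / sqrt_g)

# Cartesian check: the identity metric gives the usual sum of second derivatives
x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)
print(laplace_beltrami(phi, (x, y, z), sp.eye(3)))
```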

Cartesian Coordinates

x^{j} = (x,y,z)

g_{ij} = \begin{bmatrix} 1  &  0 &  0 \\  0 & 1 & 0 \\ 0 & 0 & 1  \end{bmatrix}

g^{ij} = \begin{bmatrix} 1  &  0 &  0 \\  0 & 1 & 0 \\ 0 & 0 & 1  \end{bmatrix}

|g_{ij}| = 1

\nabla^2 \psi = \frac{1}{\sqrt{1}} \frac{\partial}{\partial x^{i}} \left(\sqrt{1}\, g^{ij}  \frac{\partial \psi}{\partial x^{j}} \right)  =  \frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 \psi}{\partial z^2}

 

Cylindrical Coordinates

x^{j} = (r,\phi,z)

g_{ij} = \begin{bmatrix} 1  &  0 &  0 \\  0 & r^2 & 0 \\ 0 & 0 & 1  \end{bmatrix}

g^{ij} = \begin{bmatrix} 1  &  0 &  0 \\  0 & \frac{1}{r^2} & 0 \\ 0 & 0 & 1  \end{bmatrix}

|g_{ij}| = r^2

\nabla^2 \psi = \frac{1}{r} \frac{\partial}{\partial x^{i}} \left(r\, g^{ij}  \frac{\partial \psi}{\partial x^{j}} \right)

\nabla^2 \psi = \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial \psi}{\partial r} \right) + \frac{1}{r} \frac{\partial}{\partial \phi} \left( \frac{r}{r^2} \frac{\partial \psi}{\partial \phi} \right)   + \frac{1}{r} \frac{\partial}{\partial z} \left( r \frac{\partial \psi}{\partial z} \right)  

Noting that \frac{\partial r}{\partial \phi}  = 0 = \frac{\partial r}{\partial z}  because they are independent variables, we get

\nabla^2 \psi = \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial \psi}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 \psi}{\partial \phi^2}   + \frac{\partial^2 \psi}{\partial z^2}   
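
As a quick sanity check, here is a self-contained sympy sketch (names are my own) that expands the formula term by term with \sqrt{|g|} = r and compares it with the familiar cylindrical Laplacian:

```python
import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)
psi = sp.Function('psi')(r, phi, z)

# (1/r) * d_i( r * g^{ii} * d_i psi ) with g^{rr} = 1, g^{phi phi} = 1/r**2, g^{zz} = 1
lap = (sp.diff(r * sp.diff(psi, r), r)
       + sp.diff((r / r**2) * sp.diff(psi, phi), phi)
       + sp.diff(r * sp.diff(psi, z), z)) / r

familiar = (sp.diff(r * sp.diff(psi, r), r) / r
            + sp.diff(psi, phi, 2) / r**2
            + sp.diff(psi, z, 2))

print(sp.simplify(lap - familiar))  # 0
```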

 

Spherical Coordinates

x^{j} = (r,\theta,\phi)

g_{ij} = \begin{bmatrix} 1  &  0 &  0 \\  0 & r^2 & 0 \\ 0 & 0 & r^2 \sin^2(\theta)  \end{bmatrix}

g^{ij} = \begin{bmatrix} 1  &  0 &  0 \\  0 & \frac{1}{r^2} & 0 \\ 0 & 0 & \frac{1}{r^2 \sin^2(\theta)}  \end{bmatrix}

|g_{ij}| = r^4  \sin^2(\theta)

Following the same approach as in the cylindrical and Cartesian cases, we get the following form for the Laplacian in spherical coordinates:

\nabla^2 \psi = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial \psi}{\partial r} \right) + \frac{1}{r^2 \sin(\theta)} \frac{\partial}{\partial \theta} \left( \sin(\theta)  \frac{\partial \psi}{\partial \theta} \right)   + \frac{1}{r^2 \sin^2(\theta)} \frac{\partial^2 \psi}{\partial \phi^2}
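
And the same kind of sanity check for the spherical case, a self-contained sympy sketch with \sqrt{|g|} = r^2 \sin(\theta):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
psi = sp.Function('psi')(r, theta, phi)
sqrt_g = r**2 * sp.sin(theta)   # sqrt(|g|) = r^2 sin(theta)

# (1/sqrt_g) * sum_i d_i( sqrt_g * g^{ii} * d_i psi )
lap = (sp.diff(sqrt_g * sp.diff(psi, r), r)
       + sp.diff(sqrt_g / r**2 * sp.diff(psi, theta), theta)
       + sp.diff(sqrt_g / (r**2 * sp.sin(theta)**2) * sp.diff(psi, phi), phi)) / sqrt_g

familiar = (sp.diff(r**2 * sp.diff(psi, r), r) / r**2
            + sp.diff(sp.sin(theta) * sp.diff(psi, theta), theta) / (r**2 * sp.sin(theta))
            + sp.diff(psi, phi, 2) / (r**2 * sp.sin(theta)**2))

print(sp.simplify(lap - familiar))  # 0
```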


Feynman’s trick applied to Contour Integration

 

A friend of mine was the TA for a graduate-level math course for physicists, and an exercise in that course was to solve integrals using contour integration. Just for fun, I decided to mess with him by trying to solve all the contour-integral problems in the prescribed textbook for the course [Arfken and Weber's 'Mathematical Methods for Physicists', 7th edition, exercise 11.8] using anything BUT contour integration.

You can solve a lot of them exclusively by using Feynman's trick. (If you would like to know what the trick is, here is an introductory post; there is also a small illustrative sketch after the list of solutions below.) The following are my solutions:

All solutions in one pdf

Arfken-11.8.1

Arfken-11.8.2

Arfken-11.8.3

Arfken-11.8.4*

Arfken-11.8.5

Arfken-11.8.6 & 7 – not applicable

Arfken-11.8.8

Arfken-11.8.9

Arfken-11.8.10

Arfken-11.8.11

Arfken-11.8.12

Arfken-11.8.13

Arfken-11.8.14

Arfken-11.8.15

Arfken-11.8.16

Arfken-11.8.17

Arfken-11.8.18

Arfken-11.8.19

Arfken-11.8.20

Arfken-11.8.21 & Arfken-11.8.23* (Hint: Use 11.8.3)

Arfken-11.8.22

Arfken-11.8.24

Arfken-11.8.25*

Arfken-11.8.26

Arfken-11.8.27

Arfken-11.8.28

 

*I forgot how to solve these 4 problems without using contour integration, but I will update them when I remember how to do them. If you would like, you can take these to be challenge problems, and if you solve them before I do, send an email to 153armstrong(at)gmail.com and I will link the solution to your page. Cheers!
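
For readers who have not seen the trick before, here is a minimal sympy sketch of the idea on one classic integral that is usually assigned as a contour problem, \int_0^\infty \frac{\sin x}{x}\, dx. This particular example and its parametrization are my own illustration, not taken from the solutions above: insert a parameter, differentiate under the integral sign, integrate back, and fix the constant from a limit.

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# Target: I(0), where I(a) = integral_0^oo exp(-a*x) * sin(x)/x dx  (a is the inserted parameter)
# Differentiating under the integral sign with respect to a removes the awkward 1/x factor:
dI_da = sp.integrate(-sp.exp(-a * x) * sp.sin(x), (x, 0, sp.oo))
print(dI_da)                  # -1/(a**2 + 1)

# Integrate back in a; the constant is fixed by I(a) -> 0 as a -> oo, which gives pi/2
I_a = sp.integrate(dI_da, a) + sp.pi / 2
print(sp.simplify(I_a))       # pi/2 - atan(a)
print(sp.limit(I_a, a, 0))    # pi/2, i.e. integral_0^oo sin(x)/x dx = pi/2
```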

Feynman’s trick of parametric integration applied to Laplace Transforms

Parametric integration is an integration technique that was popularized by Richard Feynman but has been known since Leibniz's time. Yet it rarely gets discussed beyond a niche set of problems, mostly in graduate school in the context of contour integration.

A while ago, having become obsessed with this technique, I wrote a note on applying it to Laplace transform problems, and it is now public for everyone to take a look at.

( Link to notes on Google Drive ) 

I would be open to your suggestions, comments and improvements on it as well. Cheers!


Using Complex numbers in Classical Mechanics

When you are solving problems on the two-dimensional plane in polar coordinates, it is always a challenge to remember what the velocity and acceleration in the radial and angular directions (v_r , v_{\theta}, a_r, a_{\theta} ) are. Here's one failsafe way, using complex numbers, that makes things really easy:

z = re^{i \theta}

\dot{z} = \dot{r}e^{i \theta} + ir\dot{\theta}e^{i \theta} = (\dot{r} + ir\dot{\theta} ) e^{i \theta}

From the above expression, we can obtain v_r = \dot{r} and v_{\theta} = r\dot{\theta}

\ddot{z} =  (\ddot{r} + ir\ddot{\theta} + i\dot{r}\dot{\theta} ) e^{i \theta}   + (\dot{r} + ir\dot{\theta} )i \dot{\theta} e^{i \theta} 

\ddot{z} =  (\ddot{r} + ir\ddot{\theta} + i\dot{r}\dot{\theta}  + i  \dot{r} \dot{\theta} - r\dot{\theta}\dot{\theta} )e^{i \theta} 

\ddot{z} =  (\ddot{r} - r(\dot{\theta})^2+ i(r\ddot{\theta} + 2\dot{r}\dot{\theta} ) )e^{i \theta} 

From this we can obtain a_r = \ddot{r} - r(\dot{\theta})^2 and a_{\theta} = (r\ddot{\theta} + 2\dot{r}\dot{\theta}) with absolute ease.
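
To double-check the algebra, here is a minimal sympy sketch (names are my own) that differentiates z = re^{i \theta} twice, with r and \theta functions of time, and verifies the decomposition above:

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

z = r * sp.exp(sp.I * theta)
zdd = sp.diff(z, t, 2)

# The claimed decomposition: (a_r + i*a_theta) * e^{i theta}
a_r = sp.diff(r, t, 2) - r * sp.diff(theta, t)**2
a_theta = r * sp.diff(theta, t, 2) + 2 * sp.diff(r, t) * sp.diff(theta, t)
claimed = (a_r + sp.I * a_theta) * sp.exp(sp.I * theta)

print(sp.simplify(zdd - claimed))  # 0
```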

This is something that I realized only after my mechanics course in college was done and dusted, but it is nevertheless a really cool and interesting place where complex numbers come in handy!


Prof. Ghrist at his best!

[Screenshot from 2017-07-28 10:55:27]

To understand why this is true, we must start with the fundamental theorem of vector calculus. If F is a conservative field (i.e. F = \nabla \phi ), then

\int\limits_{A}^{B} F \cdot dr = \int\limits_{A}^{B} \nabla\phi \cdot dr = \phi_{B} - \phi_{A}

What this means is that the value of the integral depends only on the initial and final positions; the path that you take to get from A to B is not important.

[Screenshot from 2017-07-28 11:29:46]

Now if the path of integration is a closed loop, then points A and B are the same, and therefore:

\int\limits_{A}^{A} F \cdot dr = \int\limits_{A}^{A} \nabla\phi \cdot dr = \phi_{A} - \phi_{A} = 0

Now that we are clear about this, according to Stokes' theorem the same integral around a closed loop C can be written as an integral over any surface A bounded by that loop:

\oint_{C} F \cdot dr = \iint_{A} (\nabla \times F) \cdot \vec{n} \, dA  = 0

Since this holds for every loop and every surface it bounds, we get \nabla \times F = 0 for a conservative field (i.e. F = \nabla \phi ). Therefore, when a conservative field is operated on by the curl operator (\nabla \times ), it yields 0.
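
The same identity can be checked directly in Cartesian components; here is a minimal sympy sketch for an arbitrary scalar field \phi(x,y,z):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)

# F = grad(phi)
F = [sp.diff(phi, x), sp.diff(phi, y), sp.diff(phi, z)]

# curl(F) component by component; equality of mixed partials makes each one vanish
curl = [sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y)]

print([sp.simplify(c) for c in curl])  # [0, 0, 0]
```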

Bravo, Prof. Ghrist! Beautifully said 😀

 

Beautiful Proofs (#3): Area under a sine curve!

So, I read this post on the area of the sine curve some time ago, and at the bottom was this equally amazing comment:

[Screenshot from 2017-06-07 00:19:11]

\sum \sin(\theta) \, d\theta = the diameter of the circle, i.e. the total distance covered along the x-axis as \theta goes from 0 to \pi.

And therefore by the same logic, it is extremely intuitive to see why:

\int\limits_{0}^{2\pi} \sin(x) \, dx = \int\limits_{0}^{2\pi} \cos(x) \, dx = 0

Because if a dude starts at \theta = 0 and ends at \theta = 2\pi, 4\pi, \dots, the effective distance that he covers along the axis is 0.

[Animation: Circle_cos_sin.gif]

If you still have trouble understanding, follow the blue point in the above gif and hopefully things become clearer.
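
If you prefer a quick symbolic check of both statements, here is a small sympy sketch (nothing in it is specific to the post or the comment):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sin(x), (x, 0, sp.pi)))      # 2, the diameter of the unit circle
print(sp.integrate(sp.sin(x), (x, 0, 2 * sp.pi)))  # 0
print(sp.integrate(sp.cos(x), (x, 0, 2 * sp.pi)))  # 0
```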

 

Tricks that I wish I knew in High School: Trigonometry (#1)

I really wish that in high school the math curriculum would dig a little deeper into complex numbers, because frankly algebra in the real domain is not as elegant as it is in the complex domain.

To illustrate this, let's consider this dreaded formula that one is often asked to prove or to use in other problems:

\cos(nx)\cos(mx) =  ?

Now in the complex domain:

\cos(x) = \frac{e^{ix} + e^{-ix}}{2}

And therefore:

\cos(mx) = \frac{e^{imx} + e^{-imx}}{2}

\cos(nx) = \frac{e^{inx} + e^{-inx}}{2}

\cos(mx)\cos(nx) = \left( \frac{e^{imx} + e^{-imx}}{2} \right) \left(  \frac{e^{inx} + e^{-inx}}{2} \right)

\cos(mx)\cos(nx) = \frac{1}{4} \left( e^{i(m+n)x} + e^{-i(m+n)x} + e^{i(m-n)x} + e^{-i(m-n)x}   \right)

\cos(mx)\cos(nx) = \frac{1}{2} \left( \left( \frac{e^{i(m+n)x} + e^{-i(m+n)x}}{2} \right) + \left( \frac{e^{i(m-n)x} + e^{-i(m-n)x}}{2} \right)   \right)

\cos(mx)\cos(nx) = \frac{1}{2} \left( \cos((m+n)x) + \cos((m-n)x)   \right)
And similarly for its variants like \cos(mx)\sin(nx) and \sin(mx)\sin(nx) as well.
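
Here is a small sympy sketch that verifies the identity by the same exponential route (the helper cos_ is just a local shorthand I introduced; the variants can be checked the same way):

```python
import sympy as sp

x, m, n = sp.symbols('x m n', real=True)

# cos written via complex exponentials, exactly as in the derivation above
def cos_(u):
    return (sp.exp(sp.I * u) + sp.exp(-sp.I * u)) / 2

lhs = sp.expand(cos_(m * x) * cos_(n * x))
rhs = sp.expand((cos_((m + n) * x) + cos_((m - n) * x)) / 2)

print(sp.expand(lhs - rhs))  # 0, so cos(mx)cos(nx) = (cos((m+n)x) + cos((m-n)x)) / 2
```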

****

Now, if you are in high school, that's probably all that you will see. But if you have college friends and you took a peek at what they rambled about in their notebooks, then you might see this expression (for m \neq n):

I =  \int\limits_{-\pi}^{\pi} \cos(mx)\cos(nx) \, dx

But you as a high schooler already know a formula for this expression:

I =  \frac{1}{2} \int\limits_{-\pi}^{\pi} \left( \cos((m+n)x) + \cos((m-n)x)   \right) dx

I =  \frac{1}{2} \int\limits_{-\pi}^{\pi} \cos(\lambda_1 x) \, dx + \frac{1}{2} \int\limits_{-\pi}^{\pi} \cos(\lambda_2 x) \, dx

where \lambda_1 = m+n and \lambda_2 = m-n are merely some nonzero integers (taking m and n to be positive integers with m \neq n, as in a Fourier series). Now plot \cos(\lambda x) for a few of these values, i.e. \lambda = 1, 2, \dots, and notice that, since integration is the area under the curve, the positive and negative areas over [-\pi, \pi] cancel out for any nonzero integer \lambda.

[drawing.png]

and so on. Therefore:

I =  \int\limits_{-\pi}^{\pi} \cos(mx)\cos(nx) \, dx = 0

This is an important result from the viewpoint of Fourier series!
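
A quick sympy sketch checking this orthogonality for a few concrete integer pairs (these particular values are just my choice of examples), together with the m = n case that gives \pi, which is what normalizes the Fourier coefficients:

```python
import sympy as sp

x = sp.symbols('x')
for m, n in [(1, 2), (3, 5), (2, 7)]:
    val = sp.integrate(sp.cos(m * x) * sp.cos(n * x), (x, -sp.pi, sp.pi))
    print(m, n, val)   # 0 in every case, since m != n

# the m == n case does not vanish:
print(sp.integrate(sp.cos(3 * x)**2, (x, -sp.pi, sp.pi)))  # pi
```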