Jackson’s Laplacian in Spherical Coordinates

If you take a look at one of the previous posts on how to remember the Laplacian in different forms by using a metric, you will notice that the form of the Laplacian we get is:

\nabla^2 \psi = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial \psi}{\partial r} \right) + \frac{1}{r^2 \sin(\theta)} \frac{\partial}{\partial \theta} \left( \sin(\theta) \frac{\partial \psi}{\partial \theta} \right) + \frac{1}{r^2 \sin^2(\theta)} \frac{\partial^2 \psi}{\partial \phi^2}

But in Jackson’s Classical Electrodynamics (third edition), he notes the following:

\nabla^2 \psi = \frac{1}{r} \frac{\partial^2}{\partial r^2} (r\psi) + \frac{1}{r^2 \sin(\theta)} \frac{\partial}{\partial \theta} \left( \sin(\theta) \frac{\partial \psi}{\partial \theta} \right) + \frac{1}{r^2 \sin^2(\theta)} \frac{\partial^2 \psi}{\partial \phi^2}

This is an interesting form of the Laplacian that perhaps not everyone has encountered. It can be obtained from the known form by making the substitution u = r \psi and simplifying. The steps are outlined below:

Substituting \psi = u/r :

\frac{\partial \psi}{\partial r} = \frac{1}{r}\frac{\partial u}{\partial r} - \frac{u}{r^2}

r^2 \frac{\partial \psi}{\partial r} = r\frac{\partial u}{\partial r} - u

\frac{\partial}{\partial r} \left( r^2 \frac{\partial \psi}{\partial r} \right) = r\frac{\partial^2 u}{\partial r^2} + \frac{\partial u}{\partial r} - \frac{\partial u}{\partial r} = r\frac{\partial^2 u}{\partial r^2}

\frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial \psi}{\partial r} \right) = \frac{1}{r}\frac{\partial^2 u}{\partial r^2} = \frac{1}{r}\frac{\partial^2}{\partial r^2}(r\psi)
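If you would like to verify the identity symbolically, here is a minimal sketch using SymPy (treating \psi as a function of r alone, since only the radial term is at stake):

import sympy as sp

r = sp.symbols('r', positive=True)
psi = sp.Function('psi')

# Radial term of the standard form of the Laplacian
standard = sp.diff(r**2 * sp.diff(psi(r), r), r) / r**2

# Radial term of Jackson's form
jackson = sp.diff(r * psi(r), r, 2) / r

# Prints 0, confirming the two forms agree
print(sp.simplify(standard - jackson))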


Beautiful proofs (#4) – When Gauss was a young child…

The legend goes something like this:

Gauss’s teacher wanted to occupy his students by making them add large sets of numbers and told everyone in class to find the sum of 1 + 2 + 3 + \hdots + 100.

And Gauss, who was a young child (age ~ 10), quickly found the sum by just pairing up numbers: 1+100, 2+99, 3+98, \hdots Each pair adds up to 101, and there are 50 such pairs, so the sum is 50 \times 101 = 5050.

Gauss’s ingenious pairing method allows us to write a generic formula for the sum of the first n positive integers:

\sum\limits_{k=1}^{n} k = \frac{n(n+1)}{2}
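The same pairing trick, written out for a general n : write the sum forwards and backwards, then add the two lines column by column. Each of the n columns adds up to n+1 :

S = 1 + 2 + \hdots + n

S = n + (n-1) + \hdots + 1

2S = n(n+1) \implies S = \frac{n(n+1)}{2}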

Beautiful Proofs (#3): Area under a sine curve!

So, I read this post on the area of the sine curve some time ago, and at the bottom was this equally amazing comment:


\sum \sin(\theta)\, d\theta = \text{Diameter of the circle}

i.e., the distance covered along the x-axis by the moving point’s projection, starting at \theta = 0 and ending up at \theta = \pi .
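Written out as an ordinary integral (for a unit circle, whose diameter is 2):

\int\limits_{0}^{\pi} \sin(\theta)\, d\theta = \big[-\cos(\theta)\big]_{0}^{\pi} = 1 - (-1) = 2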

And therefore by the same logic, it is extremely intuitive to see why:

\int\limits_{0}^{2\pi} \sin(x)\, dx = \int\limits_{0}^{2\pi} \cos(x)\, dx = 0

Because if a dude starts at 0 and ends up back where he started (after 2\pi, 4\pi, \hdots ), the effective distance that he covers along the axis is 0.

(GIF: a blue point moving around a circle, with its sine and cosine projections traced alongside.)

If you still have trouble understanding, follow the blue point in the above gif and hopefully things become clearer.


Beautiful proofs (#2): Euler’s Sum

1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \hdots = \frac{\pi^2}{6}

Say what? This one blew my mind when I first encountered it. But it turns out Euler was the one who came up with it, and its proof is just beautiful!

Prerequisite
Say you have a quadratic equation f(x) = 0 whose roots are r_1, r_2 ; then you can write f(x) as follows:

f(x) = (x-r_1)(x-r_2) =  0   (or)

f(x) = (r_1-x)(r_2-x) =  0   (or)

f(x) =  (1- \frac{x}{r_1})(1- \frac{x}{r_2}) =  0

f(x) = 1 - \left(\frac{1}{r_1} + \frac{1}{r_2}\right)x + \frac{x^2}{r_1 r_2} = 0

As far as this proof is concerned, we only care about the coefficient of x , which you can show for an n-degree polynomial (written, as above, so that f(0) = 1 ) is:

a_1 = - \left(\frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n}\right)

where r_1, r_2, \hdots, r_n are the n roots of the polynomial.
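A concrete instance (hypothetical roots r_1 = 2, r_2 = 3 , picked just for illustration):

f(x) = \left(1 - \frac{x}{2}\right)\left(1 - \frac{x}{3}\right) = 1 - \left(\frac{1}{2} + \frac{1}{3}\right)x + \frac{x^2}{6}

and the coefficient of x is indeed -\left(\frac{1}{r_1} + \frac{1}{r_2}\right) .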


Now begins the proof

It was known to Euler that

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots

But this could also be written in terms of the roots of the equation as:

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \left(\frac{1}{r_1} + \frac{1}{r_2} + \hdots\right)y + \hdots

Now what are the roots of f(y) ? Well, f(y) = 0 when \sqrt{y} = n \pi for n = 1, 2, 3, \hdots , i.e., y = n^2 \pi^2 *

The roots of the equation are y = \pi^2, 4 \pi^2, 9 \pi^2, \hdots

Therefore,

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots = 1 - \left( \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \hdots \right)y + \hdots

Comparing the coefficient of y on both sides of the equation we get that:

\frac{1}{6} = \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \frac{1}{ 9 \pi^2} + \hdots

\zeta(2) = \frac{\pi^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \hdots 

Q.E.D

* n = 0 is not a root, since \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 at y = 0 .

** Now if all that made sense, but you are still thinking: why on earth did Euler use this particular form of the polynomial for this problem, read the first three pages of this article. (It has to do with convergence.)
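As a quick numerical sanity check of the result, here is a small sketch (the first hundred thousand terms already get within about 10^{-5} of the limit):

import math

# Partial sum of 1/n^2 over the first 100,000 terms
partial = sum(1 / n**2 for n in range(1, 100001))
print(partial)         # ~1.644924...
print(math.pi**2 / 6)  # ~1.644934...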

Why on earth is matrix multiplication NOT commutative? – Intuition

One is commonly asked to prove in college, as part of a linear algebra problem set, that matrix multiplication is not commutative, i.e., if A and B are two matrices, then in general:

AB \neq BA

But without getting into the algebra of it, why should this even be true? Let’s use linear transformations to get a feel for it.

Say A and B are two linear transformations, namely a rotation and a shear. Non-commutativity then means that:

(Rotation)(Shearing) \neq (Shearing)(Rotation)

Is that true? Well, let’s perform these linear operations on a unit square and find out:

(Rotation)(Shearing)

(GIF: the unit square sheared first, then rotated.)

(Shearing)(Rotation)

(GIF: the unit square rotated first, then sheared.)

You can clearly see that the resultant shape is not the same for the two orders of transformation. This means that the order of matrix multiplication matters a lot! (Or: matrix multiplication is not commutative.)
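To check this concretely, here is a minimal numerical sketch. The particular matrices, a 90° counterclockwise rotation and a horizontal shear, are assumptions picked for illustration; any rotation and shear would do:

import numpy as np

# 90-degree counterclockwise rotation
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Horizontal shear: (x, y) -> (x + y, y)
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(R @ S)  # [[0. -1.] [1. 1.]]  (shear first, then rotate)
print(S @ R)  # [[1. -1.] [1. 0.]]  (rotate first, then shear)
print(np.allclose(R @ S, S @ R))  # False: the products differ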

Legendre Differential equation (#1) : A friendly introduction

In this series of posts about the Legendre differential equation, I would like to de-construct the differential equation down to the very bones. The motivation for this series is to put all that I know about the LDE in one place, and also maybe to help someone as a result.

The Legendre differential equation is the following:

(1-x^2)y^{''} -2xy^{'} + l(l+1)y = 0

where y^{'} = \frac{dy}{dx} and y^{''} = \frac{d^{2}y}{dx^{2}}

We will find solutions for this differential equation using a power series expansion, i.e.,
y = \sum\limits_{n=0}^{\infty} a_n x^n

y^{'} = \sum\limits_{n=0}^{\infty} na_n x^{n-1}

y^{''} = \sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2}

We will plug in these expressions for the derivatives into the differential equation.

l(l+1)y = l(l+1)\sum\limits_{n=0}^{\infty} a_n x^n – (i)

-2xy^{'} = -2\sum\limits_{n=0}^{\infty} na_n x^{n} – (ii)

(1-x^2)y^{''} = (1-x^2)\sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2}

= \sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2} - \sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n} – (iii)

** Note: Begin

\sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2}

Let’s substitute \lambda = n-2 , i.e., n = \lambda + 2 .
When n = 0 , \lambda = -2 ; as n \to \infty , \lambda \to \infty .

\sum\limits_{\lambda = -2}^{\infty} (\lambda+2)(\lambda+1)a_{\lambda+2} x^{\lambda}

= 0 + 0 + \sum\limits_{\lambda = 0}^{\infty} (\lambda+2)(\lambda+1)a_{\lambda+2} x^{\lambda}

(The \lambda = -2 and \lambda = -1 terms vanish because of the (\lambda+2)(\lambda+1) factor.) Renaming the dummy index \lambda back to n :

= \sum\limits_{n= 0}^{\infty} (n+2)(n+1)a_{n+2} x^{n}

** Note: End

(iii) can now be written as follows.

\sum\limits_{n=0}^{\infty} x^n \left((n+1)(n+2)a_{n+2} - n(n-1)a_n \right)  – (iv)

Adding (i), (ii) and (iv), the differential equation becomes:

\sum\limits_{n=0}^{\infty} x^n \left((n+2)(n+1)a_{n+2} + (l(l+1)-n(n+1))a_n \right) = 0

Since this must hold for every value of x , the coefficient of each power x^n must vanish:

(n+2)(n+1)a_{n+2} + (l(l+1)-n(n+1))a_n = 0

(n+2)(n+1)a_{n+2} = -(l^2 - n^2 + l - n)a_n

(n+2)(n+1)a_{n+2} = -((l-n)(l+n) + (l - n))a_n

(n+2)(n+1)a_{n+2} = -(l-n)(l+n+1)a_n

We get the following recursion relation on the coefficients of the power series expansion:

a_{n+2} = -a_n \frac{(l+n+1)(l-n)}{(n+1)(n+2)}
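As a quick check on the recursion, here is a small sketch. The starting values a_0 and a_1 are free parameters, so setting both to 1 is just an assumption for illustration:

def legendre_coeffs(l, n_max=8, a0=1.0, a1=1.0):
    # a_{n+2} = -a_n (l+n+1)(l-n) / ((n+1)(n+2)); a0, a1 are free
    a = [a0, a1]
    for n in range(n_max - 1):
        a.append(-a[n] * (l + n + 1) * (l - n) / ((n + 1) * (n + 2)))
    return a

# For l = 2 the even coefficients stop at a_2 (a_4 = a_6 = ... = 0),
# so the even series is a polynomial.
print(legendre_coeffs(l=2))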

Next post: What do these coefficients mean ?

Beautiful proofs (#1) : Divergence of the harmonic series

The harmonic series are as follows:

\sum\limits_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \hdots

And it has been known since as early as 1350 that this series diverges. Oresme’s proof to it is just so beautiful.

S_1 = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \hdots

S_1 = 1 + \left(\frac{1}{2}\right) + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} +  \frac{1}{7} + \frac{1}{8} \right) \hdots

Now replace every term in each bracket with the smallest term present in that bracket. This gives a lower bound on S_1 .

S_1 > 1 + \left(\frac{1}{2}\right) + \left(\frac{1}{4} + \frac{1}{4}\right) + \left(\frac{1}{8} + \frac{1}{8} +  \frac{1}{8} + \frac{1}{8} \right) \hdots

S_1 > 1 + \left(\frac{1}{2}\right) + \left(\frac{1}{2}\right) + \left(\frac{1}{2}\right)  + \left(\frac{1}{2}\right)  + \hdots

Clearly the lower bound of S_1 diverges and therefore S_1 also diverges. 😀
But it is interesting to note that the rate of divergence is incredibly slow: the first 10 billion terms of the series only add up to around 23.6!
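The 23.6 figure can be sanity-checked without actually summing 10 billion terms, using the standard asymptotic H_n \approx \ln(n) + \gamma , where \gamma \approx 0.5772 is the Euler–Mascheroni constant:

import math

gamma = 0.5772156649  # Euler-Mascheroni constant
n = 10**10
print(math.log(n) + gamma)  # ~23.60, matching the claim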

Yet another day at the board!

This time it’s the geometric series formula. The simplicity and elegance of the derivation inspired me to post it here.
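For reference, the standard derivation goes like this (subtract r times the sum from the sum itself, so that almost everything cancels):

S_n = a + ar + ar^2 + \hdots + ar^{n-1}

rS_n = ar + ar^2 + \hdots + ar^{n}

S_n - rS_n = a - ar^{n}

S_n = \frac{a(1 - r^{n})}{1 - r}, \quad r \neq 1

and for |r| < 1 , letting n \to \infty gives the infinite-sum formula \frac{a}{1-r} .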

Have a great day!