Jackson’s Laplacian in Spherical Coordinates

If you took a look at one of the previous posts on how to remember the Laplacian in different forms using a metric, you will notice that the form of the Laplacian we get is:

\nabla^2 \psi = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial \psi}{\partial r} \right) + \frac{1}{r^2 \sin(\theta)} \frac{\partial}{\partial \theta} \left( \sin(\theta)  \frac{\partial \psi}{\partial \theta} \right)   + \frac{1}{r^2 \sin^2(\theta)} \frac{\partial^2 \psi}{\partial \phi^2}

But in Jackson’s Classical Electrodynamics (3rd edition), he notes the following:

\nabla^2 \psi = \frac{1}{r} \frac{\partial^2}{\partial r^2} \left( r \psi \right) + \frac{1}{r^2 \sin(\theta)} \frac{\partial}{\partial \theta} \left( \sin(\theta)  \frac{\partial \psi}{\partial \theta} \right)   + \frac{1}{r^2 \sin^2(\theta)} \frac{\partial^2 \psi}{\partial \phi^2}

This is an interesting form of the Laplacian that perhaps not everyone has encountered. It can be obtained from the known form by making the substitution u = r \psi and simplifying; a sketch of the steps is below:

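With u = r \psi, Jackson’s radial term is \frac{1}{r} \frac{\partial^2 u}{\partial r^2}. Expanding both radial terms shows that they agree (my own rendering, in place of the original handwritten steps):

\frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial \psi}{\partial r} \right) = \frac{1}{r^2} \left( 2r \frac{\partial \psi}{\partial r} + r^2 \frac{\partial^2 \psi}{\partial r^2} \right) = \frac{2}{r} \frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial r^2}

\frac{1}{r} \frac{\partial^2}{\partial r^2} \left( r \psi \right) = \frac{1}{r} \frac{\partial}{\partial r} \left( \psi + r \frac{\partial \psi}{\partial r} \right) = \frac{2}{r} \frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial r^2}

Since the angular terms are identical in both forms, the two expressions for the Laplacian are the same.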

 

 


Feynman’s trick of parametric integration applied to Laplace Transforms

Parametric integration is an integration technique that was popularized by Richard Feynman but has been known since Leibniz’s time. Yet this technique rarely gets discussed beyond a niche set of problems, mostly in graduate school in the context of contour integration.
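As a minimal illustration of the trick (my own example, not one taken from the note): start from an elementary transform and differentiate under the integral sign with respect to a parameter.

\mathcal{L}\{e^{-at}\} = \int\limits_{0}^{\infty} e^{-at} e^{-st} \, dt = \frac{1}{s+a}

\frac{\partial}{\partial a} \int\limits_{0}^{\infty} e^{-(s+a)t} \, dt = -\int\limits_{0}^{\infty} t e^{-(s+a)t} \, dt = -\frac{1}{(s+a)^2}

\mathcal{L}\{t e^{-at}\} = \frac{1}{(s+a)^2}

No integration by parts needed.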

A while ago, having become obsessed with this technique, I wrote a note on applying it to Laplace transform problems, and it is now public for everyone to take a look at.

(Link to notes on Google Drive)

I would be open to your suggestions, comments and improvements on it as well. Cheers!

An Ansatz for Gram–Schmidt Orthonormalization

The Gram–Schmidt process is a method for orthonormalising a set of vectors in an inner product space, and the trivial way to remember it is through an ansatz:

Let |v_{1}>, |v_{2}>, \hdots, |v_{n}> be a set of normalized basis vectors, which we would also like to make orthogonal. We will call |v_{1}^{'}>, |v_{2}^{'}>, \hdots, |v_{n}^{'}> the orthonormalized set of basis vectors formed out of |v_{1}>, |v_{2}>, \hdots, |v_{n}>.

Let’s start with the first vector:

|v_{1}^{'} > = |v_{1}> 

Now we construct a second vector |v_{2}^{'}> out of |v_{1}^{'}> and |v_{2}> :

|v_{2}^{'} > = |v_{2}> - \lambda |v_{1}^{'}>

But what must be true of |v_{2}^{'}> is that it is orthogonal to |v_{1}^{'}>, i.e. <v_{1}^{'}|v_{2}^{'}> = 0.

<v_{1}^{'}|v_{2}^{'} > = <v_{1}^{'}|v_{2}> - \lambda <v_{1}^{'}|v_{1}^{'}>

0 = <v_{1}^{'}|v_{2}> - \lambda \quad (\text{since } <v_{1}^{'}|v_{1}^{'}> = 1)

\lambda = <v_{1}^{'} | v_{2}>

Therefore we get the following expression for v_{2}^{'} ,

|v_{2}^{'} > = |v_{2}> -  <v_{1}^{'} | v_{2} >|v_{1}^{'}>

which upon normalization looks like so:

|v_{2}^{'} > = \frac{|v_{2}^{'} >}{\sqrt{<v_{2}^{'} |v_{2}^{'} >}}

 

 

That might have seemed trivial geometrically, but this process can be generalized to any complete n-dimensional vector space. Let’s continue the Gram–Schmidt process with the third vector by choosing |v_{3}^{'} > of the following form and generalizing from there:

|v_{3}^{'} > = |v_{3}> - \lambda_{1} |v_{1}^{'}> - \lambda_{2} |v_{2}^{'}>

The values for \lambda_{1} and \lambda_{2} are found to be:

\lambda_{1} =  <v_{1}^{'}|v_{3}>

\lambda_{2}  = <v_{2}^{'}|v_{3}>

Therefore we get,

|v_{3}^{'} > = |v_{3}> - <v_{1}^{'}|v_{3}>|v_{1}^{'}> - <v_{2}^{'}|v_{3}>|v_{2}^{'}> (or)

|v_{3}^{'} > = |v_{3}> -  \sum\limits_{j=1,2} <v_{j}^{'} | v_{3}> |v_{j}^{'}> 

and upon normalization:

|v_{3}^{'} > = \frac{|v_{3}^{'} >}{\sqrt{<v_{3}^{'} |v_{3}^{'} >}}

 

Generalizing, we obtain:

|v_{i}^{'} > = |v_{i}> -  \sum\limits_{j=1,2,...,i-1} <v_{j}^{'} | v_{i}> |v_{j}^{'}> 

|v_{i}^{'} > = \frac{|v_{i}^{'} >}{\sqrt{<v_{i}^{'} |v_{i}^{'} >}}

Now, although you would never need to remember the above expression, since you can derive it off the bat with the above procedure, it is essential to understand how it comes about.

Cheers!

 

Example:
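Here is a minimal numerical sketch of the procedure in Python (my own illustration, assuming NumPy is available; note that np.vdot conjugates its first argument, matching the inner product <v_{j}^{'}|v_{i}>):

```python
import numpy as np

def gram_schmidt(vectors):
    # Orthonormalize a list of linearly independent vectors
    # using the ansatz derived above.
    ortho = []
    for v in vectors:
        # |v_i'> = |v_i> - sum_j <v_j'|v_i> |v_j'>
        w = v - sum(np.vdot(u, v) * u for u in ortho)
        # Normalize: divide by sqrt(<v_i'|v_i'>).
        ortho.append(w / np.linalg.norm(w))
    return ortho

basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]

for u in gram_schmidt(basis):
    print(np.round(u, 3))
```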

 

Using Complex numbers in Classical Mechanics

When one is solving problems on the two-dimensional plane using polar coordinates, it is always a challenge to remember what the velocity and acceleration in the radial and angular directions (v_r , v_{\theta}, a_r, a_{\theta} ) are. Here’s one failsafe way, using complex numbers, that makes things really easy:

z = re^{i \theta}

\dot{z} = \dot{r}e^{i \theta} + ir\dot{\theta}e^{i \theta} = (\dot{r} + ir\dot{\theta} ) e^{i \theta}

From the above expression, we can obtain v_r = \dot{r} and v_{\theta} = r\dot{\theta}

\ddot{z} =  (\ddot{r} + ir\ddot{\theta} + i\dot{r}\dot{\theta} ) e^{i \theta}   + (\dot{r} + ir\dot{\theta} )i \dot{\theta} e^{i \theta} 

\ddot{z} =  (\ddot{r} + ir\ddot{\theta} + i\dot{r}\dot{\theta}  + i  \dot{r} \dot{\theta} - r\dot{\theta}\dot{\theta} )e^{i \theta} 

\ddot{z} =  (\ddot{r} - r(\dot{\theta})^2+ i(r\ddot{\theta} + 2\dot{r}\dot{\theta} ) )e^{i \theta} 

From this we can obtain a_r = \ddot{r} - r(\dot{\theta})^2 and a_{\theta} = (r\ddot{\theta} + 2\dot{r}\dot{\theta}) with absolute ease.
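If you would like to double-check the algebra, here is a quick symbolic verification (a sketch of my own, assuming SymPy is installed):

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Function('r', real=True)(t)
theta = sp.Function('theta', real=True)(t)

# z = r e^{i theta}; differentiate twice with respect to time.
z = r * sp.exp(sp.I * theta)
prefactor = sp.expand(sp.diff(z, t, 2) / sp.exp(sp.I * theta))

# a_r and a_theta are the real and imaginary parts of the
# prefactor multiplying e^{i theta}.
print(sp.simplify(sp.re(prefactor)))  # expect: r'' - r*(theta')^2
print(sp.simplify(sp.im(prefactor)))  # expect: r*theta'' + 2*r'*theta'
```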

This is something that I realized only after my mechanics course in college was done and dusted, but it is nevertheless a really cool and interesting place where complex numbers come in handy!

 

 

Prof. Ghrist at his best!

[Screenshot: Prof. Ghrist’s remark that taking the curl of a conservative (gradient) field yields zero]

To understand why this is true, we must start with the Fundamental Theorem of Vector Calculus. If F is a conservative field (i.e. F = \nabla \phi), then

\int\limits_{A}^{B} F \cdot dr = \int\limits_{A}^{B} \nabla\phi \cdot dr = \phi_{B} - \phi_{A}

What this means is that the value is dependent only on the initial and final positions. The path that you take to get from A to B is not important.


Now if the path of integration is a closed loop, then points A and B are the same, and therefore:

\int\limits_{A}^{A} F \cdot dr = \int\limits_{A}^{A} \nabla\phi \cdot dr = \phi_{A} - \phi_{A} = 0

Now that we are clear about this, note that according to Stokes’ theorem, the same integral around a closed loop can be written in another form:

\oint_{C} F \cdot dr = \iint_{A} (\nabla \times F) \cdot \vec{n} \, dA = 0

Since this holds for every closed loop C, and hence for every surface A that the loop bounds, the integrand itself must vanish: \nabla \times F = 0 for a conservative field (i.e. F = \nabla \phi). Therefore, when a conservative field is operated on by the curl operator (\nabla \times), it yields 0.
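Here is a quick symbolic sanity check of that statement (a sketch of my own using SymPy’s vector module; the potential \phi is just an arbitrary example):

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')

# An arbitrary smooth scalar potential phi(x, y, z).
phi = N.x**2 * sp.sin(N.y) + N.y * sp.exp(N.z)

# F = grad(phi) is conservative, and its curl vanishes.
F = gradient(phi)
print(curl(F))  # prints 0
```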

Bravo, Prof. Ghrist! Beautifully said 😀

 

Beautiful Proofs (#3): Area under a sine curve!

So, I read this post on the area of the sine curve some time ago, and at the bottom was this equally amazing comment:


\sum \sin(\theta) \, d\theta = the diameter of the circle, i.e. the distance covered along the x-axis as \theta goes from 0 to \pi.
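In symbols, this is just the observation that \sin(\theta) \, d\theta = d(-\cos(\theta)), so the sum telescopes:

\int\limits_{0}^{\pi} \sin(\theta) \, d\theta = \left[ -\cos(\theta) \right]_{0}^{\pi} = -\cos(\pi) + \cos(0) = 1 + 1 = 2

which is exactly the diameter of the unit circle.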

And therefore by the same logic, it is extremely intuitive to see why:

\int\limits_{0}^{2\pi} \sin(x) \, dx = \int\limits_{0}^{2\pi} \cos(x) \, dx = 0

Because if a dude starts at 0 and ends at 2\pi, 4\pi, \hdots, he is back where he started, and the effective distance that he covers is 0.

[Animation: Circle_cos_sin.gif, a point moving around the unit circle, tracing out its sine and cosine]

If you still have trouble understanding, follow the blue point in the above gif and hopefully things become clearer.

 

n-th roots of unity: A geometric approach

When one is dealing with complex numbers, it is often useful to think of them as transformations. The problem at hand is to find the n-th roots of unity, i.e.

z^n = 1

Multiplication as a Transformation

Multiplication in the complex plane is mere rotation and scaling, i.e.

z_{1} = r_{1}e^{i\theta_{1}}, z_{2} = r_{2}e^{i\theta_{2}} 

z_{1}z_{2} = \underbrace{r_{1} r_{2}}_{\text{scaling}} \underbrace{e^{i(\theta_{1} + \theta_{2})}}_{\text{rotation}}

Now what does finding the n-th roots of unity mean?

If you start at 1 and perform n equal rotations (because multiplication is nothing but rotation + scaling), you should again end up at 1.

We just need to find the complex numbers that do this, i.e.

z^n = 1

\underbrace{zz \hdots z}_{n} = 1

z = re^{i\theta}

r^{n}e^{i(\theta + \theta + \hdots + \theta)} = 1 = e^{2\pi k i} \quad (k \in \mathbb{Z})

r^{n}e^{in\theta} = e^{2\pi k i}

This implies that:

\theta = \frac{2\pi k}{n}, \quad r = 1

And therefore:

z = e^{\frac{2\pi k i}{n}}, \quad k = 0, 1, \hdots, n-1

Take a circle, slice it into n equal parts, and voilà, you have your n roots of unity.

[Image: the n-th roots of unity, equally spaced on the unit circle]
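As a quick numerical check (a small NumPy sketch of my own):

```python
import numpy as np

def roots_of_unity(n):
    # z_k = e^{2 pi i k / n} for k = 0, 1, ..., n - 1
    return np.exp(2j * np.pi * np.arange(n) / n)

z = roots_of_unity(5)
print(np.round(z, 3))          # five points equally spaced on the unit circle
print(np.allclose(z**5, 1.0))  # True: each root satisfies z^5 = 1
```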

Okay, but what does this imply?

Multiplication by 1 is a 360^{\circ}/0^{\circ} rotation.


When you multiply a positive real number (say 1) by 1, you get a number (1) that lies on the same positive real axis.

Multiplication by (-1) is a 180^{\circ} rotation.


When you multiply a positive real number (say 1) by -1, you get a number (-1) that lies on the negative real axis.

The act of multiplying 1 by (-1) has resulted in a 180^{\circ} rotation, and doing it again gets us back to 1.

Multiplication by i is a 90^{\circ} rotation.


Similarly, multiplying by i takes 1 from the real axis to the imaginary axis, which is a 90^{\circ} rotation.

This applies to -i as well (a 90^{\circ} rotation, just in the clockwise direction).

That’s about it! That’s what the n-th roots of unity mean geometrically. Have a good one!

 

Tricks that I wish I knew in High School : Trigonometry (#1)

I really wish that in High School the math curriculum would dig a little deeper into Complex Numbers, because frankly Algebra in the Real Domain is not as elegant as it is in the Complex Domain.

To illustrate this, let’s consider this dreaded formula that one is often asked to prove or to use in other problems:

\cos(nx)\cos(mx) = ?

Now in the complex domain:

\cos(x) = \frac{e^{ix} + e^{-ix}}{2}

And therefore:

\cos(mx) = \frac{e^{imx} + e^{-imx}}{2}

\cos(nx) = \frac{e^{inx} + e^{-inx}}{2}

\cos(mx)\cos(nx) = \left( \frac{e^{imx} + e^{-imx}}{2} \right) \left(  \frac{e^{inx} + e^{-inx}}{2} \right)

\cos(mx)\cos(nx) = \frac{1}{4} \left( e^{i(m+n)x} + e^{-i(m+n)x} + e^{i(m-n)x} + e^{-i(m-n)x}   \right)

\cos(mx)\cos(nx) = \frac{1}{2} \left( \left( \frac{e^{i(m+n)x} + e^{-i(m+n)x}}{2} \right) + \left( \frac{e^{i(m-n)x} + e^{-i(m-n)x}}{2} \right)   \right)

\cos(mx)\cos(nx) = \frac{1}{2} \left( \cos((m+n)x) + \cos((m-n)x) \right)

And similarly for its variants like \cos(mx)\sin(nx) and \sin(mx)\sin(nx) as well.

****

Now if you are in High School, that’s probably all that you will see. But if you have college friends and you took a peek at what they rambled about in their notebooks, then you might see this expression (for m \neq n):

I = \int\limits_{-\pi}^{\pi} \cos(mx)\cos(nx) \, dx

But you as a high schooler already know a formula for this expression:

I = \frac{1}{2} \int\limits_{-\pi}^{\pi} \left( \cos((m+n)x) + \cos((m-n)x) \right) dx

I = \frac{1}{2} \left( \int\limits_{-\pi}^{\pi} \cos(\lambda_1 x) \, dx + \int\limits_{-\pi}^{\pi} \cos(\lambda_2 x) \, dx \right)

where \lambda_1 = m+n and \lambda_2 = m-n are just some nonzero integers. Now if you plot \cos(\lambda x) for a few of these values (\lambda = 1, 2, \hdots), you will notice that since integration is the area under the curve, the positive and negative areas cancel out over [-\pi, \pi] for any nonzero integer \lambda.

Therefore:

I = \int\limits_{-\pi}^{\pi} \cos(mx)\cos(nx) \, dx = 0

This is an important result from the viewpoint of Fourier Series!
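If you want to convince yourself numerically, here is a tiny Riemann-sum sketch (my own, using NumPy):

```python
import numpy as np

# Numerically integrate cos(mx)cos(nx) over [-pi, pi] for a few m != n.
x, dx = np.linspace(-np.pi, np.pi, 200000, endpoint=False, retstep=True)
for m, n in [(1, 2), (3, 5), (2, 7)]:
    I = np.sum(np.cos(m * x) * np.cos(n * x)) * dx
    print(m, n, round(float(I), 9))  # each integral comes out ~0
```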

Why is the area under one hump of a sine curve exactly 2?

Reblogged from Girls’ Angle:


I was talking with a student recently who told me that he always found the fact that \int_0^{\pi} \sin x \, dx = 2 amazing. “How is it that the area under one hump of the sine curve comes out exactly 2?” He asked me if there is an easy way to see that, or is it something you just have to discover by doing the computation.

If you’ve wondered about this too, perhaps you’ll find the following of interest.
