Why is the area under one hump of a sine curve exactly 2?

Reblogged from Girls' Angle.


I was talking with a student recently who told me that he always found the fact that $latex \int_0^{\pi} \sin x \, dx = 2$ amazing. “How is it that the area under one hump of the sine curve comes out to exactly 2?” He asked me whether there is an easy way to see this, or whether it is something you just have to discover by doing the computation.

If you’ve wondered about this too, perhaps you’ll find the following of interest.
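For reference, the direct computation is a one-liner; the question in the original post is why the answer comes out so cleanly:

\int_0^{\pi} \sin x \, dx = \Big[ -\cos x \Big]_0^{\pi} = -\cos\pi + \cos 0 = 1 + 1 = 2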


Solving the Laplacian in Spherical Coordinates (#1)

In this post, let's derive the general solution of Laplace's equation in spherical coordinates. In future posts, we shall look at applications of this equation in the context of fluids and quantum mechanics.

[Figure: the spherical coordinate system, with radial distance r, polar angle \theta, and azimuthal angle \phi]

x = r\sin\theta \cos\phi
y = r\sin\theta \sin\phi
z = r\cos\theta

where

0 \leq r < \infty
0 \leq \theta \leq \pi
0 \leq \phi < 2\pi

The Laplacian in spherical coordinates, in all its glory, is written as follows (set to zero, since we are solving Laplace's equation):

\nabla ^{2}f ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \phi ^{2}}} = 0

To solve it we use the method of separation of variables.

f = R(r)\Theta(\theta)\Phi(\phi)

Plugging this form of f into the Laplacian, we get:

\frac{\Theta \Phi}{r^2} \frac{d}{dr} \left( r^2\frac{dR}{dr} \right) + \frac{R \Phi}{r^2 \sin \theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right) + \frac{\Theta R}{r^2 \sin^2 \theta} \frac{d^2 \Phi}{d\phi^2} = 0

Dividing throughout by R\Theta\Phi and multiplying throughout by r^2 simplifies this to:

\underbrace{ \frac{1}{R} \frac{d}{dr} \left( r^2\frac{dR}{dr} \right)}_{h(r)} + \underbrace{\frac{1}{\Theta \sin \theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right) + \frac{1}{\Phi \sin^2 \theta} \frac{d^2 \Phi}{d\phi^2}}_{g(\theta,\phi)} = 0

Observe that the first term is a function of r alone, while the rest is a function of \theta and \phi only. Since their sum is zero for all values of the variables, each part must be a constant: we set the first term equal to \lambda = l(l+1) and the second to -\lambda = -l(l+1). The reason for choosing the peculiar value l(l+1) is explained in another post.

\underbrace{ \frac{1}{R} \frac{d}{dr} \left( r^2\frac{dR}{dr} \right)}_{l(l+1)} + \underbrace{\frac{1}{\Theta \sin \theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right) + \frac{1}{\Phi \sin^2 \theta} \frac{d^2 \Phi}{d\phi^2}}_{-l(l+1)} = 0 \quad (1)

 

The first expression in (1) is an Euler-Cauchy equation in r:

\frac{d}{dr} \left( r^2\frac{dR}{dr} \right) = l(l+1)R

The general solution of this equation has been discussed in a previous post, and it can be written as:

R(r) = C_1 r^l + \frac{C_2}{r^{l+1}}
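A quick symbolic check (my own addition, not from the original post) that both terms of R(r) satisfy the radial equation, with l kept symbolic:

```python
# A symbolic check that R(r) = C1 r^l + C2 / r^(l+1)
# satisfies d/dr( r^2 dR/dr ) = l(l+1) R, with l kept symbolic.
import sympy as sp

r, l, C1, C2 = sp.symbols('r l C1 C2', positive=True)
R = C1 * r**l + C2 / r**(l + 1)

lhs = sp.diff(r**2 * sp.diff(R, r), r)       # d/dr ( r^2 dR/dr )
print(sp.simplify(lhs - l * (l + 1) * R))    # prints 0
```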

 

Setting the second expression in (1) equal to -l(l+1) and multiplying through by \sin^2\theta gives:

\frac{\sin \theta}{\Theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ l(l+1)\sin^2 \theta + \frac{1}{\Phi} \frac{d^2 \Phi}{d\phi^2} = 0

As in the previous analysis, the first two terms depend only on \theta and the last only on \phi, so each part must be a constant:

\underbrace{\frac{\sin \theta}{\Theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ l(l+1)\sin^2 \theta }_{m^2} + \underbrace{\frac{1}{\Phi} \frac{d^2 \Phi}{d\phi^2}}_{-m^2} = 0 \quad (2)

 

The first expression in equation (2), rewritten in the variable \cos\theta, is the associated Legendre differential equation:

\frac{\sin \theta}{\Theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ l(l+1)\sin^2 \theta = m^2

\sin \theta \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ \Theta \left( l(l+1)\sin^2 \theta - m^2 \right) = 0

The general solution to this differential equation can be given in terms of the associated Legendre functions of the first and second kind:
\Theta(\theta) = C_3 P_l^m(\cos\theta) + C_4 Q_l^m(\cos\theta)

(For solutions that must stay finite at the poles \theta = 0, \pi, one sets C_4 = 0, since Q_l^m blows up there.)

 

The second term in equation (2) gives an easy, familiar equation:

\frac{d^2 \Phi}{d\phi^2} = m^2 \Phi
\Phi(\phi) = C_5 e^{im\phi} + C_6 e^{-im\phi}

 

Therefore, the general solution of Laplace's equation in spherical coordinates is given by:

R\Theta\Phi = \left(C_1 r^l + \frac{C_2}{r^{l+1}} \right) \left(C_3 P_l^m(\cos\theta) + C_4 Q_l^m(\cos\theta) \right) \left(C_5 e^{im\phi} + C_6 e^{-im\phi}\right)
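As a sanity check (mine, not part of the original post), here is a short sympy sketch verifying that one concrete separated mode satisfies Laplace's equation, assuming l = 2, m = 1 and keeping only the r^l, P_l^m, and e^{im\phi} pieces; P_2^1(\cos\theta) = -3\cos\theta\sin\theta is written out explicitly so sympy can differentiate it.

```python
# Check that f = r^l P_l^m(cos(theta)) e^{i m phi} solves Laplace's equation
# for the concrete choice l = 2, m = 1.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
l, m = 2, 1

P = -3 * sp.cos(theta) * sp.sin(theta)       # P_2^1(cos(theta)), explicit
f = r**l * P * sp.exp(sp.I * m * phi)

laplacian = (
    sp.diff(r**2 * sp.diff(f, r), r) / r**2
    + sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / (r**2 * sp.sin(theta))
    + sp.diff(f, phi, 2) / (r**2 * sp.sin(theta)**2)
)

print(sp.simplify(laplacian))   # prints 0
```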

A strange operator

In a previous post on using Feynman's trick for discrete calculus, I used a rather strange operator, \triangledown, whose action is the following:

\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}

What is this operator? Well, to be quite frank, I am not sure of the name, but I used it in analogy with integration, i.e.

\int x^{n} = \frac{x^{n+1}}{n+1} + C

What are the properties of this operator? Let's use the known fact that n^{\underline{k+1}} = (n-k) n^{\underline{k}}:

\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}

\triangledown n^{\underline{k}} = \frac{(n-k) n^{\underline{k}}}{k+1}

And applying the operator twice yields:

\triangledown^2 n^{\underline{k}} = \frac{n^{\underline{k+2}}}{(k+1)(k+2)}

\triangledown^2 n^{\underline{k}} = \frac{(n-k-1) n^{\underline{k+1}}}{(k+1)(k+2)}

\triangledown^2 n^{\underline{k}} = \frac{(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)}

We can clearly see a pattern emerging from this already, applying the operator once more :

\triangledown^3 n^{\underline{k}} = \frac{(n-k-2)(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)(k+3)}

\vdots

Or, in general, the operator with the property prescribed in the previous post is:

\triangledown^m n^{\underline{k}} = \frac{n^{\underline{k+m}}}{(k+m)^{\underline{m}}}
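For anyone who wants to poke at it numerically, here is a small Python check (my own sketch; the helper names falling and nabla are made up) that \triangledown behaves as a discrete antiderivative: the forward difference of \triangledown n^{\underline{k}} recovers n^{\underline{k}}, mirroring \frac{d}{dx}\left(\frac{x^{n+1}}{n+1}\right) = x^n.

```python
# Numerical sanity check: the forward difference of nabla(n, k) equals the
# falling power n^(k) exactly, so nabla acts as a discrete antiderivative.

def falling(n, k):
    """Falling factorial n^(k) = n (n-1) ... (n-k+1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def nabla(n, k):
    """The operator from the post: sends n^(k) to n^(k+1)/(k+1)."""
    return falling(n, k + 1) / (k + 1)

k = 3
for n in range(4, 10):
    assert nabla(n + 1, k) - nabla(n, k) == falling(n, k)
print("forward difference of nabla matches the falling power")
```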

If you guys are aware of the name of this operator, do ping me!

Matrix Multiplication and Heisenberg Uncertainty Principle

We now understand that matrix multiplication is not commutative (Why?). What does this have to do with quantum mechanics?

Behold the commutator operator:
[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}

where \hat{A}, \hat{B} are operators acting on the wavefunction \psi. The commutator is 0 if the operators commute and something else if they don't.

One of the most important relations in quantum mechanics, from which Heisenberg's uncertainty principle follows, is the canonical commutation relation between the position operator (x) and the momentum operator (p):

[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar

If you think of p and x as linear transformations (just for the sake of simplicity), this means that measuring position and then momentum is not the same thing as measuring momentum and then position. Those two operators do not commute! You can visualize them in much the same way as in the post on matrix multiplication.

But in quantum mechanics, the matrices associated with \hat{p} and \hat{x} are infinite-dimensional (the harmonic oscillator being the simplest example):

\hat{x} = \sqrt{\frac{\hbar}{2m \omega}} \begin{bmatrix} 0 & \sqrt{1} & 0 & 0 & \cdots \\ \sqrt{1} & 0 & \sqrt{2} & 0 & \cdots \\ 0 & \sqrt{2} & 0 & \sqrt{3} & \cdots \\ 0 & 0 & \sqrt{3} & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}

\hat{p} = \sqrt{\frac{\hbar m \omega}{2}} \begin{bmatrix} 0 & -i & 0 & 0 & \cdots \\ i & 0 & -i\sqrt{2} & 0 & \cdots \\ 0 & i\sqrt{2} & 0 & -i\sqrt{3} & \cdots \\ 0 & 0 & i\sqrt{3} & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}
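Here is a numerical sketch (mine, with \hbar = m = \omega = 1 assumed) that builds truncated versions of these matrices with numpy and checks the commutation relation. Truncating the infinite matrices corrupts the last diagonal entry of the commutator, so only the top-left block is compared.

```python
# Build truncated harmonic-oscillator x and p matrices and check [x, p] = i.
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator a

x = (a + a.T) / np.sqrt(2)        # x = sqrt(hbar / (2 m omega)) (a + a†)
p = 1j * (a.T - a) / np.sqrt(2)   # p = i sqrt(hbar m omega / 2) (a† - a)

comm = x @ p - p @ x
print(np.round(comm.diagonal().imag, 10))   # [1. 1. ... 1. -7.]: i*hbar on the
                                            # diagonal, except the truncation
                                            # artifact in the last entry
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))   # True
```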

Beautiful proofs (#2): Euler's Sum

1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6}

Say what? This one blew my mind when I first encountered it. But it turns out Euler was the one who came up with it, and its proof is just beautiful!

Prerequisite
Say you have a monic quadratic polynomial f(x) whose roots are r_1, r_2; then you can write f(x) as follows:

f(x) = x^2 - (r_1 + r_2) x + r_1r_2

You can also divide throughout by r_1r_2 and arrive at this form:

f(x) = r_1r_2 \left( \frac{x^2}{r_1r_2} - (\frac{1}{r_1} + \frac{1}{r_2}) x + 1 \right)

As far as this proof is concerned, we only care about the coefficient of x, which you can show, for an n-degree polynomial normalized so that its constant term is 1, is:

a_1 = -\left(\frac{1}{r_1} + \frac{1}{r_2} + \cdots + \frac{1}{r_n}\right)

where r_1, r_2, \ldots, r_n are the n roots of the polynomial.
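One way to see this (a standard argument, added here for completeness): a polynomial with constant term 1 factors over its roots as

f(x) = \left(1 - \frac{x}{r_1}\right)\left(1 - \frac{x}{r_2}\right)\cdots\left(1 - \frac{x}{r_n}\right)

and expanding the product, the coefficient of x collects exactly one factor of -\frac{1}{r_i} from each root.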

 

Now begins the proof

It was known to Euler that

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \frac{1}{5!}y^2 - \cdots

But, treating this power series as if it were a polynomial with constant term 1 (this is Euler's daring step), it can also be written in terms of its roots as:

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \left(\frac{1}{r_1} + \frac{1}{r_2} + \cdots\right)y + \cdots

Now what are the roots of f(y)? Well, f(y) = 0 when \sqrt{y} = n\pi, i.e. y = n^2 \pi^2.*

The roots of the equation are therefore y = \pi^2, 4\pi^2, 9\pi^2, \ldots

Therefore,

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \cdots = 1 - \left( \frac{1}{\pi^2} + \frac{1}{4\pi^2} + \cdots \right)y + \cdots

Equating the coefficients of y on both sides of the equation, we get:

\frac{1}{6} = \frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots

\frac{\pi^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \cdots = S_2

Q.E.D

* y = 0 (n = 0) is not a root, since \frac{\sin(\sqrt{y})}{\sqrt{y}} \to 1 as y \to 0.
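As a quick numerical sanity check (my own addition, not part of the proof), the partial sums do crawl toward \pi^2/6:

```python
# Partial sums of 1/n^2 approach pi^2 / 6; the error after N terms is ~1/N.
import math

s = sum(1 / n**2 for n in range(1, 1_000_001))
print(s)               # ~1.6449330668
print(math.pi**2 / 6)  # ~1.6449340668
```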

Why on earth is matrix multiplication NOT commutative ? – Intuition

One is commonly asked to prove in college, as part of a linear algebra problem set, that matrix multiplication is not commutative, i.e. if A and B are two matrices, then in general:

AB \neq BA

But without getting into the algebra of it, why should this even be true? Let's use linear transformations to get a feel for it.

Say A and B are two linear transformations, namely a rotation and a shear. Noncommutativity then means that:

(Rotation)(Shearing) \neq (Shearing)(Rotation)

Is that true? Well, let's perform these linear operations on a unit square and find out:

(Rotation)(Shearing)

[Figure: the unit square after shearing, then rotating]

(Shearing)(Rotation)

[Figure: the unit square after rotating, then shearing]

You can clearly see that the resulting shape is not the same under the two orderings. The order of matrix multiplication matters a lot! (In other words, matrix multiplication is not commutative.)
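For the numerically inclined, here is a small numpy version of the same experiment (my own sketch, using a 90° rotation and a horizontal shear):

```python
# A 90-degree rotation and a horizontal shear give different products in the
# two orders, so the corresponding matrices do not commute.
import numpy as np

rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])   # rotate 90 degrees counterclockwise
shearing = np.array([[1.0, 1.0],
                     [0.0, 1.0]])    # shear horizontally

print(rotation @ shearing)   # shear first, then rotate: [[0. -1.] [1. 1.]]
print(shearing @ rotation)   # rotate first, then shear: [[1. -1.] [1. 0.]]
print(np.allclose(rotation @ shearing, shearing @ rotation))   # False
```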