A strange operator

In a previous post on using Feynman's trick for discrete calculus, I used a rather strange operator ( \triangledown ), which acts as follows:

\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}

What is this operator? Well, to be quite frank, I am not sure of its name, but I used it as an analogue of integration, i.e.

\int x^{n} dx = \frac{x^{n+1}}{n+1} + C

What are the properties of this operator? Let's use the known fact that n^{\underline{k+1}} = (n-k) \, n^{\underline{k}}:

\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}

\triangledown n^{\underline{k}} = \frac{(n-k) n^{\underline{k}}}{k+1}

And applying the operator twice yields:

\triangledown^2 n^{\underline{k}} = \frac{n^{\underline{k+2}}}{(k+1)(k+2)}

\triangledown^2 n^{\underline{k}} = \frac{(n-k-1) n^{\underline{k+1}}}{(k+1)(k+2)}

\triangledown^2 n^{\underline{k}} = \frac{(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)}

We can already see a pattern emerging. Applying the operator once more:

\triangledown^3 n^{\underline{k}} = \frac{(n-k-2)(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)(k+3)}

\vdots

In general, the operator with the property used in the previous post is:

\triangledown^m n^{\underline{k}} = \frac{(n-k-m+1) \hdots (n-k-1)(n-k)}{(k+1)(k+2) \hdots (k+m)} \, n^{\underline{k}} = \frac{n^{\underline{k+m}}}{(k+m)^{\underline{m}}}
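As a quick sanity check, here is a minimal Python sketch (the helper names are my own, just for illustration) that applies the one-step rule repeatedly and compares the result with the closed form above:

```python
from math import prod

def falling(n, k):
    """Falling factorial n^{underline k} = n (n-1) ... (n-k+1)."""
    return prod(n - j for j in range(k))

def step(value, n, k):
    """One application of the operator: c * n^{underline k} -> c * n^{underline k+1} / (k+1)."""
    # uses n^{underline k+1} = (n-k) * n^{underline k}
    return value * (n - k) / (k + 1)

n, k, m = 10, 3, 4            # arbitrary example values
value, exponent = falling(n, k), k
for _ in range(m):            # apply the operator m times
    value = step(value, n, exponent)
    exponent += 1

closed_form = falling(n, k + m) / falling(k + m, m)
print(value, closed_form)     # both give 720.0 for these values
```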

If you guys are aware of the name of this operator, do ping me !

Matrix Multiplication and Heisenberg Uncertainty Principle

We now understand that matrix multiplication is not commutative (why?). What does this have to do with quantum mechanics?

Behold the commutator operator:
[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}

where \hat{A}, \hat{B} are operators acting on the wavefunction \psi . The commutator is 0 if the operators commute and non-zero if they don't.

One of the most important results in quantum mechanics is Heisenberg's uncertainty principle, and it can be traced back to the commutator of the position operator (\hat{x}) and the momentum operator (\hat{p}):

[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar

Think of \hat{p} and \hat{x} as linear transformations (just for the sake of simplicity).

This means that measuring position and then momentum is not the same thing as measuring momentum and then position: the two operators do not commute! You can visualize them in much the same way as the transformations in the post on matrix multiplication.

But in quantum mechanics, the matrices associated with \hat{p} and \hat{x} are infinite-dimensional (the harmonic oscillator being the simplest example):

\hat{x} = \sqrt{\frac{\hbar}{2m \omega}} \begin{bmatrix} 0 & \sqrt{1} & 0 & 0 & \hdots \\ \sqrt{1} &  0 &\sqrt{2} & 0 & \hdots \\ 0 & \sqrt{2} &  0 &\sqrt{3}  & \hdots \\  0 & 0 & \sqrt{3} &  0  & \hdots \\  \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}

\hat{p} = \sqrt{\frac{\hbar m \omega}{2}} \begin{bmatrix} 0 & -i & 0 & 0 & \hdots \\ i &  0 & -i \sqrt{2} & 0 & \hdots \\ 0 & i\sqrt{2} &  0 & -i \sqrt{3}  & \hdots \\  0 & 0 & i\sqrt{3} &  0  & \hdots \\  \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}
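To make this concrete, here is a small numpy sketch that builds truncated 8 \times 8 versions of these matrices (in units where \hbar = m = \omega = 1, since we obviously cannot store the infinite ones) and evaluates the commutator:

```python
import numpy as np

hbar = m = omega = 1.0   # work in units where these are all 1
N = 8                    # truncation size; the true matrices are infinite-dimensional

off = np.sqrt(np.arange(1, N))   # sqrt(1), sqrt(2), ..., sqrt(N-1)
x = np.sqrt(hbar / (2 * m * omega)) * (np.diag(off, 1) + np.diag(off, -1))
p = np.sqrt(hbar * m * omega / 2) * (np.diag(-1j * off, 1) + np.diag(1j * off, -1))

commutator = x @ p - p @ x
print(np.round(commutator, 10))
# Every diagonal entry is i*hbar except the last one, which is spoiled by the
# truncation; the exact relation needs the full infinite matrices.
```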

 

 

 

Beautiful proofs (#2): Euler’s Sum

1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \hdots = \frac{\pi^2}{6}

Say what? This one blew my mind when I first encountered it. But it turns out Euler was the one who came up with it, and its proof is just beautiful!

Prerequisite
Say you have a monic quadratic polynomial f(x) whose roots are r_1, r_2 ; then you can write f(x) as follows:

f(x) = x^2 - (r_1 + r_2) x + r_1r_2

You can also pull out a factor of r_1 r_2 and arrive at this form:

f(x) = r_1r_2 \left( \frac{x^2}{r_1r_2} - (\frac{1}{r_1} + \frac{1}{r_2}) x + 1 \right)

As far as this proof is concerned, we only care about the coefficient of x, which you can show, for an n-degree polynomial written in this form (constant term 1), to be:

a_1 = - \left( \frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n} \right)

where r_1, r_2, \hdots, r_n are the n roots of the polynomial.
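If you would like to convince yourself of this, here is a quick sympy check (the roots 2, 3 and -5 are just made-up values for illustration):

```python
import sympy as sp

x = sp.symbols('x')
roots = [2, 3, -5]   # arbitrary example roots

# Polynomial with these roots, scaled so that its constant term is 1
poly = sp.expand(sp.Mul(*[(1 - x / r) for r in roots]))

# Coefficient of x vs minus the sum of reciprocal roots: both print -19/30
print(poly.coeff(x, 1), -sum(sp.Rational(1, r) for r in roots))
```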

 

Now begins the proof

It was known to Euler that

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots

But this could also be written in terms of the roots of the equation as:

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \left( \frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n} \right) y + \hdots

Now, what are the roots of f(y)? Well, f(y) = 0 when \sqrt{y} = n \pi , i.e. y = n^2 \pi^2 .*

The roots of the equation are y = \pi^2, 4 \pi^2, 9 \pi^2, \hdots

Therefore,

f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots = 1 - \left( \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \hdots \right) y + \hdots

Equating the coefficient of y on both sides of the equation we get that:

\frac{1}{6} = \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \frac{1}{ 9 \pi^2} + \hdots

\frac{\pi^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \hdots = S_2

Q.E.D

* n=0 is not a root, since
\frac{\sin(\sqrt{y})}{\sqrt{y}} \to 1 as y \to 0
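If you want to watch the series creep towards \pi^2/6, here is a tiny Python snippet comparing partial sums against the exact value:

```python
from math import pi

# Partial sums of 1 + 1/4 + 1/9 + ... slowly approach pi^2 / 6
for terms in (10, 100, 10_000, 1_000_000):
    partial = sum(1 / n**2 for n in range(1, terms + 1))
    print(f"{terms:>9} terms: {partial:.8f}")

print(f"  pi^2 / 6: {pi**2 / 6:.8f}")
```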

Why on earth is matrix multiplication NOT commutative ? – Intuition

One is commonly asked to prove in college, as part of a linear algebra problem set, that matrix multiplication is not commutative, i.e. if A and B are two matrices, then in general:

AB \neq BA

But without getting into the algebra of it, why should this even be true? Let's use linear transformations to get a feel for it.

If A and B are two linear transformations, namely a rotation and a shear, then this means that:

(Rotation)(Shearing) \neq (Shearing)(Rotation)

Is that true? Well, let's perform these linear operations on a unit square and find out:

(Rotation)(Shearing)

[Figure: unit square after applying (Rotation)(Shearing)]

(Shearing)(Rotation)

[Figure: unit square after applying (Shearing)(Rotation)]

You can clearly see that the resulting shape is not the same for the two orderings. The order of matrix multiplication matters a lot! (Or: matrix multiplication is not commutative.)
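Here is a short numpy sketch of the same experiment, using a 90-degree rotation and a horizontal shear (any non-trivial rotation and shear would do):

```python
import numpy as np

theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])   # rotate by 90 degrees
shearing = np.array([[1.0, 1.0],
                     [0.0, 1.0]])                        # horizontal shear

# The two products are different matrices, so the order matters
print(np.round(rotation @ shearing, 3))
print(np.round(shearing @ rotation, 3))
```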

Basis Vectors are instructions !

Basis vectors are best thought of in the context of roads.

Imagine you are in a city, City-X, which has only roads that are perpendicular to one another.


You can reach any part of the city but the only constraint is that you need to move along these perpendicular roads to get there.


 

Now let's say you go to another city, City-Y, which has a different structure of roads.


In this case as well you can get from one part of the city to any other, but you have to travel along these 'sheared' pathways to get there.


Just like these roads determine how you move about in the city, Basis Vectors encode information on how you move about on a plane. What do I mean by that ?

The basis vectors of City-X are given (as the columns of this matrix) by:


\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}

This is to be read as: "If you would like to move in City-X, you can only do so by taking 1 step in the x-direction or 1 step in the y-direction."

The basis vectors of City-Y are given (as the columns of this matrix) by:


\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}

This is to be read as: "If you would like to move in City-Y, you can only do so by taking 1 step in the x-direction or 1 step along the diagonal OB."

 

Conclusion:

Knowing the basis vectors of any city, you can travel to any destination by merely scaling and adding these basis vectors.

As an example, let's say you need to get to the point (3,2); then in City-X, you would take 3 steps in the x-direction and 2 steps in the y-direction:

\begin{bmatrix} 3 \\  2 \end{bmatrix}  =  3* \begin{bmatrix} 1 \\  0 \end{bmatrix}  +  2 * \begin{bmatrix} 0 \\  1 \end{bmatrix}

And similarly in City-Y, you would take 1 step along the x-direction and 2 steps along the diagonal OB:

\begin{bmatrix} 3 \\  2 \end{bmatrix}  =  1* \begin{bmatrix} 1 \\  0 \end{bmatrix}  +  2 * \begin{bmatrix} 1 \\  1 \end{bmatrix}
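Here is a small numpy sketch of the same bookkeeping: put the basis vectors of each city in the columns of a matrix and solve for how many steps to take along each road (the variable names are mine, just for illustration):

```python
import numpy as np

destination = np.array([3, 2])

# Columns are the basis vectors of each city
city_x = np.array([[1, 0],
                   [0, 1]])   # perpendicular roads
city_y = np.array([[1, 1],
                   [0, 1]])   # one road along x, one along the diagonal OB

# Solving  basis @ steps = destination  gives the steps along each road
print(np.linalg.solve(city_x, destination))   # [3. 2.]
print(np.linalg.solve(city_y, destination))   # [1. 2.]
```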

Destination Arrived 😀

On the origins of Taylor/Maclaurin Series

It is often not discussed how the Taylor/Maclaurin series came to be in its current form. This short snippet is all about that.

Let us assume that some function f(x) can be written as a power series expansion. i.e

f(x) =  a_0 + a_1 x + a_2 x^2 + \hdots .

We are left with the task of finding out the coefficients of the power series expansion.

Substituting x = 0, we obtain the value of a_0:

a_0 = f(0) .

Let's differentiate f(x) with respect to x:

\frac{d}{dx} f(x) = a_1 + 2a_2 x + \hdots

Evaluating at x = 0, we get:

\frac{d}{dx} f(0) = a_1

And likewise:

\frac{d^2}{dx^2} f(0) = 2 \cdot 1 \cdot a_2 = 2! \space a_2

\frac{d^3}{dx^3} f(0) = 3 \cdot 2 \cdot 1 \cdot a_3 = 3! \space a_3

\vdots

\frac{d^n}{dx^n} f(0) = n \cdot (n-1) \cdots 3 \cdot 2 \cdot 1 \cdot a_n = n! \space a_n

That's it: we have found all the coefficients. The only thing left to do is to plug them back into the power series expression:

f(x) =  f(0) + \frac{d}{dx}f(0) \frac{x}{1!} + \frac{d^2}{dx^2}f(0) \frac{x^2}{2!} + \frac{d^3}{dx^3} f(0) \frac{x^3}{3!} + \hdots

The above series, expanded about the point x = 0, is called the 'Maclaurin series'. The same underlying principle can be extended to expand about any other point as well, which gives the 'Taylor series'.
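Here is a short sympy sketch that builds the Maclaurin series of \sin(x) (an arbitrary choice) term by term from its derivatives at 0, and compares it with sympy's own expansion:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)   # any smooth function would work here

# Maclaurin series built term by term from the derivatives at 0
terms = sum(sp.diff(f, x, n).subs(x, 0) * x**n / sp.factorial(n) for n in range(8))

print(sp.expand(terms))        # x - x**3/6 + x**5/120 - x**7/5040
print(sp.series(f, x, 0, 8))   # sympy's own expansion, for comparison
```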

The generalized product rule ( Leibniz Formula )

If f and g are differentiable functions, then the product rule gives:

(fg)^{'} = fg^{'} + gf^{'}

Now, we would like to find a generalized expression for the n-th derivative of fg. In order to arrive at that formulation, let's calculate a few derivatives to see whether we can find a pattern:

(fg)^{'} = fg^{'} + gf^{'}

(fg)^{''} = \left(fg^{'} + gf^{'}\right)^{'} = fg^{''} + 2 f^{'}g^{'} + gf^{''}

(fg)^{'''} = \left(fg^{''} + 2 f^{'}g^{'} + gf^{''} \right)^{'} = fg^{'''} + 3 f^{''}g^{'} + 3 f^{'}g^{''} + gf^{'''}

(fg)^{''''} = fg^{''''} + 4 f^{'''}g^{'} + 6f^{''}g^{''} + 4 f^{'}g^{'''} + gf^{''''}

\vdots

You must have noticed a pattern in the above expressions: the coefficients are the ones in the binomial expansion of (x+y)^n.

[Figure: Pascal's triangle]

Therefore, we can write the n-th derivative of fg as:

(fg)^{(n)} = \sum\limits_{i=0}^{n} \binom{n}{i} f^{(i)}g^{(n-i)}
where the superscript (i) means differentiating i times.

This is also known as the Leibniz formula.
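Here is a quick sympy check of the formula for one particular pair of functions (e^{2x} and \sin(x), chosen arbitrarily) and n = 4:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(2 * x)   # example functions; any n-times differentiable pair works
g = sp.sin(x)
n = 4

direct = sp.diff(f * g, x, n)
leibniz = sum(sp.binomial(n, i) * sp.diff(f, x, i) * sp.diff(g, x, n - i)
              for i in range(n + 1))

print(sp.simplify(direct - leibniz))   # 0, so the two expressions agree
```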

** This plays an important role when we start discussing the associated Legendre differential equation.