## On the direction of the cross product of vectors

One of my math professors always told me:

Understand the concept and not the definition

A lot of times I have fallen into this pitfall, where I seem to completely understand how to do something methodically without actually comprehending what it means. Only several years after I first encountered the notion of cross products did I actually understand what they really meant. When I did, it was pure ecstasy!

## Why on earth is the direction of the cross product orthogonal? Like seriously…

I mean, this is one of the burning questions regarding the cross product, and yet for some reason textbooks don’t get to the bottom of it. One way to think about it is this:

It is modeling a real life scenario!!

The scenario being:

When you twist a screw (screws that tighten clockwise being the convention) into a block in the clockwise direction, the screw moves down into the block, and vice versa.

i.e., when you rotate from $u$ to $v$, the direction of the cross product $u \times v$ is the direction in which the screw would move.

That’s why the direction of the cross product is orthogonal. It’s really that simple!
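
If you want a quick numerical sanity check of the orthogonality claim, here is a minimal sketch (my own addition, using numpy, not part of the original post):

```python
# A minimal sketch (not from the original post): numerically confirm that
# u x v is orthogonal to both u and v.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])

w = np.cross(u, v)
print(np.dot(w, u), np.dot(w, v))  # both dot products come out to zero
```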

## Another perspective

Now that you have a physical feel for the direction of the cross product, here is another way of looking at it:

Displacement is a vector. Velocity is a vector. Acceleration is a vector. As you might expect, angular displacement, angular velocity, and angular acceleration are all vectors, too.

But which way do they point?

Let’s take a rolling tire. The velocity vectors of the points on the tire all point in different directions. BUT every point on a rolling tire has to have the same angular velocity – magnitude and direction.

How can we possibly assign a direction to the angular velocity?

Well, the only way to ensure that the direction of the angular velocity is the same for every point is to make the direction of the angular velocity perpendicular to the plane of the tire.
Problem solved!
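
To make this concrete, here is a minimal sketch (my own addition, using numpy, not from the original post): put the tire in the xy-plane, take $\omega$ along the z-axis, and $v = \omega \times r$ then hands every rim point its tangential velocity from one common angular-velocity vector.

```python
# A small sketch (not from the original post): with omega perpendicular to the
# tire's plane, v = omega x r gives the tangential velocity of every point,
# using one and the same angular-velocity vector.
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # rad/s, perpendicular to the xy-plane of the tire

for angle in np.linspace(0, 2 * np.pi, 4, endpoint=False):
    r = np.array([np.cos(angle), np.sin(angle), 0.0])  # a point on the rim
    v = np.cross(omega, r)
    print(np.round(r, 3), np.round(v, 3), np.dot(v, r))  # v is tangential: v . r is (numerically) zero
```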

## Why is the area under one hump of a sine curve exactly 2?

I was talking with a student recently who told me that he always found the fact that $\int_0^{\pi} \sin x \, dx = 2$ amazing. “How is it that the area under one hump of the sine curve comes out exactly 2?” He asked me if there is an easy way to see that, or whether it is something you just have to discover by doing the computation.
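
For reference (my own addition, simply restating the standard antiderivative), the direct computation is a one-liner:

$\int_0^{\pi} \sin x \, dx = \left[ -\cos x \right]_0^{\pi} = -\cos \pi + \cos 0 = 1 + 1 = 2$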


## Solving the Laplacian in Spherical Coordinates (#1)

In this post, let’s derive a general solution for the Laplacian in Spherical Coordinates. In future posts, we shall look at the application of this equation in the context of Fluids and Quantum Mechanics. The spherical coordinates $(r, \theta, \phi)$ are related to the Cartesian coordinates by:

$x = r\sin\theta \cos\phi$
$y = r\sin\theta \sin\phi$
$z = r\cos\theta$

where

$0 \leq r < \infty$
$0 \leq \theta \leq \pi$
$0 \leq \phi < 2\pi$

The Laplacian in Spherical coordinates, in its ultimate glory, is written as follows (setting it to zero gives Laplace's equation, which is what we solve here):

$\nabla ^{2}f ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \phi ^{2}}} = 0$

To solve it we use the method of separation of variables.

$f = R(r)\Theta(\theta)\Phi(\phi)$

Plugging this form of $f$ into the Laplacian, we get:

$\frac{\Theta \Phi}{r^2} \frac{d}{dr} \left( r^2\frac{dR}{dr} \right) + \frac{R \Phi}{r^2 \sin \theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right) + \frac{\Theta R}{r^2 \sin^2 \theta} \frac{d^2 \Phi}{d\phi^2} = 0$

Dividing throughout by $R\Theta\Phi$ and multiplying throughout by $r^2$, this simplifies to:

$\underbrace{ \frac{1}{R} \frac{d}{dr} \left( r^2\frac{dR}{dr} \right)}_{h(r)} + \underbrace{\frac{1}{\Theta \sin \theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right) + \frac{1}{\Phi \sin^2 \theta} \frac{d^2 \Phi}{d\phi^2}}_{g(\theta,\phi)} = 0$

It can be observed that the first expression in the differential equation is a function of $r$ only, and the remaining expression is a function of $\theta$ and $\phi$ only. Since their sum is zero for all values of $r$, $\theta$, and $\phi$, each must separately be a constant. Therefore, we equate the first expression to $\lambda = l(l+1)$ and the second to $-\lambda = -l(l+1)$. The reason for choosing the peculiar value of $l(l+1)$ is explained in another post.

$\underbrace{ \frac{1}{R} \frac{d}{dr} \left( r^2\frac{dR}{dr} \right)}_{l(l+1)} + \underbrace{\frac{1}{\Theta \sin \theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right) + \frac{1}{\Phi \sin^2 \theta} \frac{d^2 \Phi}{d\phi^2}}_{-l(l+1)} = 0$ (1)

The first expression in (1) is the Euler-Cauchy equation in $r$:

$\frac{d}{dr} \left( r^2\frac{dR}{dr} \right) = l(l+1)R$

The general solution of this has been discussed in a previous post, and it can be written as:

$R(r) = C_1 r^l + \frac{C_2}{r^{l+1}}$
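
As a quick sanity check (my own sketch using sympy, not part of the original post), we can verify symbolically that this $R(r)$ satisfies the radial equation above:

```python
# A quick symbolic check (my own sketch): C1 r^l + C2 / r^(l+1) satisfies
# d/dr ( r^2 dR/dr ) = l(l+1) R for symbolic l.
import sympy as sp

r, l, C1, C2 = sp.symbols('r l C1 C2', positive=True)
R = C1 * r**l + C2 / r**(l + 1)

lhs = sp.diff(r**2 * sp.diff(R, r), r)
print(sp.simplify(lhs - l * (l + 1) * R))   # expected output: 0
```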

Equating the second expression in (1) to $-l(l+1)$ and multiplying throughout by $\sin^2 \theta$, we get:

$\frac{\sin \theta}{\Theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ l(l+1)\sin^2 \theta + \frac{1}{\Phi} \frac{d^2 \Phi}{d\phi^2} = 0$

An observation similar to the previous one can be made: the first two terms depend only on $\theta$ and the last only on $\phi$, so each group must equal a constant, which we call $m^2$:

$\underbrace{\frac{\sin \theta}{\Theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ l(l+1)\sin^2 \theta }_{m^2} + \underbrace{\frac{1}{\Phi} \frac{d^2 \Phi}{d\phi^2}}_{-m^2} = 0$ (2)

The first expression in the above equation (2) is the Associated Legendre Differential equation.

$\frac{\sin \theta}{\Theta} \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ l(l+1)\sin^2 \theta = m^2$

$\sin \theta \frac{d}{d \theta} \left( \sin \theta \frac{d\Theta}{d\theta} \right)+ \Theta \left( l(l+1)\sin^2 \theta - m^2 \right) = 0$

The general solution to this differential equation can be given as:
$\Theta(\theta) = C_3 P_l^m(\cos\theta) + C_4 Q_l^m(\cos\theta)$

The solution to the second expression in (2) is a straightforward one:

$\frac{d^2 \Phi}{d\phi^2} = -m^2 \Phi$
$\Phi(\phi) = C_5 e^{im\phi} + C_6 e^{-im\phi}$

Therefore, the general solution to Laplace's equation in Spherical coordinates is given by:

$R\Theta\Phi = \left(C_1 r^l + \frac{C_2}{r^{l+1}} \right) \left(C_3 P_l^m(\cos\theta) + C_4 Q_l^m(\cos\theta) \right) \left(C_5 e^{im\phi} + C_6 e^{-im\phi}\right)$
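
As a closing sanity check (my own sketch using sympy, not part of the original post), we can verify one concrete member of this family: for $l = 2$, $m = 1$, the function $r^2 \sin\theta \cos\theta \, e^{i\phi}$ (proportional to $r^2 P_2^1(\cos\theta) e^{i\phi}$) should have zero Laplacian.

```python
# A sanity check of the result (my own sketch, not from the original post):
# for l = 2, m = 1, the solid harmonic r^2 * sin(theta)*cos(theta) * exp(i*phi)
# (proportional to r^2 P_2^1(cos theta) e^{i phi}) should have zero Laplacian.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
f = r**2 * sp.sin(theta) * sp.cos(theta) * sp.exp(sp.I * phi)

laplacian = (
    sp.diff(r**2 * sp.diff(f, r), r) / r**2
    + sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / (r**2 * sp.sin(theta))
    + sp.diff(f, phi, 2) / (r**2 * sp.sin(theta)**2)
)

print(sp.simplify(laplacian))   # expected output: 0
```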

## An infinite joke

$\frac{1}{0} = \infty$

Now rotate this by 90 degrees counterclockwise:

$- 10 = 8$

Subtract 8 from both sides:

$- 18 = 0$

Now rotate this back by 90 degrees clockwise:

$\frac{1}{\infty} = 0$

## A strange operator

In a previous post on using Feynman’s trick for Discrete calculus, I used a very strange operator ( $\triangledown$ ), whose action is the following:

$\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}$

What is this operator? Well, to be quite frank, I am not sure of its name, but I used it as an analogue of integration, i.e.

$\int x^{n} \, dx = \frac{x^{n+1}}{n+1} + C$

What are the properties of this operator? Let’s use the known fact that $n^{\underline{k+1}} = (n-k) \, n^{\underline{k}}$:

$\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}$

$\triangledown n^{\underline{k}} = \frac{(n-k) n^{\underline{k}}}{k+1}$

And applying the operator twice yields:

$\triangledown^2 n^{\underline{k}} = \frac{n^{\underline{k+2}}}{(k+1)(k+2)}$

$\triangledown^2 n^{\underline{k}} = \frac{(n-k-1) n^{\underline{k+1}}}{(k+1)(k+2)}$

$\triangledown^2 n^{\underline{k}} = \frac{(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)}$

We can already see a pattern emerging; applying the operator once more:

$\triangledown^3 n^{\underline{k}} = \frac{(n-k-2)(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)(k+3)}$

$\vdots$

Or, in general, the operator that has the characteristic prescribed in the previous post is the following:

$\triangledown^m n^{\underline{k}} = \frac{n^{\underline{k+m}}}{(k+m)^{\underline{m}}}$
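
Here is a small numerical sanity check of this closed form (my own sketch, in Python, not from the original post), comparing it against applying the one-step rule repeatedly:

```python
# A small numerical sanity check (my own sketch): represent a term c * n^(j)
# (c times the falling power of n) as the pair (c, j); one application of the
# operator sends (c, j) -> (c/(j+1), j+1). Applying it m times should reproduce
# the closed form  nabla^m n^(k) = n^(k+m) / (k+m)^(m).
from math import prod

def falling(n, k):
    """Falling factorial n^(k) = n (n-1) ... (n-k+1)."""
    return prod(n - i for i in range(k))

def nabla(term):
    """One application of the operator on c * n^(j)."""
    c, j = term
    return (c / (j + 1), j + 1)

k, m, n = 3, 4, 10
term = (1, k)                      # start from n^(k)
for _ in range(m):
    term = nabla(term)             # apply the operator m times
c, j = term

repeated    = c * falling(n, j)                        # nabla^m n^(k) evaluated at n = 10
closed_form = falling(n, k + m) / falling(k + m, m)    # the formula above
print(repeated, closed_form)       # the two values should agree (720.0 here)
```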

If you guys are aware of the name of this operator, do ping me!

## Matrix Multiplication and Heisenberg Uncertainty Principle

We now understand that Matrix multiplication is not commutative (Why?). What does this have to do with Quantum Mechanics?

Behold the commutator operator:
$[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}$

where $\hat{A},\hat{B}$ are operators acting on the wavefunction $\psi$. The commutator is equal to 0 if the operators commute and nonzero if they don’t.

One of the most important formulations in Quantum mechanics is Heisenberg’s Uncertainty principle, and at its heart lies the commutator of the position operator (x) and the momentum operator (p):

$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar$

If you think of p and x as some linear transformations (just for the sake of simplicity), this means that measuring position and then momentum is not the same thing as measuring momentum and then position. Those two operators do not commute! You can sort of visualize them in the same way as in the post.

But in Quantum Mechanics, the matrices associated with $\hat{p}$ and $\hat{x}$ are infinite-dimensional (the harmonic oscillator being the simplest example of this):

$\hat{x} = \sqrt{\frac{\hbar}{2m \omega}} \begin{bmatrix} 0 & \sqrt{1} & 0 & 0 & \hdots \\ \sqrt{1} & 0 &\sqrt{2} & 0 & \hdots \\ 0 & \sqrt{2} & 0 &\sqrt{3} & \hdots \\ 0 & 0 & \sqrt{3} & 0 & \hdots \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}$

$\hat{p} = \sqrt{\frac{\hbar m \omega}{2}} \begin{bmatrix} 0 & -i & 0 & 0 & \hdots \\ i & 0 & -i \sqrt{2} & 0 & \hdots \\ 0 & i\sqrt{2} & 0 & -i \sqrt{3} & \hdots \\ 0 & 0 & i\sqrt{3} & 0 & \hdots \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}$
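
Here is a small numerical illustration (my own sketch using numpy, not from the original post): truncate the two matrices above to a finite size and compute $\hat{x}\hat{p} - \hat{p}\hat{x}$. The diagonal comes out as $i\hbar$ everywhere except the last entry, which is an artifact of chopping the matrices off.

```python
# A finite-dimensional sanity check (my own sketch): build N x N truncations of the
# harmonic-oscillator x and p matrices above and confirm that x p - p x is i*hbar
# on the diagonal, except at the last entry where the truncation spoils it.
import numpy as np

hbar = m = omega = 1.0  # units chosen for convenience
N = 6

n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)                        # annihilation operator (truncated)
x = np.sqrt(hbar / (2 * m * omega)) * (a + a.T)
p = 1j * np.sqrt(hbar * m * omega / 2) * (a.T - a)

comm = x @ p - p @ x
print(np.round(comm, 10))
# all diagonal entries are i*hbar except the last one, an artifact of the cutoff
```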

## Beautiful proofs(#2): Euler’s Sum

$1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \hdots = \frac{\pi^2}{6}$

Say what? This one blew my mind when I first encountered it. But it turns out Euler was the one who came up with it, and its proof is just beautiful!

Prerequisite
Say you have a quadratic polynomial $f(x)$ whose roots are $r_1, r_2$; then (up to a constant factor) you can write $f(x) = 0$ in any of the following forms:

$f(x) = (x-r_1)(x-r_2) = 0$  (or)

$f(x) = (r_1-x)(r_2-x) = 0$  (or)

$f(x) = (1- \frac{x}{r_1})(1- \frac{x}{r_2}) = 0$

$f(x) = 1 - \left(\frac{1}{r_1} + \frac{1}{r_2}\right)x + \frac{x^2}{r_1 r_2} = 0$

As far as this proof is concerned, we are only worried about the coefficient of $x$, which you can show for an $n$-degree polynomial (written in this normalized form) is:

$a_1 = - \left(\frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n}\right)$

where $r_1, r_2, \hdots, r_n$ are the $n$ roots of the polynomial.
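
A quick symbolic check of this coefficient claim (my own sketch using sympy, not from the original post), for three roots:

```python
# A quick symbolic check (my own sketch): expand (1 - x/r1)(1 - x/r2)(1 - x/r3)
# and read off the coefficient of x; it is -(1/r1 + 1/r2 + 1/r3), as claimed.
import sympy as sp

x, r1, r2, r3 = sp.symbols('x r1 r2 r3')
f = sp.expand((1 - x/r1) * (1 - x/r2) * (1 - x/r3))
print(f.coeff(x, 1))   # -> -1/r1 - 1/r2 - 1/r3
```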

Now begins the proof

It was known to Euler that

$f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots$

But this could also be written in terms of the roots of the equation as:

$f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \left(\frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n}\right)y + \hdots$

Now, what are the roots of $f(y)$? Well, $f(y) = 0$ when $\sqrt{y} = n \pi$, i.e. $y = n^2 \pi^2$ *

The roots of the equation are $y = \pi^2, 4 \pi^2, 9 \pi^2, \hdots$

Therefore,

$f(y) = \frac{\sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots = 1 -\left( \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \hdots \right)y + \hdots$

Comparing the coefficient of $y$ on both sides of the equation, we get:

$\frac{1}{6} = \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \frac{1}{ 9 \pi^2} + \hdots$

$\zeta(2) = \frac{\pi^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \hdots$

Q.E.D

* $n=0$ is not a root since
$\frac{\sin(\sqrt{y})}{\sqrt{y}} = 1$ at $y = 0$

** Now, if all that made sense but you are still thinking “Why on earth did Euler use this particular form of the polynomial for this problem?”, read the first three pages of this article. (It has to do with convergence.)
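
And a quick numerical check of the sum itself (my own addition, not from the original post): the partial sums do crawl toward $\frac{\pi^2}{6}$.

```python
# A quick numerical check (my own sketch): partial sums of 1/n^2 approach pi^2/6.
import math

partial = sum(1 / n**2 for n in range(1, 100_001))
print(partial)            # ~1.64492...
print(math.pi**2 / 6)     # ~1.64493...
```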