Fibonacci sequence in hiding

\frac{1}{89} = 0.011235955\hdots : the Fibonacci numbers 1, 1, 2, 3, 5, 8, \hdots hiding in the decimal expansion.

What?! There exists such an elegant decimal representation of the Fibonacci sequence? Well, yes! And the only thing you need to know to prove it is that if the Fibonacci numbers are taken as the coefficients of a power series, the resulting generating function has a closed form:

1x + 1x^{2} + 2x^{3} + 3x^{4} + 5x^{5} + \hdots = \frac{x}{1-x-x^{2}}

Substituting the value of x = \frac{1}{10}, we get:

\frac{1}{10} + \left(\frac{1}{10}\right)^{2} + 2\left(\frac{1}{10}\right)^{3} + 3\left(\frac{1}{10}\right)^{4} + 5\left(\frac{1}{10}\right)^{5} + \hdots = \frac{\frac{1}{10}}{1-\frac{1}{10}-\left(\frac{1}{10}\right)^{2}}

0.1 + 0.01 + 0.002 + 0.0003 + 0.00005 + \hdots = \frac{10}{89}  

Dividing both sides by 10:

0.01 + 0.001 + 0.0002 + 0.00003 + 0.000005 + \hdots = \frac{1}{89}

Proved. 😀
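If you want to watch the digits line up for yourself, here is a minimal sketch in Python (the precision and the number of terms are arbitrary choices):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30  # enough digits to watch the pattern emerge

# Sum the series F_n / 10^(n+1) term by term
a, b = 0, 1
total = Decimal(0)
for n in range(1, 25):
    total += Decimal(b) / Decimal(10) ** (n + 1)
    a, b = b, a + b

print(total)                     # agrees with 1/89 to ~20 places
print(Decimal(1) / Decimal(89))  # 0.0112359550561797...
```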

 

 


Beautiful Proofs (#3): Area under a sine curve!

So, I read this post on the area of the sine curve some time ago, and at the bottom was this equally amazing comment:


\sum sin(\theta)d\theta = Diameter of the circle = the distance covered along the x-axis by a point on the unit circle, starting at \theta = 0 and ending up at \theta = \pi.

And therefore by the same logic, it is extremely intuitive to see why:

\int\limits_{0}^{2\pi} sin(x) \, dx = \int\limits_{0}^{2\pi} cos(x) \, dx = 0

Because if a dude starts at 0 and ends up back where he started at 2\pi (or 4\pi, \hdots), the net distance that he covers along the axis is 0.

[gif: a point moving around the unit circle, tracing out sin and cos]

If you still have trouble understanding, follow the blue point in the above gif and hopefully things become clearer.
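A quick numerical sanity check of both claims, sketched with numpy's trapezoidal rule (the grid size is an arbitrary choice):

```python
import numpy as np

# Area under one arch of sine: should be 2, the diameter of the unit circle
theta = np.linspace(0, np.pi, 10_001)
print(np.trapz(np.sin(theta), theta))   # ~ 2.0

# Over a full period the net (signed) area vanishes
theta = np.linspace(0, 2 * np.pi, 10_001)
print(np.trapz(np.sin(theta), theta))   # ~ 0.0
print(np.trapz(np.cos(theta), theta))   # ~ 0.0
```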

 

nth roots of unity: A geometric approach

When one is dealing with complex numbers, it is often useful to think of them as transformations. The problem at hand is to find the nth roots of unity, i.e.

z^n = 1

Multiplication as a Transformation

Multiplication in the complex plane is mere rotation and scaling, i.e. if

z_{1} = r_{1}e^{i\theta_{1}}, z_{2} = r_{2}e^{i\theta_{2}} 

z_{1}z_{2} = \underbrace{r_{1} r_{2}}_{scaling} \underbrace{e^{i(\theta_{1} + \theta_{2})}}_{rotation}
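A small sketch of this in Python (the two sample numbers are arbitrary):

```python
import cmath
import math

z1 = cmath.rect(2.0, math.radians(30))  # r1 = 2, theta1 = 30 degrees
z2 = cmath.rect(3.0, math.radians(45))  # r2 = 3, theta2 = 45 degrees

r, theta = cmath.polar(z1 * z2)
print(r)                    # 6.0  -> the moduli multiply (scaling)
print(math.degrees(theta))  # 75.0 -> the angles add (rotation)
```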

Now what does finding the n roots of unity mean?

If you start at 1 and perform n equal rotations (because multiplication is nothing but rotation + scaling), you should again end up at 1.

We just need to find the complex numbers that do this, i.e.

z^n = 1

\underbrace{zz \hdots z}_{n} = 1

z = re^{i\theta}

r^{n}e^{i(\theta + \theta + \hdots + \theta)} = 1e^{2\pi k i}

r^{n}e^{in\theta} =1e^{2\pi k i}

This implies that :

\theta = \frac{2\pi k}{n}, r = 1

And therefore :

z = e^{\frac{2\pi k i}{n}}, \quad k = 0, 1, \hdots, n-1

Take a circle, slice it into n equal parts, and voilà, you have your n roots of unity.

[figure: the nth roots of unity, equally spaced around the unit circle]
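Here is a minimal check of the formula in Python (n = 6 is an arbitrary choice):

```python
import cmath

n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# The roots are spaced 360/n degrees apart, and raising any of them
# to the nth power lands back on 1 (up to floating-point error)
for z in roots:
    print(z, '->', z ** n)
```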

Okay, but what does this imply?

Multiplication by 1 is a 360^{\circ}/0^{\circ} rotation.


When you multiply a positive real number (say 1) by 1, you get a number (1) that stays on the positive real axis.

Multiplication by (-1) is a 180^{\circ} rotation.


When you multiply a positive real number (say 1) by -1, you get a number (-1) that lies on the negative real axis.

The act of multiplying 1 by (-1) results in a 180^{\circ} rotation. And doing it again gets us back to 1.

Multiplication by i is a 90^{\circ} rotation.


Similarly, multiplying by i takes 1 from the real axis to the imaginary axis, which is a 90^{\circ} rotation.

This applies to -i as well: multiplying by -i is a 90^{\circ} rotation in the opposite (clockwise) direction.
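You can watch the four quarter turns happen in Python's built-in complex arithmetic:

```python
z = 1 + 0j
for _ in range(4):
    z *= 1j
    print(z)  # 1j, -1, -1j, 1 : four 90-degree rotations return to 1
```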

That’s about it! – That’s what the nth roots of unity mean geometrically. Have a good one!

 

Matrix Multiplication and Heisenberg Uncertainty Principle

We now understand that matrix multiplication is not commutative (why?). What does this have to do with Quantum Mechanics?

Behold the commutator operator:
[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}

where \hat{A},\hat{B} are operators acting on the wavefunction \psi. The commutator is equal to 0 if the operators commute, and something else if they don't.

One of the most important formulations in Quantum Mechanics is Heisenberg's uncertainty principle, and it can be traced back to the commutator of the position operator \hat{x} and the momentum operator \hat{p}:

[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar

If you think of \hat{p} and \hat{x} as linear transformations (just for the sake of simplicity), this means that measuring position and then momentum is not the same thing as measuring momentum and then position. Those two operators do not commute! You can sort of visualize them in the same way as in the post above.

But in Quantum Mechanics, the matrices associated with \hat{p} and \hat{x} are infinite dimensional (the harmonic oscillator being the simplest example):

\hat{x} = \sqrt{\frac{\hbar}{2m \omega}} \begin{bmatrix} 0 & \sqrt{1} & 0 & 0 & \hdots \\ \sqrt{1} & 0 & \sqrt{2} & 0 & \hdots \\ 0 & \sqrt{2} & 0 & \sqrt{3} & \hdots \\ 0 & 0 & \sqrt{3} & 0 & \hdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}

\hat{p} = \sqrt{\frac{\hbar m \omega}{2}} \begin{bmatrix} 0 & -i & 0 & 0 & \hdots \\ i & 0 & -i\sqrt{2} & 0 & \hdots \\ 0 & i\sqrt{2} & 0 & -i\sqrt{3} & \hdots \\ 0 & 0 & i\sqrt{3} & 0 & \hdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}
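To see the commutation relation numerically, here is a sketch using truncated versions of these matrices (with \hbar = m = \omega = 1 for convenience; the truncation size N is an arbitrary choice):

```python
import numpy as np

N = 6
n = np.sqrt(np.arange(1, N))
a = np.diag(n, k=1)               # truncated annihilation operator

x = (a + a.T) / np.sqrt(2)        # x ~ (a + a-dagger) / sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)   # p ~ i (a-dagger - a) / sqrt(2)

comm = x @ p - p @ x
print(np.round(comm, 10))
# i on the diagonal, i.e. [x, p] = i*hbar with hbar = 1, except the
# bottom-right entry, which is an artifact of truncating the matrices
```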

 

 

 

Beautiful proofs (#2): Euler's Sum

1 + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \hdots = \frac{\pi^2}{6}

Say what? This one blew my mind when I first encountered it. It turns out Euler was the one who came up with it, and its proof is just beautiful!

Prerequisite
Say you have a quadratic polynomial f(x) whose roots are r_1, r_2; then you can write f(x) as follows:

f(x) = (x-r_1)(x-r_2) =  0   (or)

f(x) = (r_1-x)(r_2-x) =  0   (or)

f(x) =  (1- \frac{x}{r_1})(1- \frac{x}{r_2}) =  0

f(x) = 1 - (\frac{1}{r_1} + \frac{1}{r_2})x + \frac{x^2}{r_1 r_2} = 0

As far as this proof is concerned, we are only worried about the coefficient of x, which you can prove for an n-degree polynomial is:

a_1 = - (\frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n})

where r_1,r_2 \hdots r_n are the n-roots of the polynomial.

 

Now begins the proof

It was known to Euler that

f(y) = \frac{sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots

But this could also be written in terms of the roots of the equation as:

f(y) = \frac{sin(\sqrt{y})}{\sqrt{y}} = 1 - (\frac{1}{r_1} + \frac{1}{r_2} + \hdots + \frac{1}{r_n})y + \hdots

Now, what are the roots of f(y)? Well, f(y) = 0 when \sqrt{y} = n\pi, i.e. y = n^2 \pi^2.*

The roots of the equation are y = \pi^2, 4 \pi^2, 9 \pi^2, \hdots

Therefore,

f(y) = \frac{sin(\sqrt{y})}{\sqrt{y}} = 1 - \frac{1}{3!}y + \hdots = 1 -( \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \hdots )y + \hdots

Comparing the coefficient of y on both sides of the equation we get that:

\frac{1}{6} = \frac{1}{\pi^2} + \frac{1}{4 \pi^2} + \frac{1}{ 9 \pi^2} + \hdots

\zeta(2) = \frac{\pi^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \hdots 

Q.E.D

* n = 0 is not a root, since
\frac{sin(\sqrt{y})}{\sqrt{y}} \to 1 as y \to 0.

** Now, if all that made sense but you are still thinking: why on earth did Euler use this particular form of the polynomial for this problem, read the first three pages of this article. (It has to do with convergence.)
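And a quick sanity check of the sum in Python (the number of terms is an arbitrary choice):

```python
import math

# Partial sum of 1/k^2, which creeps up on pi^2 / 6
partial = sum(1 / k ** 2 for k in range(1, 100_000))
print(partial)           # 1.64492...
print(math.pi ** 2 / 6)  # 1.64493...
```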

Why on earth is matrix multiplication NOT commutative? – Intuition

One is commonly asked to prove in college, as part of a linear algebra problem set, that matrix multiplication is not commutative, i.e. if A and B are two matrices, then in general:

AB \neq BA

But without getting into the algebra of it, why should this even be true? Let's use linear transformations to get a feel for it.

Suppose A and B are two linear transformations, namely a rotation and a shear. Then non-commutativity means that:

(Rotation)(Shearing) \neq (Shearing)(Rotation)

Is that true? Well, let's perform these linear operations on a unit square and find out:

(Rotation)(Shearing)

[figure: the unit square under (Rotation)(Shearing)]

(Shearing)(Rotation)

[figure: the unit square under (Shearing)(Rotation)]

You can clearly see that the resultant shape is not the same under the two orderings. This means that the order of matrix multiplication matters a lot! (Or: matrix multiplication is not commutative.)
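The same experiment in Python with numpy, using a 90^{\circ} rotation and a horizontal shear (the specific angle and shear factor are arbitrary choices):

```python
import numpy as np

theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 1.0],   # horizontal shear with factor 1
                  [0.0, 1.0]])

print(rotation @ shear)  # shear first, then rotate
print(shear @ rotation)  # rotate first, then shear: a different matrix
```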