Feynman’s trick of parametric integration applied to Laplace Transforms

Parametric integration is an integration technique that was popularized by Richard Feynman but has been known since Leibniz’s time. Yet this technique rarely gets discussed beyond a niche set of problems, mostly in graduate school in the context of contour integration.
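To get a taste of the trick before diving into the note, here is a minimal sympy sketch (my own illustration, not an excerpt from the note): differentiating the elementary transform \mathcal{L}\{1\} = 1/s under the integral sign with respect to the parameter s brings down a factor of -t , giving \mathcal{L}\{t\} = 1/s^{2} for free.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L{1}: integrate e^(-s t) over t from 0 to infinity, giving 1/s
F = sp.integrate(sp.exp(-s * t), (t, 0, sp.oo))

# Differentiating under the integral sign with respect to s
# brings down a factor of -t, so L{t} = -dF/ds
print(-sp.diff(F, s))                                    # 1/s**2
print(sp.integrate(t * sp.exp(-s * t), (t, 0, sp.oo)))   # direct check: 1/s**2
```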

A while ago, having become obsessed with this technique, I wrote a note on applying it to Laplace transform problems, and it is now public for everyone to take a look at.

( Link to notes on Google Drive ) 

I would be open to your suggestions, comments and improvements on it as well. Cheers!


A note on Wave-functions and Fourier Transforms

In quantum mechanics you can write the wave-function in either the position or the momentum basis. Written in the momentum basis, it would look something like this:

|\psi(x)> = a_0 |p_0> + a_1 |p_1> + \hdots

|\psi(x)> = \sum\limits_{n} a_n |p_n>

But momentum is a continuous variable and it varies from -\infty to \infty .


Therefore, changing to the integral representation, we get:

|\psi(x)> = \int\limits_{-\infty}^{\infty} dp \ a(p) |p>

But a(p)  is just the projection of the wavefunction onto the momentum eigenstate |p> :

|\psi(x)> = \int\limits_{-\infty}^{\infty} dp <p|\psi(x)>|p>

 

We also know from the theory of Fourier transforms* that the wave function written in momentum space is given as:

|\psi(x)> = \frac{1}{\sqrt{2 \pi \hbar}}  \int\limits_{-\infty}^{\infty} dp \  \tilde{\psi}(p) |e^{\frac{ipx}{\hbar}}>

Comparing the two equations above, if we take the momentum basis states to be  |p> = e^{\frac{ipx}{\hbar}} , then:

<p|\psi(x)> = \frac{1}{\sqrt{2 \pi \hbar}} \tilde{\psi}(p)

We can perform a similar analysis by expanding the wavefunction in the position basis, and we get

<x| \tilde{\psi}(p)> = \frac{1}{\sqrt{2 \pi \hbar}} \psi(x)

 

* Where does the \frac{1}{\sqrt{2 \pi \hbar}} in the Fourier transform come from?

We know that the Fourier transform is defined as follows:

|\psi(x)> = \frac{1}{\sqrt{2 \pi}}  \int\limits_{-\infty}^{\infty} dk \  \psi_{1}(k) |e^{ikx}>

Plugging in p = \hbar k and rewriting the above equation, we get

|\psi(x)> = A  \int\limits_{-\infty}^{\infty} dp \  \tilde{\psi}(p) |e^{\frac{ipx}{\hbar}}>

Imposing the normalization condition

\int\limits_{-\infty}^{\infty}  dx \ \psi^{*}(x) \psi(x)  = 1

we find that the normalization constant A is not \frac{1}{\sqrt{2 \pi}} but \frac{1}{\sqrt{2 \pi \hbar}} . Therefore,

|\psi(x)> = \frac{1}{\sqrt{2 \pi \hbar}}  \int\limits_{-\infty}^{\infty} dp \  \tilde{\psi}(p) |e^{\frac{ipx}{\hbar}}>
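As a sanity check on that constant, here is a minimal sympy sketch (the Gaussian wave packet is a test case of my own choosing, not part of the post): with the \frac{1}{\sqrt{2 \pi \hbar}} convention, a normalized \psi(x) yields a normalized \tilde{\psi}(p) .

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)
hbar = sp.symbols('hbar', positive=True)

# A normalized Gaussian wave packet (an assumed test case)
psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2)

# Momentum-space wave function with the 1/sqrt(2*pi*hbar) convention
psi_p = sp.integrate(psi * sp.exp(-sp.I * p * x / hbar),
                     (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi * hbar)

# Both representations carry unit norm, confirming the constant
print(sp.integrate(psi**2, (x, -sp.oo, sp.oo)))              # 1
print(sp.simplify(sp.integrate(psi_p * sp.conjugate(psi_p),
                               (p, -sp.oo, sp.oo))))         # 1
```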

 

Cooking up a Lorentz invariant Lagrangian

Let’s consider a scalar field, say the temperature of a rod varying with position and time, i.e. T(x,t) .


We will take this setup and put it on a really fast train moving at a constant velocity v (also known as performing a ‘Lorentz boost’).


Now the temperature of the bar in this new frame of reference is given by T^{'}(x^{'}, t^{'}) where,

x^{'}(x,t) = \gamma \left( x - vt \right)

t^{'}(x,t) = \gamma \left( t -   \frac{v x}{c^{2}} \right) 

Visualizing the temperature distribution of the rod under length contraction.

Temperature is a scalar field, and therefore irrespective of which frame of reference you are in, the temperature at each point on the rod will be the same in both frames, i.e.

T^{'}(x^{'}, t^{'})= T(x, t)

Therefore we can say that temperature (a scalar field) is Lorentz invariant. Now, what other quantities can we build out of T that would also be Lorentz invariant?

Is \frac{\partial T^{'}}{\partial x^{'}} + \frac{\partial T^{'}}{\partial t^{'}} = \frac{\partial T}{\partial x} + \frac{\partial T}{\partial t}  ?

Well, let’s give it a try (working in units where c = 1 from here on, so that t^{'} = \gamma ( t - vx ) ):

\frac{\partial T}{\partial x}  = \frac{\partial T^{'}}{\partial x^{'}} \frac{\partial x^{'}}{\partial x} + \frac{\partial T^{'}}{\partial t^{'}} \frac{\partial t^{'}}{\partial x}

\frac{\partial T}{\partial x}  = \frac{\partial T^{'}}{\partial x^{'}} \gamma - \frac{\partial T^{'}}{\partial t^{'}} \gamma v

\frac{\partial T}{\partial x}  = \gamma \left(  \frac{\partial T^{'}}{\partial x^{'}}  - \frac{\partial T^{'}}{\partial t^{'}} v  \right)

Similarly, for the time derivative:

\frac{\partial T}{\partial t}  = \frac{\partial T^{'}}{\partial x^{'}} \frac{\partial x^{'}}{\partial t} + \frac{\partial T^{'}}{\partial t^{'}} \frac{\partial t^{'}}{\partial t} 

\frac{\partial T}{\partial t}  = -  \frac{\partial T^{'}}{\partial x^{'}} \gamma v  + \frac{\partial T^{'}}{\partial t^{'}} \gamma 

\frac{\partial T}{\partial t}  =  \gamma \left( -  \frac{\partial T^{'}}{\partial x^{'}}  v  + \frac{\partial T^{'}}{\partial t^{'}}  \right)

Clearly, **

\frac{\partial T}{\partial x}  +  \frac{\partial T}{\partial t}  \neq   \frac{\partial T^{'}}{\partial x^{'}}  + \frac{\partial T^{'}}{\partial t^{'}}

But just for fun, let’s square the terms and see if we can churn something out of that:

\left( \frac{\partial T}{\partial x} \right)^{2}  = \gamma^{2}  \left(  \frac{\partial T^{'}}{\partial x^{'}}  - \frac{\partial T^{'}}{\partial t^{'}} v  \right)^{2} 

\left( \frac{\partial T}{\partial t} \right)^{2}  =  \gamma^{2} \left( -  \frac{\partial T^{'}}{\partial x^{'}}  v  + \frac{\partial T^{'}}{\partial t^{'}}  \right) ^{2}

We immediately notice that:

\left( \frac{\partial T}{\partial t} \right)^{2} - \left( \frac{\partial T}{\partial x} \right)^{2}  = \gamma^{2} \left[  \left( -  \frac{\partial T^{'}}{\partial x^{'}}  v  + \frac{\partial T^{'}}{\partial t^{'}}  \right) ^{2} - \left(  \frac{\partial T^{'}}{\partial x^{'}}  - \frac{\partial T^{'}}{\partial t^{'}} v  \right)^{2}  \right]

Expanding the squares, the cross terms cancel, and what remains carries a factor of \gamma^{2}(1 - v^{2}) = 1 . Hence:

\left( \frac{\partial T}{\partial t} \right)^{2} - \left( \frac{\partial T}{\partial x} \right)^{2}  = \left( \frac{\partial T^{'}}{\partial t^{'}} \right)^{2} - \left( \frac{\partial T^{'}}{\partial x^{'}} \right)^{2}

Therefore, in addition to realizing that T is Lorentz invariant, we have found another quantity that is also Lorentz invariant. In index notation, this quantity is written as \partial_{\mu} T \partial^{\mu} T .
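You can also let a computer grind through the chain rule. Here is a minimal sympy sketch (keeping c explicit; the symbols T_xp and T_tp standing for \partial T^{'} / \partial x^{'} and \partial T^{'} / \partial t^{'} are my own shorthand). With c restored, the invariant combination is \frac{1}{c^{2}} \left( \frac{\partial T}{\partial t} \right)^{2} - \left( \frac{\partial T}{\partial x} \right)^{2} , which reduces to the expression above when c = 1 :

```python
import sympy as sp

v, c = sp.symbols('v c', positive=True)
T_xp, T_tp = sp.symbols('T_xp T_tp')   # stand-ins for dT'/dx' and dT'/dt'
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Chain rule under the boost: unprimed derivatives in terms of primed ones
dTdx = gamma * (T_xp - (v / c**2) * T_tp)   # dT/dx
dTdt = gamma * (T_tp - v * T_xp)            # dT/dt

# v and gamma drop out, leaving the frame-independent T_tp**2/c**2 - T_xp**2
print(sp.simplify(dTdt**2 / c**2 - dTdx**2))
```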

** There is a very important reason why this quantity did not work out. This post was inspired in part by Michael Brown’s answer on Stack Exchange. I encourage the interested reader to check that post for a detailed explanation.

Ansatz to Gram-Schmidt Orthonormalization

The Gram–Schmidt process is a method for orthonormalizing a set of vectors in an inner product space, and an easy way to remember it is through an ansatz:

Let |v_{1}> , |v_{2}> , \hdots |v_{n}>  be a set of normalized basis vectors that we would also like to make orthogonal. We will call |v_{1}^{'}> , |v_{2}^{'}> , \hdots |v_{n}^{'}>  the orthonormalized set of basis vectors formed out of  |v_{1}> , |v_{2}> , \hdots |v_{n}> .

Let’s start with the first vector:

|v_{1}^{'} > = |v_{1}> 

Now we construct a second vector |v_{2}^{'}> out of |v_{1}^{'}> and |v_{2}> :

|v_{2}^{'} > = |v_{2}> - \lambda |v_{1}^{'}>

But |v_{2}^{'}> must be orthogonal to |v_{1}^{'}> , i.e. <v_{1}^{'}|v_{2}^{'}> = 0 . Taking the inner product with <v_{1}^{'}| and using <v_{1}^{'}|v_{1}^{'}> = 1 :

<v_{1}^{'}|v_{2}^{'} > = <v_{1}^{'}|v_{2}> - \lambda <v_{1}^{'}|v_{1}^{'}>

0 = <v_{1}^{'}|v_{2}> - \lambda 

\lambda = <v_{1}^{'} | v_{2}>

Therefore we get the following expression for |v_{2}^{'}> ,

|v_{2}^{'} > = |v_{2}> -  <v_{1}^{'} | v_{2} >|v_{1}^{'}>

which, upon normalization, becomes:

|v_{2}^{'} > \rightarrow \frac{|v_{2}^{'} >}{\sqrt{<v_{2}^{'} |v_{2}^{'} >}}

 

 

That might have seemed trivial geometrically, but this process generalizes to any complete n-dimensional vector space. Let’s continue Gram–Schmidt for the third vector by choosing |v_{3}^{'} > of the following form:

|v_{3}^{'} > = |v_{3}> - \lambda_{1} |v_{1}^{'}> - \lambda_{2} |v_{2}^{'}>

Demanding orthogonality to |v_{1}^{'}> and |v_{2}^{'}> as before, the values of \lambda_{1} and \lambda_{2} are found to be:

\lambda_{1} =  <v_{1}^{'}|v_{3}>

\lambda_{2}  = <v_{2}^{'}|v_{3}>

Therefore we get,

|v_{3}^{'} > = |v_{3}> - <v_{1}^{'}|v_{3}>|v_{1}^{'}> - <v_{2}^{'}|v_{3}>|v_{2}^{'}> , or

|v_{3}^{'} > = |v_{3}> -  \sum\limits_{j=1,2} <v_{j}^{'} | v_{3}> |v_{j}^{'}>

which is then normalized:

|v_{3}^{'} > \rightarrow \frac{|v_{3}^{'} >}{\sqrt{<v_{3}^{'} |v_{3}^{'} >}}

 

Generalizing, we obtain:

|v_{i}^{'} > = |v_{i}> -  \sum\limits_{j=1,2,...,i-1} <v_{j}^{'} | v_{i}> |v_{j}^{'}> 

|v_{i}^{'} > \rightarrow \frac{|v_{i}^{'} >}{\sqrt{<v_{i}^{'} |v_{i}^{'} >}}

Now, although you would never need to memorize the above expression, because you can derive it off the bat with the above procedure, it is essential to understand how it comes about.
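For the numerically inclined, here is a minimal numpy sketch of the procedure above (real vectors, with np.dot as the inner product; the function name and the test vectors are my own choices):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize linearly independent vectors."""
    basis = []
    for v in vectors:
        # subtract the projections onto the vectors built so far
        w = v - sum(np.dot(u, v) * u for u in basis)
        # normalize, as in the ansatz above
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# Three linearly independent vectors in R^3
vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]

Q = gram_schmidt(vecs)
print(np.round(Q @ Q.T, 10))   # identity matrix -> rows are orthonormal
```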

Cheers!

 

Example (to be added soon):

 

Using Complex numbers in Classical Mechanics

When one is solving problems on the two-dimensional plane using polar coordinates, it is always a challenge to remember the velocity and acceleration components in the radial and angular directions (v_r , v_{\theta}, a_r, a_{\theta} ). Here’s one failsafe way, using complex numbers, that makes things really easy:

z = re^{i \theta}

\dot{z} = \dot{r}e^{i \theta} + ir\dot{\theta}e^{i \theta} = (\dot{r} + ir\dot{\theta} ) e^{i \theta}

From the above expression, we can obtain v_r = \dot{r} and v_{\theta} = r\dot{\theta}

\ddot{z} =  (\ddot{r} + ir\ddot{\theta} + i\dot{r}\dot{\theta} ) e^{i \theta}   + (\dot{r} + ir\dot{\theta} )i \dot{\theta} e^{i \theta} 

\ddot{z} =  (\ddot{r} + ir\ddot{\theta} + i\dot{r}\dot{\theta}  + i  \dot{r} \dot{\theta} - r\dot{\theta}\dot{\theta} )e^{i \theta} 

\ddot{z} =  (\ddot{r} - r(\dot{\theta})^2+ i(r\ddot{\theta} + 2\dot{r}\dot{\theta} ) )e^{i \theta} 

From this we can obtain a_r = \ddot{r} - r(\dot{\theta})^2 and a_{\theta} = (r\ddot{\theta} + 2\dot{r}\dot{\theta}) with absolute ease.
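If you’d like to double-check the algebra, here is a minimal sympy sketch (my own addition); dividing out e^{i \theta} exposes the components directly:

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = sp.Function('r', real=True)(t)
theta = sp.Function('theta', real=True)(t)

z = r * sp.exp(sp.I * theta)

# Factor out e^(i*theta): the real part is the radial component,
# the imaginary part is the angular component
v = sp.expand(sp.diff(z, t) / sp.exp(sp.I * theta))
a = sp.expand(sp.diff(z, t, 2) / sp.exp(sp.I * theta))

print(v)   # r' + I*r*theta'
print(a)   # r'' - r*theta'**2 + I*(r*theta'' + 2*r'*theta')
```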

This is something I realized only after my mechanics course in college was done and dusted, but it is nevertheless a really cool and interesting place where complex numbers come in handy!

 

 

Beautiful Proofs (#3): Area under a sine curve!

So, I read this post on the area of the sine curve some time ago, and at the bottom was this equally amazing comment:


\sum \sin(\theta) \, d\theta =  the diameter of the circle, i.e. the distance covered along the x-axis by a point on the circle as \theta runs from 0 to \pi .

And therefore by the same logic, it is extremely intuitive to see why:

\int\limits_{0}^{2\pi} \sin(x) \, dx = \int\limits_{0}^{2\pi} \cos(x) \, dx = 0

Because if a dude starts at 0 and ends up at 0, 2\pi, 4\pi, \hdots , the net distance that he covers along the axis is 0.

(Animation: a point moving around the unit circle, tracing out the sine and cosine curves.)

If you still have trouble understanding, follow the blue point in the above gif and hopefully things become clearer.
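If you want to see the bookkeeping explicitly, here is a quick sympy check (my own, not part of the original comment):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sin(x), (x, 0, sp.pi)))      # 2: the diameter
print(sp.integrate(sp.sin(x), (x, 0, 2 * sp.pi)))  # 0: back where we started
print(sp.integrate(sp.cos(x), (x, 0, 2 * sp.pi)))  # 0
```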

 

nth roots of unity: A geometric approach

When one is dealing with complex numbers, it is often useful to think of them as transformations. The problem at hand is to find the nth roots of unity, i.e. to solve

z^n = 1

Multiplication as a Transformation

Multiplication in the complex plane is mere rotation and scaling, i.e. with

z_{1} = r_{1}e^{i\theta_{1}}, z_{2} = r_{2}e^{i\theta_{2}} 

z_{1}z_{2} = \underbrace{r_{1} r_{2}}_{scaling} \underbrace{e^{i(\theta_{1} + \theta_{2})}}_{rotation}
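A tiny numeric illustration of this (the numbers are my own choice): multiplying 1 + i by itself adds the 45^{\circ} angles and multiplies the moduli.

```python
import numpy as np

z1 = 1 + 1j                      # modulus sqrt(2), angle 45 degrees
z2 = z1 * z1                     # moduli multiply, angles add

print(abs(z2))                   # 2.0  (= sqrt(2) * sqrt(2))
print(np.degrees(np.angle(z2)))  # 90.0 (= 45 + 45)
```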

Now, what does finding the nth roots of unity mean?

If you start at 1 and perform n equal rotations (because multiplication is nothing but rotation + scaling), you should end up at 1 again.

We just need to find the complex numbers that do this, i.e.

z^n = 1

\underbrace{zz \hdots z}_{n} = 1

z = re^{i\theta}

r^{n}e^{i(\theta + \theta + \hdots + \theta)} = 1 \cdot e^{2\pi k i}

r^{n}e^{in\theta} =1e^{2\pi k i}

This implies that :

\theta = \frac{2\pi k}{n}, \quad r = 1, \qquad k = 0, 1, \hdots, n-1

And therefore :

z = e^{\frac{2\pi k i}{n}}

Take a circle, slice it into n equal parts, and voilà: you have your n roots of unity.
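A quick numerical check (a numpy sketch of my own, with n = 5 as an arbitrary choice):

```python
import numpy as np

n = 5
k = np.arange(n)
roots = np.exp(2j * np.pi * k / n)   # z_k = e^(2*pi*i*k/n)

print(np.allclose(roots**n, 1))      # True: every root satisfies z^n = 1
print(np.round(roots, 3))            # n equally spaced points on the unit circle
```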

(Figure: the n roots of unity as n equally spaced points on the unit circle.)

Okay, but what does this imply?

Multiplication by 1 is a 0^{\circ} (or 360^{\circ} ) rotation.


When you multiply a positive real number (say 1) by 1, you get a number (1) that lies on the same positive real axis.

Multiplication by -1 is a 180^{\circ} rotation.


When you multiply a positive real number (say 1) by -1, you get a number (-1) that lies on the negative real axis.

The act of multiplying 1 by -1 has resulted in a 180^{\circ} rotation. And doing it again gets us back to 1.

Multiplication by i is a 90^{\circ} rotation.


Similarly, multiplying by i takes 1 from the real axis to the imaginary axis, which is a 90^{\circ} rotation.

This applies to -i as well (a -90^{\circ} rotation).

That’s about it! That’s what the nth roots of unity mean geometrically. Have a good one!

 

Why is the area under one hump of a sine curve exactly 2?

Reblogged from Girls' Angle:


I was talking with a student recently who told me that he always found the fact that \int_0^{\pi} \sin x \, dx = 2 amazing. “How is it that the area under one hump of the sine curve comes out exactly 2?” He asked me if there is an easy way to see that, or is it something you just have to discover by doing the computation.

If you’ve wondered about this too, perhaps you’ll find the following of interest.
