# Feynman’s trick applied to Contour Integration

A friend of mine was the TA for a graduate-level math course for physicists, and one exercise in that course was to solve integrals using contour integration. Just for fun, I decided to mess with him by trying to solve all the contour-integration problems in the prescribed textbook for the course [exercise 11.8 of Arfken and Weber's 'Mathematical Methods for Physicists', 7th edition] using anything BUT contour integration.

You can solve a lot of them exclusively by using Feynman's trick. (If you would like to know what the trick is, here is an introductory post.) The following are my solutions:

All solutions in one pdf

Arfken-11.8.1

Arfken-11.8.2

Arfken-11.8.3

Arfken-11.8.4*

Arfken-11.8.5

Arfken-11.8.6 & 7 – not applicable

Arfken-11.8.8

Arfken-11.8.9

Arfken-11.8.10

Arfken-11.8.11

Arfken-11.8.12

Arfken-11.8.13

Arfken-11.8.14

Arfken-11.8.15

Arfken-11.8.16

Arfken-11.8.17

Arfken-11.8.18

Arfken-11.8.19

Arfken-11.8.20

Arfken-11.8.21 & Arfken-11.8.23* (Hint: Use 11.8.3)

Arfken-11.8.22

Arfken-11.8.24

Arfken-11.8.25*

Arfken-11.8.26

Arfken-11.8.27

Arfken-11.8.28

*I forgot how to solve these starred problems without using contour integration, but I will update them when I remember how to do them. If you would like, you can take these to be challenge problems; if you solve them before I do, send an email to 153armstrong(at)gmail.com and I will link the solution to your page. Cheers!

Once when lecturing in class, Lord Kelvin used the word 'mathematician' and then, interrupting himself, asked his class: 'Do you know what a mathematician is?'

Stepping to his blackboard, he wrote upon it the equation

$\int\limits_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}$

Then, putting his finger on what he had written, he turned to his class and said, 'A mathematician is one to whom that is as obvious as that twice two makes four is to you.'

## Two interesting ways to arrive at the Gaussian Integral

Woah… The backlash that Lord Kelvin got after this post was just phenomenal.

There are many ways to obtain this integral (click here to know about other methods), but here are two interesting ways to arrive at the Gaussian integral that you may or may not have seen, and that may or may not be easy to follow.

## Gamma Function to the rescue

If you know about factorials ($5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1$), you know that they make sense only for non-negative integers.

But the Gamma function extends this to non-integer values. Its integral form allows you to calculate factorial values such as (½)!, (¾)! and so on.

The same can be used to evaluate the Gaussian Integral as follows:
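A sketch of that route, using the standard substitution $t = x^2$ (note that $\Gamma(\frac{1}{2}) = \sqrt{\pi}$ has to be obtained independently here, e.g. from the reflection formula $\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}$ at $z = \frac{1}{2}$):

$\Gamma\left(\frac{1}{2}\right) = \int\limits_0^{\infty} t^{-\frac{1}{2}} e^{-t} dt = \int\limits_0^{\infty} \frac{e^{-x^2}}{x} \, 2x \, dx = 2\int\limits_0^{\infty} e^{-x^2} dx$

And therefore:

$\int\limits_{-\infty}^{\infty} e^{-x^2} dx = \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$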

## Differentiating under the Integral sign

In this technique, known as 'differentiating under the integral sign', you choose a parametrised integral whose boundary values are easy integrals to evaluate.

Here those boundary values are $I(0)$ and $I(\infty)$; you then differentiate with respect to a parameter $\beta$ instead of the variable $x$ to obtain the result.
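One standard choice with easy boundary values is the following pair (a sketch; the names $I$ and $J$ are mine):

$I(\beta) = \left(\int\limits_0^{\beta} e^{-t^2} dt\right)^2 \qquad J(\beta) = \int\limits_0^{1} \frac{e^{-\beta^2(1+x^2)}}{1+x^2} dx$

Differentiating both and substituting $u = \beta x$ in $J'$ shows that $I'(\beta) + J'(\beta) = 0$, so $I(\beta) + J(\beta) = I(0) + J(0) = \frac{\pi}{4}$. Letting $\beta \to \infty$ kills $J$, leaving $I(\infty) = \frac{\pi}{4}$, i.e. $\int\limits_0^{\infty} e^{-t^2} dt = \frac{\sqrt{\pi}}{2}$.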


# Beautiful Proofs (#3): Area under a sine curve!

So, I read this post on the area of the sine curve some time ago, and at the bottom was this equally amazing comment:

$\sum \sin(\theta)\,d\theta =$ the diameter of the circle, i.e. the distance covered along the $x$ axis starting from $0$ and ending up at $\pi$.

And therefore by the same logic, it is extremely intuitive to see why:

$\int\limits_{0}^{2\pi} \sin(x)\, dx = \int\limits_{0}^{2\pi} \cos(x)\, dx = 0$

Because if a dude starts at $0$ and ends up at $0, 2\pi, 4\pi, \hdots$, the effective distance that he covers is 0.

If you still have trouble understanding, follow the blue point in the above gif and hopefully things become clearer.

# Why is the area under one hump of a sine curve exactly 2?

I was talking with a student recently who told me that he always found the fact that $\int_0^{\pi} \sin x \, dx = 2$ amazing. "How is it that the area under one hump of the sine curve comes out exactly 2?" He asked me if there is an easy way to see that, or is it something you just have to discover by doing the computation.
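For reference, the computation itself is a one-liner once you have the antiderivative:

$\int\limits_0^{\pi} \sin x \, dx = \left[-\cos x\right]_0^{\pi} = -\cos\pi + \cos 0 = 1 + 1 = 2$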


# A strange operator

In a previous post on using Feynman's trick for discrete calculus, I used a very strange operator ($\triangledown$), whose function is the following:

$\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}$

What is this operator? Well, to be quite frank, I am not sure of its name, but I used it as an analogue of integration, i.e.

$\int x^{n} = \frac{x^{n+1}}{n+1} + C$

What are the properties of this operator? Let's use the known fact that $n^{\underline{k+1}} = (n-k) n^{\underline{k}}$:

$\triangledown n^{\underline{k}} = \frac{n^{\underline{k+1}}}{k+1}$

$\triangledown n^{\underline{k}} = \frac{(n-k) n^{\underline{k}}}{k+1}$

And applying the operator twice yields:

$\triangledown^2 n^{\underline{k}} = \frac{n^{\underline{k+2}}}{(k+1)(k+2)}$

$\triangledown^2 n^{\underline{k}} = \frac{(n-k-1) n^{\underline{k+1}}}{(k+1)(k+2)}$

$\triangledown^2 n^{\underline{k}} = \frac{(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)}$

We can clearly see a pattern emerging from this already, applying the operator once more :

$\triangledown^3 n^{\underline{k}} = \frac{(n-k-2)(n-k-1)(n-k) n^{\underline{k}}}{(k+1)(k+2)(k+3)}$

$\vdots$

Or in general, the operator that has the characteristic prescribed in the previous post is the following:

$\triangledown^m n^{\underline{k}} = \frac{n^{\underline{k+m}}}{(k+m)^{\underline{m}}}$
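Since $\triangledown$ plays the role of integration, the forward difference $\Delta f(n) = f(n+1) - f(n)$ should undo it, just as $\frac{d}{dx}$ undoes $\int$. A quick numerical check of that property (a sketch in Python; the helper names are mine, not from the post):

```python
from math import prod

def falling(n, k):
    """Falling factorial n^(k) = n (n-1) ... (n-k+1)."""
    return prod(n - i for i in range(k))

def nabla(n, k):
    """The post's operator: sends n^(k) to n^(k+1) / (k+1)."""
    return falling(n, k + 1) / (k + 1)

# The forward difference of nabla(n, k) in n gives back n^(k),
# mirroring d/dx of x^(n+1)/(n+1) giving back x^n:
for n in range(2, 10):
    for k in range(4):
        assert nabla(n + 1, k) - nabla(n, k) == falling(n, k)
```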

If you guys are aware of the name of this operator, do ping me!


# Inverse of an Infinite matrix

$\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & \hdots \\ 0 & 0 & 2 & 0 & 0 & 0 & \hdots \\ 0 & 0 & 0 & 3 & 0 & 0 & \hdots \\ 0 & 0 & 0 & 0 & 4 & 0 & \hdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix}$

What is the inverse of the above matrix ? I would strongly suggest that you think about the above matrix and what its inverse would look like before you read through.

On the face of it, it is indeed startling to even think of an inverse of an infinite-dimensional matrix. But the only reason this matrix seems weird is that I have presented it out of context.

You see, the popular name of the matrix is the Differentiation Matrix and is commonly denoted as $D$.

The differentiation matrix is a beautiful matrix and we will discuss all about it in some other post, but in this post let's talk about its inverse. The inverse of the differentiation matrix is (as you might have guessed) the Integration Matrix ($I^*$):

$I^* = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & \hdots \\ 1 & 0 & 0 & 0 & 0 & 0 & \hdots \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & \hdots \\ 0 & 0 & \frac{1}{3} & 0 & 0 & 0 & \hdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{bmatrix}$

And it can be easily verified that $DI^* = I$, where $I$ is the Identity matrix. (Note that $I^*D \neq I$: differentiation kills constants, so $I^*$ is only a right inverse of $D$.)
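One quick way to convince yourself is to truncate both matrices to $N \times N$ and multiply them (a sketch in plain Python; the truncation size is my own choice, and the last row of the product is a truncation artifact, not a property of the infinite matrices):

```python
N = 6  # truncation size; any N shows the same pattern

# D: the differentiation matrix, superdiagonal 1, 2, 3, ...
D = [[(j == i + 1) * (i + 1) for j in range(N)] for i in range(N)]

# I*: the integration matrix, subdiagonal 1, 1/2, 1/3, ...
Istar = [[(i == j + 1) / (j + 1) for j in range(N)] for i in range(N)]

# P = D @ I*
P = [[sum(D[i][k] * Istar[k][j] for k in range(N)) for j in range(N)]
     for i in range(N)]

# Every row except the last (lost to truncation) matches the identity:
for i in range(N - 1):
    assert P[i] == [float(i == j) for j in range(N)]
```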

Lesson learned: Infinite dimensional matrices can have inverses. 😀

# On the beauty of Parametric Integration and the Gamma function

Parametric integration is one of those techniques that, once you are made aware of it, you will never for the love of god forget. Let me demonstrate:

Now this integral might seem familiar to many of you, and it is rather simple to evaluate as well:

$\int\limits_0^{\infty} e^{-sx} dx = \frac{1}{s}$

Knowing this, you can do lots of crazy stuff. Let's differentiate this expression with respect to the parameter in the integral, $s$ (hence the name parametric integration), i.e.

$\frac{d}{ds}\int\limits_0^{\infty} e^{-sx} dx = \frac{d}{ds}\left(\frac{1}{s}\right)$

$\int\limits_0^{\infty} -x e^{-sx} dx = -\frac{1}{s^2}$

$\int\limits_0^{\infty} x e^{-sx} dx = \frac{1}{s^2}$

Look at that: by simple differentiation we have obtained the expression for another integral. How cool is that! It gets even better.
Let's differentiate it once more:

$\int\limits_0^{\infty} x^2 e^{-sx} dx = \frac{2 \cdot 1}{s^3}$

$\int\limits_0^{\infty} x^3 e^{-sx} dx = \frac{3 \cdot 2 \cdot 1}{s^4}$

$\vdots$

If you keep on differentiating the expression $n$ times, you get this:

$\int\limits_0^{\infty} x^n e^{-sx} dx = \frac{n!}{s^{n+1}}$

Now, substituting $s = 1$, we obtain the following integral expression for the factorial. This is known as the gamma function:

$\int\limits_0^{\infty} x^n e^{-x} dx = n! = \Gamma(n+1)$
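As a numerical sanity check of the identity above (a sketch; the crude trapezoidal integrator, its cutoff, and its step count are my own choices, not from the post):

```python
from math import exp, factorial

def parametric_integral(n, s, upper=60.0, steps=200_000):
    """Trapezoidal estimate of the integral of x^n e^(-sx) from 0 to
    infinity, truncated at `upper` (the tail beyond it is negligible)."""
    dx = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * dx
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid end weights
        total += w * x**n * exp(-s * x)
    return total * dx

# The n-times-differentiated identity: integral = n! / s^(n+1)
assert abs(parametric_integral(3, 2.0) - factorial(3) / 2.0**4) < 1e-4

# s = 1 gives the gamma-function form: integral = n!
assert abs(parametric_integral(4, 1.0) - factorial(4)) < 1e-3
```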

There are lots of ways to derive the above expression for the gamma function, but parametric integration is, in my opinion, the most subtle way to arrive at it. 😀