How to photograph shock waves?


This week NASA released the first-ever image of shock waves interacting between two supersonic aircraft. It’s a stunning effort, requiring a cutting-edge version of a century-old photographic technique and perfect coordination between three airplanes – the two supersonic Air Force T-38s and the NASA B-200 King Air that captured the image. The T-38s are flying in formation, roughly 30 ft apart, and the interaction of their shock waves is distinctly visible. The otherwise straight lines curve sharply near their intersections. 

Fully capturing this kind of behavior in ground-based tests or in computer simulation is incredibly difficult, and engineers will no doubt be studying and comparing every one of these images with those smaller-scale counterparts. NASA developed this system as part of their ongoing project for commercial supersonic technologies. (Image credit: NASA Armstrong; submitted by multiple readers)

How do these images get captured?

It may not be obvious how this image was generated, because if you have heard of Schlieren imaging, the setup you probably have in your head looks something like this:


But how does Schlieren photography scale up to capturing moving objects in the sky?

Heat Haze

When you view objects through the exhaust gases emanating from an aircraft's nozzle, the image appears distorted.


Hot air is less dense than cold air.

This creates a gradient in the refractive index of the air.

Light passing through the gradient gets bent, distorting the image.
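To get a feel for the size of this effect, here is a minimal sketch that estimates the refractive index of cold versus hot air. It assumes the Gladstone-Dale relation (n = 1 + K·ρ, with K ≈ 2.26×10⁻⁴ m³/kg for air) and ideal-gas densities; the temperatures chosen are illustrative, not measured values.

```python
# Estimate how hot air shifts the refractive index relative to cold air,
# using the Gladstone-Dale relation n = 1 + K * rho and ideal-gas densities
# rho = P / (R_air * T). Values are approximate.

K = 2.26e-4        # Gladstone-Dale constant for air, m^3/kg (approximate)
R_AIR = 287.05     # specific gas constant for dry air, J/(kg*K)
P = 101325.0       # sea-level pressure, Pa

def refractive_index(temp_kelvin):
    """Refractive index of air at the given temperature (ideal gas)."""
    rho = P / (R_AIR * temp_kelvin)
    return 1.0 + K * rho

n_cold = refractive_index(293.0)   # ~20 C ambient air
n_hot = refractive_index(600.0)    # illustrative hot exhaust plume

# Hot air is less dense, so its refractive index sits closer to 1 --
# it is the gradient between the two regions that bends the light.
print(n_cold, n_hot, n_cold - n_hot)
```

The difference is tiny (a few parts in ten thousand), which is exactly why Schlieren techniques exist: they amplify these minute refractive-index gradients into visible contrast.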


Method-01: BOSCO (Background-Oriented Schlieren using Celestial Objects)

You have the aircraft whose shock wave you would like to analyze pass in front of the sun.

You place a hydrogen-alpha filter on your ground-based telescope and observe this:


                  Notice the ripples that pass through the sunspots

The density changes caused by the aircraft bend this specific wavelength of light from the sun. This lets us see the density gradient, just as in the heat-haze case above.

We can now calculate how far each “speckle” on the sun moved, and that gives us the following Schlieren image.
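The core of that calculation is cross-correlation: compare a reference image of the background with the distorted one and find how far each patch moved. Here is a toy sketch of the idea using phase correlation on a synthetic speckle pattern; the real pipeline (many small interrogation windows, sub-pixel peak fitting, calibration) is far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "speckle" background -- a stand-in for the sun's granulation.
reference = rng.random((64, 64))

# Pretend the shock wave's density gradient displaced this patch by (3, 5).
true_shift = (3, 5)
distorted = np.roll(reference, true_shift, axis=(0, 1))

# Phase correlation: the normalized cross-power spectrum of the two
# patches transforms back to a sharp peak at the displacement.
cross_power = np.conj(np.fft.fft2(reference)) * np.fft.fft2(distorted)
cross_power /= np.abs(cross_power)
correlation = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)

# Shifts larger than half the patch wrap around to negative values.
if dy > reference.shape[0] // 2:
    dy -= reference.shape[0]
if dx > reference.shape[1] // 2:
    dx -= reference.shape[1]

print((dy, dx))  # recovered displacement: (3, 5)
```

Repeating this over a grid of small windows yields a displacement field across the whole frame, and that field is what gets rendered as the Schlieren image.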

Method-02: Airborne Background Oriented Schlieren Technique

In the previous technique, the motion of each speckle on the sun was used for imaging. But in general, any textured background pattern will do.

An aircraft carrying a camera flies above the target aircraft, like so:


The patterned ground now plays the role of the sun. Some commonly used textures are:


The difficulty in this method is the image processing that follows after the images have been taken.

And one of the main reasons the image NASA released is so spectacular is that NASA seems to have nailed the underlying processing.

Have a great day!

* More on Heat hazes

** More on BOSCO

*** Images from the following paper : Airborne Application of the Background Oriented Schlieren Technique to a Helicopter in Forward Flight

**** This post obviously oversimplifies the technique. A lot of research goes into the processing of these images. The motive of the post was to give you an idea of the method used to capture the image; the underlying science goes much deeper than this post.

Colors are nature’s way of expressing beauty. And we often find ourselves in situations where we want to capture this ecstasy. The camera rose out of this innate longing to capture and preserve these memories.


Generally, when people are on the lookout for a new phone or camera, one of the parameters they look into is the megapixel (MP) count of the camera.

2.0 MP means that there are ~2 million ‘effective’ pixels in the image that has been captured. *
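The arithmetic behind the label is simple: multiply the pixel dimensions of the image and divide by a million. The resolutions below are illustrative examples, not tied to any particular camera.

```python
# A sensor's megapixel count is just width x height in pixels, over one million.
def megapixels(width, height):
    return width * height / 1e6

# e.g. a 1600 x 1200 image has 1.92 million pixels -- marketed as "2.0 MP".
print(megapixels(1600, 1200))   # 1.92
print(megapixels(1920, 1080))   # 2.0736 -- also roughly "2 MP"
```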

But, what is a pixel?

A pixel (or picture element) is a small element on the screen that represents a single color.

But how do you represent any color? With the primary color system, of course! Add red, green and blue in varying proportions and voila! You can span the entire color spectrum. **


Therefore, every pixel consists of three ‘compartments’ (red, green and blue) that together produce the necessary color distribution of an image.
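This "varying proportions" idea is additive color mixing, and it is easy to sketch in a few lines. Each channel is an intensity from 0 to 255; lights add, so combining channels makes things brighter (the clamping to 255 here is just a simplification of how real displays saturate):

```python
# Additive mixing of RGB colors: light sources add, so combining
# channels brightens. Each channel is clamped to the 0-255 range.
def mix(*colors):
    """Additively mix RGB triples, clamping each channel to 255."""
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(mix(red, green))         # (255, 255, 0)  -> yellow
print(mix(red, green, blue))   # (255, 255, 255) -> white
```

Note how red plus green gives yellow, and all three at full intensity give white; with paint you would get something closer to brown. That is the additive/subtractive distinction the second footnote hints at.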

The subtlety of a screen

Wait!! Hold on, are you saying that there are millions of red, green and blue lights on my screen?

Don’t believe me? Take a look at these images of a smartphone screen under 30x and 60x magnification.


                  One RGB block is called a pixel. Video Source : Microworld


Now, this array-type arrangement is not necessarily the case with all manufacturers.

In fact, many manufacturers have their own unique subpixel layout (see below), and the layout varies with the application as well.


                                Photo credit: Peter Halasz. (User:Pengo)

If you have a tough time believing that a set of RGB lights flashing on a screen can project a crisp image, then try this out:

Turn an excel sheet into an image

At the fundamental level, yes! It is merely a set of lights.

But once you start stacking a lot of these pixels next to one another in a grid (2 million of them for a 2.0 MP camera!), you can start to see how a beautiful image emerges.
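You can see the same emergence in miniature with nothing but a grid of numbers, in the same spirit as the excel-sheet trick. Here is a tiny toy example that renders an 8x8 grid of brightness values as characters (the grid values and character ramp are made up for illustration):

```python
# A tiny "image" as a grid of brightness values (0 = dark, 9 = bright).
grid = [
    [0, 0, 9, 9, 9, 9, 0, 0],
    [0, 9, 0, 0, 0, 0, 9, 0],
    [9, 0, 9, 0, 0, 9, 0, 9],
    [9, 0, 0, 0, 0, 0, 0, 9],
    [9, 0, 9, 0, 0, 9, 0, 9],
    [9, 0, 0, 9, 9, 0, 0, 9],
    [0, 9, 0, 0, 0, 0, 9, 0],
    [0, 0, 9, 9, 9, 9, 0, 0],
]

shades = " .:-=+*#%@"   # ten "brightness levels", dark to bright

# Map each value to a character and join the rows into a picture.
picture = "\n".join("".join(shades[v] for v in row) for row in grid)
print(picture)  # a crude face emerges from nothing but numbers
```

Scale the same idea up from 64 cells to millions of RGB pixels and you have a photograph.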


Convert any image to an excel sheet here and explore!

To think that there are millions of pixels on the screen rendering the plethora of images that I behold every day BLOWS my mind out of proportion ;D
Have a great day!

* Do more megapixels mean better picture quality? Sort of, but not always!

** What is additive color mixing? It’s not the same as mixing paint!