DARPA developing personal LWIR cameras to give soldiers heat vision
The five-micron LWIR camera being developed by DARPA to provide individual soldiers with thermal imaging capabilities
With their ability to pick out humans by their heat signatures, long-wave infrared (LWIR) thermal imaging cameras are a valuable asset for soldiers – and alien predators. Unfortunately, non-alien-built ones are expensive and so large that they need to be mounted on vehicles. In an effort to make an LWIR camera cheap and small enough for an individual soldier to carry, DARPA is working on a five-micron camera that offers a reduced size without sacrificing performance.
Developed in association with DRS Technologies, Inc., the five-micron LWIR camera uses a 1280 x 720 focal plane array (FPA), which is a relatively high resolution for an infrared camera. Yet it’s smaller than conventional cameras because, at five microns across, each pixel is about one twelfth the width of a human hair and around one sixth the size of current state-of-the-art devices. In a first for an infrared camera, the pixels are also about half the size of the photons it detects.
The approach it takes is similar to that of a phone camera, which also uses smaller pixels to provide higher density in a compact package. By using smaller pixels, more can be placed on a single chip while maintaining the same level of sensitivity, resolution and field of view. And since the cost of FPAs is proportional to chip area, they are also cheaper.
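Since the cost argument rests on chip area, a quick back-of-the-envelope comparison illustrates it. This sketch is my own arithmetic, not from the article; the 12-micron "conventional" pitch is an illustrative assumption (chosen because roughly six 5-micron pixel areas fit in one 12-micron pixel area, matching the "one sixth the size" figure above):

```python
# Back-of-the-envelope sketch (illustrative, not from DARPA): compare the
# active die area of a 1280 x 720 LWIR focal plane array at the article's
# 5-micron pitch vs an assumed conventional 12-micron pitch.
COLS, ROWS = 1280, 720

def active_area_mm2(pitch_um: float) -> float:
    """Active sensor area in mm^2 for a COLS x ROWS array at a given pixel pitch."""
    return (COLS * pitch_um / 1000.0) * (ROWS * pitch_um / 1000.0)

small = active_area_mm2(5.0)    # 6.4 mm x 3.6 mm
large = active_area_mm2(12.0)   # 15.36 mm x 8.64 mm
print(f"5 um pitch:  {small:.1f} mm^2")   # -> 23.0 mm^2
print(f"12 um pitch: {large:.1f} mm^2")   # -> 132.7 mm^2
print(f"area (and, roughly, cost) ratio: {large / small:.2f}x")  # -> 5.76x
```

If FPA cost really does scale with die area, the smaller pitch buys nearly a six-fold cost reduction for the same pixel count and field of view.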
According to Nibir Dhar, DARPA Program Manager, “DRS built three fully functional prototypes as part of this DARPA work. The cameras have been tested for various applications, including peering through particles in the air, which would be useful for helicopters landing in brownout conditions. We have found that the image is crisp and the performance of these FPAs is comparable to those with much larger pixel sizes.”
The LWIR camera was developed under the Advanced Wide FOV Architectures for Image Reconstruction and Exploitation (AWARE) program, which is also responsible for a gigapixel camera prototype.
About the Author
David Szondy is a freelance writer based in Monroe, Washington. An award-winning playwright, he has contributed to Charged and iQ magazine and is the author of the website Tales of Future Past.
Quote: "By using smaller pixels, more can be placed on a single chip while maintaining the same level of sensitivity, resolution and field of view."
This is not true. Or at least, it's grossly misleading. Somebody got something garbled, somewhere.
Smaller pixels may have similar "sensitivity", but sensitivity is far from the whole story. Smaller pixels HAVE TO be more sensitive, because they collect fewer photons. But in exchange for that sensitivity, you get noise and false positives, or "hot spots".
The size of the receptor array is directly related to this phenomenon. That's why a 5MP picture from a consumer-grade camera looks decent, while an equally genuine 5MP picture from a phone will look crude and grainy: the array in the dedicated camera is far larger than the one in the phone.
Granted, this is DARPA and MAYBE they have ways to work around it (for a lot of money), but generally speaking, smaller arrays are worse, not better.
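The shot-noise tradeoff this commenter describes can be sketched in a few lines of Python. These are illustrative, assumed numbers (the 12-micron reference pitch is not from the article or the comment): photon counts scale with pixel area, and photon shot-noise SNR scales as the square root of the count.

```python
import math

# Sketch of the shot-noise argument (illustrative assumptions, not
# measured values): photons collected scale with pixel area, and the
# photon shot-noise SNR scales as sqrt(photon count).
def relative_snr(pitch_um: float, ref_pitch_um: float = 12.0) -> float:
    """SNR of a pixel at pitch_um relative to one at ref_pitch_um,
    assuming equal exposure time and quantum efficiency."""
    photon_ratio = (pitch_um / ref_pitch_um) ** 2  # count scales with area
    return math.sqrt(photon_ratio)                 # SNR ratio = sqrt(counts)

print(f"5 um pixel vs 12 um pixel: {relative_snr(5.0):.2f}x the SNR")  # -> 0.42x
```

Under these simplifying assumptions, a 5-micron pixel delivers well under half the per-pixel SNR of a 12-micron one, which is why shrinking pixels without losing image quality is a genuine engineering feat rather than a free win.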
I dunno, my pictures from my iPhone don't look grainy; they look the same as from a regular camera. The difference to me seemed to be that a regular camera is more versatile in different lighting and often has mechanical zoom.
That being said, I'm no expert on cameras; there could be a difference that isn't easy to see. My phone does really well outside and not as well inside, but really I wouldn't say it's grainy.
That's the point: the cameras with the smaller image elements suffer from "noise", especially in low-light conditions.
Many people describe this as looking "grainy" but technically grainy means low resolution. But this isn't actually low resolution, it's noise. You can see it most easily when you blow the pictures up: you will see lots of pixels that look like they are more-or-less random colors.
There was an article right here on Gizmag yesterday about a new consumer camera that now uses a LARGER image sensor, and calls it a good thing. I didn't see it until after I wrote the comment above. But why is it better? For the very reasons I gave in that comment.
There are more factors at play in the IR spectrum which have a direct effect on the 'quality' of the resultant picture. A CMOS sensor doesn't behave in the same manner as an MCT LWIR or an InGaAs SWIR sensor, and they shouldn't be compared on a global apples-to-apples basis. Things like NETD, NEI, quantum efficiency, filter characteristics, non-uniformity corrections, etc. will (nearly) all factor into the equation. Here's a quick overview.
Thanks Herman, very informative.
At the end of the second paragraph David writes,
"In a first for an infrared camera, the pixels are also about half the size of the photons it detects."
It seems you are trying to relate the size of a pixel to the wavelength of a detected photon.
The above phrasing contains the implication that a photon has a nonzero or measurable size.
I don't think that's what you meant; but if it is, I'd like to know the size of a photon.
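One plausible reading, and it is only an assumption on my part, is that "size of the photons" refers to the wavelength of the detected light. The LWIR band spans roughly 8 to 14 microns, so a 5-micron pixel is about half a mid-band wavelength across:

```python
# Assumption (not stated in the article): "size of the photons" means the
# wavelength of the detected light. LWIR covers roughly 8-14 microns;
# 10 microns is used here as an illustrative mid-band value.
PIXEL_PITCH_UM = 5.0
LWIR_WAVELENGTH_UM = 10.0
print(f"pixel pitch / wavelength = {PIXEL_PITCH_UM / LWIR_WAVELENGTH_UM:.2f}")  # -> 0.50
```

On that reading the article's claim is about sub-wavelength pixels, not about photons having a physical size.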
No mention of price points, though pricing is often a long-term projection. Take a look at FLIR products and one can see that there is a large market for a low-cost option, irrespective of the 'graininess'.
I'd settle for half that resolution for more sensitivity or a sensor capable of capturing near and far infrared on demand.
Besides, on a helmet cam when you are walking, even 320 x 200 resolution is acceptable. You only want to get a fair idea of the enemy's shape.
Also, lowering the resolution might result in a smaller overall product.