Tuesday 23 April 2013

Image Processor Makes for Better Photos and Performance

Today’s cameras and smartphones can remove common photographic flaws caused by limited dynamic range, which keeps shadow and highlight details from registering and puts noise speckles in darker image areas. But the software-based processors used for this can cut battery life and add annoying pauses between shots.

One solution to both these problems is an exceptionally fast and efficient image-processor chip optimized for low-power mobile devices. Graduate Student Member Rahul Rithe discussed the chip in his paper, “Reconfigurable Processor for Energy-Scalable Computational Photography,” at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco in February.

Rithe, the lead author of the paper, is a doctoral student in MIT’s Department of Electrical Engineering and Computer Science. The paper was coauthored by MIT research scientist and Member Daniel Ickes; fellow graduate student and Student Member Priyanka Raina; IEEE Fellow Anantha Chandrakasan, who heads the department; and undergraduate research intern Srikanth Tenneti (now at Caltech).

“We wanted to build a high-performance system in hardware that would enhance camera images while consuming considerably less power than other graphics processors,” Rithe says.
A major difference between this dedicated chip and other processors is the approach it takes to bilateral filtering, a nonlinear technique that smooths defects without blurring detail. While fairly common in software, it is rare in hardware because it is computation-intensive, requiring much time and power. To bypass these limitations, the chip uses a bilateral grid approach that can handle bilateral filtering efficiently. “This first hardware implementation of such filtering for photography provides real-time performance with significantly lower power consumption,” says Rithe.
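For readers who want a concrete picture, here is a minimal brute-force sketch of bilateral filtering in Python with NumPy. It is illustrative only: the window radius and sigma values are assumptions, and the chip’s hardware pipeline is of course far more sophisticated.

import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a grayscale image with values in [0, 1].

    Each output pixel is a weighted average of its neighborhood. The weight
    falls off with spatial distance (sigma_s) and with intensity difference
    (sigma_r), so pixels across a strong edge get near-zero weight: noise is
    smoothed, but edges and fine detail survive.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Spatial weights depend only on the window geometry, so compute them once.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights penalize neighbors whose intensity differs
            # from the center pixel's.
            range_w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

The two nested loops make the cost plain: every pixel touches an entire window, which is why naive bilateral filtering is slow in software and rare in hardware. The bilateral grid sidesteps this by downsampling the image into a coarse three-dimensional grid over position and intensity, filtering that small grid, and interpolating the smoothed values back out.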

ADVANTAGES
 
The chip, a working model of which was demonstrated at the conference on a circuit board of its own, enhances photos in four ways.
First, like most camera processors, it reduces image noise (randomly occurring pixels of incorrect color or brightness), a problem when photographers shoot in low light. Reducing this with ordinary filters softens detail, but by making it practical to use bilateral filters, the new chip “eliminates random noise but preserves image details,” says Rithe.
A second advantage of the new chip is its rapid processing of the details in high dynamic range (HDR) photos. Used in many computer software programs and some cameras and smartphones, HDR captures details in both very bright and very dark areas of the picture. But doing this takes time because it requires the camera to shoot a sequence of three images—one overexposed to bring out details in dark areas, one underexposed to preserve details in bright areas, and, usually, one exposed normally to optimize details between the extremes. These are then merged into a single image. Cameras with built-in HDR typically spend several seconds at this, unable to take more shots until the images are merged. The new chip takes only a few hundred milliseconds to process a 10-megapixel image, which is “fast enough for video in real time,” according to Ickes, yet it consumes “dramatically” less power than the CPUs and graphics processing units normally used for HDR.
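The merge step can be pictured as a per-pixel weighted average that favors well-exposed values in each shot. Here is a toy sketch, assuming grayscale exposures scaled to [0, 1]; the Gaussian weighting around mid-gray is an illustrative choice, not the chip’s actual algorithm.

import numpy as np

def merge_exposures(under, normal, over):
    """Blend three exposures of the same scene into one detail-rich image.

    Pixels near mid-gray (0.5) are treated as well exposed and get high
    weight; clipped shadows and blown highlights get low weight. Each region
    of the result is therefore drawn mostly from the exposure that captured
    it best.
    """
    stack = np.stack([under, normal, over])              # shape (3, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0)                       # normalize per pixel
    return (weights * stack).sum(axis=0)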

The chip’s third feature is an unusual low-light-enhancement function that increases detail in low-light pictures without destroying the ambience of the scene’s natural lighting.
“Typically, shooting in low light without flash yields fairly dark images whose details are faint, often obscured by noise,” Rithe notes. “With flash we get bright, sometimes crisper, images, but the harsh lighting destroys the original ambience.”
With the new chip, however, the camera shoots both flash and ambient-light images, then splits each of these into one layer that shows just the large-scale features of the scene and a second layer containing the fine detail. The detail layer from the flash shot, rebalanced to match the color of the ambient light, is merged with the large-feature layer from the non-flash shot. The final image preserves the feeling and color of the ambient illumination while showing greater detail. This new function, yet to be implemented, is based on algorithms from another MIT group, under IEEE Member Frédo Durand, an associate professor of computer science and engineering.
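Stripped to its essentials, that split-and-merge step looks like the sketch below, which reuses an edge-preserving smoother such as the bilateral filter above. It assumes grayscale inputs and omits the color rebalancing the article describes.

def fuse_flash_ambient(ambient, flash, smooth):
    """Combine a no-flash shot and a flash shot of the same scene.

    `smooth` is any edge-preserving filter (e.g., bilateral_filter above).
    The base (large-scale) layer of each image is its smoothed version; the
    detail layer is the residual. The result keeps the ambient shot's
    lighting (its base layer) while borrowing the sharper detail captured
    under flash.
    """
    ambient_base = smooth(ambient)
    flash_detail = flash - smooth(flash)
    return ambient_base + flash_detail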
The chip’s fourth enhancement counteracts glare from shooting into a bright sky or other light source. Normally, in a backlit subject the strong light washes out colors and reduces contrast, but the chip’s bilateral filtering corrects for this, too.
“Bilateral filtering isn’t novel,” says Rithe. “But we implemented it more efficiently by using a bilateral grid approach, which also came from Durand’s group.” The processor saves power further by relying heavily on parallelism, which allows it to operate at low frequencies (25–98 MHz) and low voltages (0.5–0.9 V) while maintaining high throughput. As a result, the chip consumes only 17.8 milliwatts per megapixel at 0.9 V, significantly less than software-based processors, which implement the same functions but consume tens of watts.
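A quick back-of-the-envelope check shows what those figures mean in practice; the 30-W software figure below is only an illustrative stand-in for “tens of watts.”

POWER_PER_MEGAPIXEL_MW = 17.8   # chip power at 0.9 V, from the paper
MEGAPIXELS = 10                 # frame size used in the article's example

chip_mw = POWER_PER_MEGAPIXEL_MW * MEGAPIXELS   # ~178 mW per frame
software_mw = 30_000                            # assumed 30-W software pipeline

print(f"chip:     {chip_mw:.0f} mW per {MEGAPIXELS}-MP frame")
print(f"software: {software_mw} mW, roughly {software_mw / chip_mw:.0f}x more")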
The current version of the chip can be used in cameras, including DSLRs (digital single-lens reflex cameras) with up to 16-megapixel resolution, and the design is scalable to even higher resolutions. It can also be used in smartphones, which is “the right place for this energy-constrained and small device,” says Ickes.
Adds Chandrakasan, “This design would be suitable for anything with energy constraints, including laptops.” 
Future chips along the lines of this one will be able to perform more—and more complex—image-processing functions with equal efficiency and speed. Rithe and his team are working on them now.

