Superresolution FAQ
A: Super-resolution is a technique to enhance the resolution of an imaging system. In this FAQ we will refer to the particular type of super-resolution which can improve resolution of digital imaging systems beyond their sensor and optics limits.
A: It looks like science fiction, but there are solid physical concepts behind the process. To be sure, there are limits to what you can achieve with super-resolution processing, and they depend on numerous factors (see "What levels of increased resolution are realistic?" for an in-depth discussion of the limits).
A: For a concise answer covering all types of super-resolution, please consult Wikipedia. Here we give a deeper explanation of one particular case: multi-image digital super-resolution. There are two key components in every digital imaging system, the sensor and the lens, and each introduces its own type of image degradation:
This is a long one. Let us model an "ideal" camera - with an ideal lens (no blur, no distortions) and a sensor completely covered by an array of pixels. Every pixel registers a signal proportional to the amount of light it receives.
Luckily, real scenes usually do not have exactly the same structure as the sensor has. To make our model more realistic, we will tilt the lines - so if in some part of the picture the edges of the lines match the edges of the pixels in the sensor, they will not match in other parts. This is how the tilted lines will be imaged by our ideal camera:
The contrast between black and white lines differs from 100% of the original contrast to none. Looks strange already, doesn't it?
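This contrast variation can be reproduced with a toy one-dimensional model (a sketch; the function names and numbers are illustrative, not taken from any real system): a black/white line pair with a period of two pixels is averaged over each pixel, and the contrast between neighbouring pixels depends on the phase of the pattern relative to the pixel grid.

```python
import numpy as np

def pixel_response(phase, subsamples=1000):
    """Average a black/white line-pair pattern (period = 2 pixels,
    each line 1 pixel wide) over one pixel at a given phase offset."""
    x = phase + np.arange(subsamples) / subsamples  # positions covered by the pixel
    pattern = np.floor(x) % 2  # 0 = black line, 1 = white line
    return pattern.mean()

def contrast(phase):
    """Contrast between two neighbouring pixels at a given pattern phase."""
    return abs(pixel_response(phase) - pixel_response(phase + 1.0))

# phase 0: each pixel sits exactly on one line -> full contrast
# phase 0.5: each pixel straddles two lines -> uniform grey, zero contrast
print(contrast(0.0), contrast(0.5))  # -> 1.0 0.0
```

Exactly as in the tilted-lines picture, the same physical pattern yields anything from full contrast to a flat grey, purely depending on where it falls relative to the pixels.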
What happens if we try to image line pairs of higher frequency? See the pictures below: the lines are visible, but they have different directions, and, moreover, thicker width - that is, lower frequency than in the original!
This is caused by so-called aliasing. The sensor, which is not able to image a pattern of frequency higher than 0.5 cycles/pixel, delivers not only lower contrast but completely wrong pictures. If the scene being imaged has a regular pattern, the artifacts are known as Moiré pattern.
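The frequency folding described above can be checked numerically. In this minimal numpy sketch (the frequencies are arbitrary example values), a sinusoid of 0.7 cycles/pixel sampled once per pixel produces exactly the same samples as one of 0.3 cycles/pixel - the sensor cannot tell them apart:

```python
import numpy as np

fs = 1.0      # sampling rate: one sample per pixel
f_true = 0.7  # cycles/pixel, above the Nyquist limit of 0.5
n = np.arange(64)

samples = np.cos(2 * np.pi * f_true * n)
# the "folded" (aliased) frequency: fs - f_true = 0.3 cycles/pixel
alias = np.cos(2 * np.pi * (fs - f_true) * n)

print(np.allclose(samples, alias))  # -> True
```

The 0.7 cycles/pixel pattern is imaged as a 0.3 cycles/pixel one - lower frequency than the original, just as with the line pairs above.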
Digital cameras usually have anti-aliasing filters in front of the sensors. Such filters prevent the appearance of aliasing artifacts, simply blurring high-frequency patterns. With the ideal anti-aliasing filter, the patterns shown above would have been imaged as a completely uniform grey field. Fortunately for us, no ideal anti-aliasing filter exists and in a real camera the aliased components are just attenuated to some degree.
A: The first step is to accurately align individual low-resolution images with sub-pixel precision.
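One common way to estimate the translation between two frames is phase correlation. The sketch below (function name and test data are illustrative) recovers an integer-pixel shift; in practice the correlation peak is further interpolated or upsampled to reach the sub-pixel precision that super-resolution requires:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking image a to image b."""
    cross_power = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    # normalize magnitudes so only phase (i.e. position) information remains
    corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation_shift(img, shifted))  # -> (3, -5)
```

Because the method works in the frequency domain, it is robust to noise and uniform illumination changes, which is why variants of it are popular for multi-frame registration.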
After the images are aligned, a number of techniques are possible, both iterative and non-iterative, complex or simple, slow or fast. What is common in all of the techniques is that information encapsulated in the aliased components is used to recover spatial frequencies beyond sensor resolution and a de-blurring is used to reverse degradation caused by the optical system.
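As an illustration of the simple, non-iterative end of that spectrum, here is a toy "shift-and-add" reconstruction. It is a sketch under idealized assumptions (exact half-pixel shifts, no blur, no noise, all helper names hypothetical): each low-resolution sample is placed back onto an upscaled grid at the position implied by its frame's sub-pixel shift.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive shift-and-add: scatter each low-res frame's samples onto
    a grid upscaled by `scale`, offset by the frame's sub-pixel shift."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w) * scale + round(dx * scale)) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)  # average where samples overlap

# simulate four frames of a high-res scene, decimated 2x at half-pixel offsets
rng = np.random.default_rng(1)
hi = rng.random((32, 32))
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [np.roll(hi, (-round(dy * 2), -round(dx * 2)), axis=(0, 1))[::2, ::2]
          for dy, dx in shifts]
sr = shift_and_add(frames, shifts, scale=2)
print(np.allclose(sr, hi))  # ideal case: exact recovery -> True
```

In this ideal case the four frames tile the high-resolution grid perfectly and the scene is recovered exactly; real data never cooperates like this, which is why practical methods add de-blurring and robust, often iterative, estimation on top.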
Of course, the real reconstruction process is much more complex due to the presence of additional complicating phenomena - sensor noise, optical blur, and motion that is not a pure translation among them.
A: It is highly variable, depending on the optical system, exposure conditions, and the post-processing applied. As a rule of thumb, you can expect a 2x increase in effective resolution from an average real-life system (see MTF measurements) using our methods. We've seen increases of up to 4x in some cases. Even higher results are possible under controlled laboratory conditions, but those are only of theoretical interest.
A: Here are some rules:
A: There are two major classes of super-resolution:
Recognition-based super-resolution tries to detect or identify certain pre-configured patterns in the low-resolution data. It has a limited application area (e.g. forensic face-detection).
Reconstruction-based super-resolution, by contrast, recovers detail directly from multiple low-resolution images. It can be dependent on or independent of a particular imaging system; the imaging-system-dependent approach has the advantage of taking into account all the characteristics of a particular system and thus produces better results.
Super-resolution methods can also be divided by source/output type:
Single-image - in this case we're talking about deblurring, and there is no real resolution increase.
Multiple still images in, single image out - used in photography
Video-sequence super-resolution - a wide variety of methods have recently appeared due to the growing popularity of HDTV. Most of them are not based on real super-resolution and are as simple as edge enhancement.
For a comparison of various methods, please refer to the Superresolution comparison paper.
A: There are lots of good papers available on the internet; here are just two of them to start:
One of the first papers on super-resolution which seemed to inspire some of the modern methods:
Michal Irani and Shmuel Peleg, "Super Resolution From Image Sequences", ICPR, 2:115--120, June 1990.
A paper from Microsoft Research that attempts to estimate the practical limits of super-resolution. The scope of this paper is limited to a particular subclass of linear-only, reconstruction-based super-resolution algorithms. In any case, the obtained bounds do correlate well with the practical results (top limit is ~5x under ideal conditions, ~2x in real life).
Zhouchen Lin and Heung-Yeung Shum, "Fundamental Limits of Reconstruction-Based Superresolution Algorithms under Local Translation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.
A: The main properties are:
- Excellent performance under noisy conditions (see example)
Another property, which can be considered a weakness in applications where the imaging system is unknown, is that the algorithm is tuned to a particular imaging system to obtain optimal performance (individual profiles are used for each sensor/lens combination). See this example, though.
Developed by Almalence :: Design by A.Green