What is the best image downscaling algorithm (quality-wise)?

Tags: Algorithm, Image, Resize

Algorithm Problem Overview


I want to find out which algorithm is best for downsizing a raster picture. By "best" I mean the one that gives the nicest-looking results. I know of bicubic, but is there anything better? For example, I've heard from some people that Adobe Lightroom uses some kind of proprietary algorithm which produces better results than the standard bicubic I've been using. I would like to use such an algorithm in my own software, though, so Adobe's carefully guarded trade secrets won't do.

Added:

I checked out Paint.NET and to my surprise it seems that Super Sampling is better than bicubic when downsizing a picture. That makes me wonder if interpolation algorithms are the way to go at all.

It also reminded me of an algorithm I had "invented" myself but never implemented. I suppose it has a name too (something this trivial can hardly be my idea alone), but I couldn't find it among the popular ones. Super Sampling was the closest one.

The idea is this: for every pixel in the target picture, calculate where it would fall in the source picture. It will probably overlap one or more source pixels. It is then possible to calculate the areas and colors of these overlaps. To get the color of the target pixel, take the average of these colors, using the areas as weights. So, if a target pixel covers 1/3 of a yellow source pixel and 1/4 of a green source pixel, the result is (1/3*yellow + 1/4*green)/(1/3 + 1/4).

This would naturally be computationally intensive, but it should be as close to the ideal as possible, no?
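
For illustration, here's a minimal 1-D sketch of that weighting over a grayscale float buffer (the function name and setup are mine, bounds checks are omitted, and it assumes a downscale, i.e. factor >= 1):

#include <algorithm>
#include <cmath>
#include <vector>

// One target sample covers the source interval [begin, end), where
// factor = src_size / dst_size. Each overlapped source sample contributes
// in proportion to how much of it is covered, exactly as described above.
float area_weighted_sample(const std::vector<float> &src, int x, float factor)
{
    float begin = x * factor, end = begin + factor;
    float sum = 0.0f, weight = 0.0f;
    for (int i = int(begin); i < int(std::ceil(end)); i++) {
        // overlap of source pixel [i, i+1) with the target span [begin, end)
        float w = std::min(end, float(i + 1)) - std::max(begin, float(i));
        sum += src[i] * w;
        weight += w;
    }
    return sum / weight;  // e.g. (1/3*yellow + 1/4*green)/(1/3 + 1/4)
}

Extending this to 2-D means doing the same weighting over the covered rectangle of source pixels (or doing two 1-D passes, one per axis).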

Is there a name for this algorithm?

Algorithm Solutions


Solution 1 - Algorithm

Unfortunately, I cannot find a link to the original survey, but as Hollywood cinematographers moved from film to digital images this question came up a lot, so someone (maybe SMPTE, maybe the ASC) gathered a bunch of professional cinematographers and showed them footage that had been rescaled using a bunch of different algorithms. The consensus among these pros, looking at huge motion pictures, was that Mitchell (a high-quality cubic filter closely related to Catmull-Rom) is the best for scaling up and sinc is the best for scaling down.

But sinc is a theoretical filter that goes off to infinity and thus cannot be completely implemented, so I don't know what they actually meant by 'sinc'; it probably refers to a truncated version. Lanczos is one of several practical variants of sinc that tries to improve on simply truncating it, and it is probably the best default choice for scaling down still images. But as usual, it depends on the image and what you want: shrinking a line drawing to preserve lines is, for example, a case where you might prefer an emphasis on preserving edges that would be unwelcome when shrinking a photo of flowers.

There is a good example of the results of various algorithms at Cambridge in Colour.

The folks at fxguide put together a lot of information on scaling algorithms (along with a lot of other stuff about compositing and other image processing) which is worth taking a look at. They also include test images that may be useful in doing your own tests.

Now ImageMagick has an extensive guide on resampling filters if you really want to get into it.

It is kind of ironic that there is more controversy about scaling down an image, which is theoretically something that can be done perfectly since you are only throwing away information, than there is about scaling up, where you are trying to add information that doesn't exist. But start with Lanczos.
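
For what it's worth, the Lanczos kernel itself is tiny; it is just a windowed sinc. A minimal sketch (with the common a = 3 support, i.e. "Lanczos3") might look like this:

#include <cmath>

// Lanczos kernel: L(x) = sinc(x) * sinc(x/a) for |x| < a, else 0,
// where sinc(x) = sin(pi*x) / (pi*x).
double lanczos(double x, int a = 3)
{
    const double pi = 3.14159265358979323846;
    if (x == 0.0) return 1.0;            // sinc's limit at 0
    if (std::fabs(x) >= a) return 0.0;   // outside the window
    double px = pi * x;
    return a * std::sin(px) * std::sin(px / a) / (px * px);
}

Each output sample is then a weighted sum of the nearby input samples, with the weights taken from this kernel and normalized to sum to 1.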

Solution 2 - Algorithm

There is Lanczos sampling, which is slower than bicubic but produces higher-quality images.

Solution 3 - Algorithm

(Bi-)linear and (bi-)cubic resampling are not just ugly but horribly incorrect when downscaling by a factor smaller than 1/2. They will result in very bad aliasing, akin to what you'd get if you downsampled by a factor of 1/2 and then used nearest-neighbor downsampling.

Personally, I would recommend (area-)averaging samples for most downsampling tasks. It's simple, fast, and near-optimal. Gaussian resampling (with the radius chosen proportional to the reciprocal of the factor, e.g. radius 5 for downsampling by 1/5) may give better results at a bit more computational cost, and it's more mathematically sound.

One possible reason to use Gaussian resampling is that, unlike most other algorithms, it works correctly (does not introduce artifacts/aliasing) for both upsampling and downsampling, as long as you choose a radius appropriate to the resampling factor. Otherwise, to support both directions you need two separate algorithms: area averaging for downsampling (which would degrade to nearest-neighbor for upsampling), and something like (bi-)cubic for upsampling (which would degrade to nearest-neighbor for downsampling). One way of seeing this nice property of Gaussian resampling mathematically is that a Gaussian with very large radius approximates area averaging, and a Gaussian with very small radius approximates (bi-)linear interpolation.
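
As a sketch of that radius rule (the radius-to-sigma mapping below is one common convention, my choice rather than anything from the answer):

#include <algorithm>
#include <cmath>
#include <vector>

// Build a normalized 1-D Gaussian kernel for a given downscale factor,
// with radius proportional to the reciprocal of the factor
// (e.g. factor 1/5 -> radius 5).
std::vector<float> gaussian_kernel(float factor)
{
    int radius = std::max(1, int(std::ceil(1.0f / factor)));
    float sigma = radius / 3.0f;  // common choice: kernel edge at ~3 sigma
    std::vector<float> k(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; i++) {
        k[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += k[i + radius];
    }
    for (float &w : k) w /= sum;  // normalize so the weights sum to 1
    return k;
}

Convolving each axis with this kernel before (or while) picking output samples is what suppresses the aliasing described above.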

Solution 4 - Algorithm

I saw an article on Slashdot about Seam Carving a while ago; it might be worth looking into.

> Seam carving is an image resizing algorithm developed by Shai Avidan and Ariel Shamir. This algorithm alters the dimensions of an image not by scaling or cropping, but rather by intelligently removing pixels from (or adding pixels to) the image that carry little importance.

Solution 5 - Algorithm

The algorithm you describe is called linear interpolation. It is one of the fastest algorithms, but it isn't the best for images.

Solution 6 - Algorithm

> Is there a name for this algorithm?

It might be referred to as "box" or "window" resampling in the literature. It is actually less computationally expensive than you might think.

It can also be used to create an intermediate bitmap that is subsequently used by bicubic interpolation, to avoid aliasing when downscaling by more than a factor of 2; see the sketch below.
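
Here's a self-contained sketch of that two-stage idea on a grayscale float image (the Gray type and helpers are mine, and bilinear stands in for the answer's bicubic to keep it short):

#include <algorithm>
#include <cmath>
#include <vector>

struct Gray { std::vector<float> p; int w, h; };

// 2x2 box average, halving each dimension (an odd trailing row/column is dropped).
Gray box_halve(const Gray &s)
{
    Gray d{std::vector<float>((s.w / 2) * (s.h / 2)), s.w / 2, s.h / 2};
    for (int y = 0; y < d.h; y++)
        for (int x = 0; x < d.w; x++)
            d.p[x + y * d.w] = 0.25f * (s.p[2*x + 2*y*s.w]     + s.p[2*x+1 + 2*y*s.w]
                                      + s.p[2*x + (2*y+1)*s.w] + s.p[2*x+1 + (2*y+1)*s.w]);
    return d;
}

// Plain bilinear resize with center-aligned sample positions.
Gray bilinear_resize(const Gray &s, int w, int h)
{
    Gray d{std::vector<float>(w * h), w, h};
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            float fx = (x + 0.5f) * s.w / w - 0.5f, fy = (y + 0.5f) * s.h / h - 0.5f;
            int x0 = std::clamp(int(std::floor(fx)), 0, s.w - 2);
            int y0 = std::clamp(int(std::floor(fy)), 0, s.h - 2);
            float tx = std::clamp(fx - x0, 0.f, 1.f), ty = std::clamp(fy - y0, 0.f, 1.f);
            float top = s.p[x0 + y0*s.w]     * (1 - tx) + s.p[x0+1 + y0*s.w]     * tx;
            float bot = s.p[x0 + (y0+1)*s.w] * (1 - tx) + s.p[x0+1 + (y0+1)*s.w] * tx;
            d.p[x + y * w] = top * (1 - ty) + bot * ty;
        }
    return d;
}

// Box-average down while the remaining shrink is 2x or more, then interpolate.
// The prefilter ensures the final interpolation step never skips samples.
Gray downscale(Gray img, int w, int h)
{
    while (img.w >= 2 * w && img.h >= 2 * h)
        img = box_halve(img);
    return bilinear_resize(img, w, h);
}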

Solution 7 - Algorithm

If anyone's interested, here is my C++ implementation of an area-averaging scaling algorithm:

#include <cassert>
#include <cstdint>
#include <vector>

// Area-averaging downscale for tightly packed 4-channel (e.g. RGBA8) images,
// one uint32_t per pixel. Two separable passes: horizontal, then vertical.
// Assumes dst_width <= src_width and dst_height <= src_height.
void area_averaging_image_scale(uint32_t *dst, int dst_width, int dst_height, const uint32_t *src, int src_width, int src_height)
{
    // 1. Scale horizontally (src -> mid)
    int mid_width  = dst_width,
        mid_height = src_height;
    float src_width_div_by_mid_width = float(src_width) / mid_width;
    float mid_width_div_by_src_width = 1.f / src_width_div_by_mid_width;
    std::vector<uint32_t> mid(mid_width * mid_height);
    for (int y=0; y<mid_height; y++)
        for (int x=0; x<mid_width; x++)
            for (int c=0; c<4; c++) {
                // Output pixel x covers the source span [f, end).
                float f = x * src_width_div_by_mid_width;
                int i = int(f);
                // Leftmost source pixel, weighted by its covered fraction.
                float d = ((uint8_t*)&src[i + y*src_width])[c] * (float(i) + 1 - f);
                float end = f + src_width_div_by_mid_width;
                int endi = int(end);
                // Rightmost source pixel, if the span ends partway into it.
                if (end - float(endi) > 1e-4f) {
                    assert(endi < src_width);
                    d += ((uint8_t*)&src[endi + y*src_width])[c] * (end - float(endi));
                }
                // Fully covered source pixels in between, weight 1 each.
                for (i++; i < endi; i++)
                    d += ((uint8_t*)&src[i + y*src_width])[c];
                // Divide by the span length (normalizing the weights) and round.
                int r = int(d * mid_width_div_by_src_width + 0.5f);
                assert(r <= 255);
                ((uint8_t*)&mid[x + y*mid_width])[c] = r;
            }

    // 2. Scale vertically (mid -> dst), the same scheme applied along y
    float mid_height_div_by_dst_height = float(mid_height) / dst_height;
    float dst_height_div_by_mid_height = 1.f / mid_height_div_by_dst_height;
    for (int y=0; y<dst_height; y++)
        for (int x=0; x<dst_width; x++)
            for (int c=0; c<4; c++) {
                float f = y * mid_height_div_by_dst_height;
                int i = int(f);
                float d = ((uint8_t*)&mid[x + i*mid_width])[c] * (float(i) + 1 - f);
                float end = f + mid_height_div_by_dst_height;
                int endi = int(end);
                if (end - float(endi) > 1e-4f) {
                    assert(endi < mid_height);
                    d += ((uint8_t*)&mid[x + endi*mid_width])[c] * (end - float(endi));
                }
                for (i++; i < endi; i++)
                    d += ((uint8_t*)&mid[x + i*mid_width])[c];
                int r = int(d * dst_height_div_by_mid_height + 0.5f);
                assert(r <= 255);
                ((uint8_t*)&dst[x + y*dst_width])[c] = r;
            }
}
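
A minimal usage sketch (my own example, assuming 32-bit RGBA pixels packed one per uint32_t, as the code above expects):

#include <cstdint>
#include <vector>

int main()
{
    std::vector<uint32_t> src(640 * 480, 0xFF336699u);  // placeholder pixel data
    std::vector<uint32_t> dst(320 * 240);
    // shrink 640x480 -> 320x240
    area_averaging_image_scale(dst.data(), 320, 240, src.data(), 640, 480);
}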

Attributions

All content on this page is sourced from the original question and answers on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author |
|---|---|
| Question | Vilx- |
| Solution 1 - Algorithm | Old Pro |
| Solution 2 - Algorithm | Greg Dean |
| Solution 3 - Algorithm | R.. GitHub STOP HELPING ICE |
| Solution 4 - Algorithm | Craz |
| Solution 5 - Algorithm | user175680 |
| Solution 6 - Algorithm | Thilo Köhler |
| Solution 7 - Algorithm | tav |