
A non-local algorithm for image denoising

Published in the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), this paper introduces two main ideas:
  1. Method noise
  2. Non-local (NL) means algorithm to denoise images

Method noise

Method noise is defined as the difference between the original (noisy) image and its denoised version. Some of the intuitions that can be drawn by analysing the method noise are:

  1. A method noise close to zero means the method barely alters the image, so no image detail is lost; the ideal method removes the noise while changing the underlying image as little as possible.
  2. If a denoising method performs well, its method noise should look like noise and should contain as little structure as possible from the original image.
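
In symbols (following the paper's notation, with u the image and D_h a denoising operator depending on a filtering parameter h), the method noise is simply

    \text{method noise} = u - D_h(u)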
The authors then derive the method noise of several classical denoising filters from those filters' properties. We will not go into detail for each filter, since their behaviour is well known; the paper simply re-explains it through the lens of method noise.

NL-means idea

The denoised value at a point x of an image is the mean of the values at all points whose Gaussian neighborhood resembles the neighborhood of x. This sets the technique apart from local filtering and frequency-domain filtering: it exploits what the entire image has to offer when denoising a pixel, rather than looking only at neighboring pixels and at the noise characteristics.
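
If memory serves, the paper's continuous formulation of this idea reads as follows, where Omega is the image domain, G_a a Gaussian kernel of standard deviation a, h the filtering parameter and C(x) a normalizing factor:

    NL[u](x) = \frac{1}{C(x)} \int_{\Omega} e^{-\frac{(G_a * |u(x+\cdot)-u(y+\cdot)|^{2})(0)}{h^{2}}} \, u(y) \, dy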

NL-means algorithm

  1. Given a noisy image, for each pixel i, calculate a weighted average of all the pixels in the image to obtain the denoised value of pixel i.
  2. The weight given to each pixel in the weighted average is proportional to its similarity to pixel i.
    1. All weights are between 0 and 1
    2. Sum of weights is equal to 1
  3. Similarity between two pixels i and j is measured based on the similarity between the gray level vectors of the square neighborhoods of the pixels.
    1. Similarity is measured as a decreasing function (a Gaussian kernel) of a weighted Euclidean distance between the neighborhoods (see the formula after this list).
    2. Based on the similarity, the weights are assigned.
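
For a discrete noisy image v = {v(i)}, the steps above correspond to the standard NL-means formulation, where \mathcal{N}_i is the square neighborhood around pixel i, \|\cdot\|_{2,a} a Gaussian-weighted Euclidean norm, h the filtering parameter and Z(i) the normalizing constant:

    NL[v](i) = \sum_{j} w(i,j)\, v(j), \qquad w(i,j) = \frac{1}{Z(i)} \, e^{-\frac{\|v(\mathcal{N}_i) - v(\mathcal{N}_j)\|_{2,a}^{2}}{h^{2}}}, \qquad Z(i) = \sum_{j} e^{-\frac{\|v(\mathcal{N}_i) - v(\mathcal{N}_j)\|_{2,a}^{2}}{h^{2}}}

With this definition the weights are non-negative, bounded by 1 and sum to 1, matching points 2.1 and 2.2 above.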
For a pixel p, the neighborhoods of the points q1 and q2 are clearly similar to that of p, so the weights w(p,q1) and w(p,q2) are large. The neighborhood of q3 is very different, so it receives a much lower weight w(p,q3).

The figure above shows the distribution of weights assigned to the other pixels with respect to the central pixel, with white corresponding to weights close to 1 and black to weights close to 0.
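
To make the weighted average concrete, here is a minimal brute-force sketch in Python/NumPy. This is not the authors' implementation: the function name and parameters (patch_radius, search_radius, h) are my own, the patch distance is a plain (unweighted) Euclidean distance instead of the paper's Gaussian-weighted one, and the comparison is limited to a search window around each pixel purely to keep the loops tractable.

    import numpy as np

    def nl_means(noisy, patch_radius=3, search_radius=10, h=10.0):
        """Brute-force NL-means sketch for a 2-D grayscale float array."""
        pad = patch_radius
        padded = np.pad(noisy, pad, mode="reflect")
        rows, cols = noisy.shape
        out = np.zeros((rows, cols), dtype=np.float64)
        for i in range(rows):
            for j in range(cols):
                # Gray-level vector (square neighborhood) of the pixel to denoise.
                p = padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
                # Restrict the "non-local" comparison to a window for speed.
                i0, i1 = max(0, i - search_radius), min(rows, i + search_radius + 1)
                j0, j1 = max(0, j - search_radius), min(cols, j + search_radius + 1)
                weight_sum = 0.0
                value = 0.0
                for k in range(i0, i1):
                    for l in range(j0, j1):
                        q = padded[k:k + 2 * pad + 1, l:l + 2 * pad + 1]
                        # Squared Euclidean distance between the two neighborhoods
                        # (the paper additionally weights this with a Gaussian).
                        d2 = np.mean((p - q) ** 2)
                        # Weight: a decreasing exponential of the patch distance.
                        w = np.exp(-d2 / (h * h))
                        weight_sum += w
                        value += w * noisy[k, l]
                # Dividing by the total weight makes the weights sum to 1.
                out[i, j] = value / weight_sum
        return out

Calling nl_means(noisy_image.astype(float)) on a small crop is enough to see the effect; practical implementations restrict the comparison to a fixed search window in exactly this way, since scanning the whole image for every pixel is prohibitively slow.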

Why does averaging work?

Averaging similar pixels gathered from all over the image reduces the noise. Image averaging rests on the assumption that the noise follows a random, zero-mean distribution, so the random fluctuations above and below the true image values cancel out as more and more samples are averaged.
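
A toy numerical check of this claim (my own example, not from the paper): averaging n independent noisy observations of the same value shrinks the noise standard deviation by roughly a factor of 1/sqrt(n).

    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 100.0   # the underlying "clean" pixel value
    noise_std = 20.0     # standard deviation of the additive noise

    for n in (1, 4, 16, 64):
        # 10,000 trials, each averaging n independent noisy samples.
        samples = true_value + rng.normal(0.0, noise_std, size=(10_000, n))
        averaged = samples.mean(axis=1)
        print(n, round(averaged.std(), 2))   # roughly 20, 10, 5, 2.5

NL-means exploits exactly this effect, except that the "independent samples" are pixels elsewhere in the image whose neighborhoods look alike.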

The paper goes on to discuss the consistency of the NL-means algorithm and presents experimental results. I encourage you to go through the paper and take a look at the mathematical derivations and the experiments that follow. (All pictures in this post were borrowed from the paper.)
