
A non-local algorithm for image denoising

Published in the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), this paper introduces two main ideas:
  1. Method noise
  2. The non-local (NL) means algorithm for denoising images

Method noise

Method noise is defined as the difference between the original (noisy) image and its denoised version. Some intuitions that can be drawn by analysing method noise:

  1. Zero method noise means perfect denoising (complete removal of noise without loss of image data).
  2. If a denoising method performs well, the method noise must look like noise and should contain as little structure as possible from the original image.
The authors then discuss the method noise properties of several denoising filters, derived from the properties of the filters themselves. We will not go into detail for each filter, as these properties are well-known facts; the paper explains them through the intuitions of method noise.
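As a concrete illustration (my own toy example, not from the paper), method noise can be computed for any denoiser by subtracting the denoised image from the noisy one. The sketch below uses a naive box (mean) filter as a stand-in denoiser; all function and parameter names here are assumptions for illustration:

```python
import numpy as np

def box_denoise(img, k=3):
    """Naive mean filter: average each pixel over a k x k window,
    handling edges with reflect padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += padded[pad + dy : pad + dy + img.shape[0],
                          pad + dx : pad + dx + img.shape[1]]
    return out / (k * k)

def method_noise(noisy, denoiser):
    """Method noise = noisy image minus its denoised version."""
    return noisy - denoiser(noisy)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))    # smooth toy image
noisy = clean + rng.normal(0.0, 0.05, clean.shape)     # additive Gaussian noise
mn = method_noise(noisy, box_denoise)
```

For a good denoiser on a smooth image, `mn` should look like pure noise with a mean near zero and little trace of the original gradient.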

NL-means idea

The denoised value at a point x of an image is the mean of all points whose Gaussian neighborhood is similar to the neighborhood of x. This technique differs from local filtering and frequency-domain filtering techniques in that it uses what the entire image has to offer to help denoise each pixel, rather than looking only at neighboring pixels and noise characteristics.

NL-means algorithm

  1. Given a noisy image, for each pixel i, calculate the weighted average of all the pixels in the image to obtain the denoised value for pixel i.
  2. The weight given to each pixel in the weighted average is directly proportional to the similarity with pixel i.
    1. All weights are between 0 and 1
    2. Sum of weights is equal to 1
  3. Similarity between two pixels i and j is measured based on the similarity between the gray level vectors of the square neighborhoods of the pixels.
    1. Similarity is measured as a decreasing function (a Gaussian kernel) of the weighted Euclidean distance.
    2. Based on the similarity, the weights are assigned.
For a pixel p, it is clear that the neighborhoods of points q1 and q2 are similar to that of p, and hence w(p,q1) and w(p,q2) are larger. Similarly, q3, having a much different neighborhood, is assigned a lower weight w(p,q3).

The figure above shows the weight distribution of the other pixels with respect to the central pixel, with white corresponding to weights near 1 and black to weights near 0.
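The steps above can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' implementation: the parameter names `patch` and `h` and the weight formula w(i,j) = exp(-||v(i) - v(j)||² / h²) are simplified assumptions (in particular, the Euclidean distance here is unweighted, whereas the paper uses a Gaussian-weighted one), and the double loop over all pixel pairs makes it practical only on tiny images:

```python
import numpy as np

def nl_means(img, patch=3, h=0.5):
    """Toy NL-means sketch: each pixel becomes a weighted average of ALL
    pixels, weighted by the similarity of their square neighborhoods."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    # Gray-level vector of the square neighborhood of every pixel.
    patches = np.array([
        padded[y : y + patch, x : x + patch].ravel()
        for y in range(H) for x in range(W)
    ])                                                   # (H*W, patch*patch)
    flat = img.ravel()
    out = np.empty_like(flat, dtype=float)
    for i in range(flat.size):
        d2 = ((patches - patches[i]) ** 2).sum(axis=1)   # squared patch distances
        w = np.exp(-d2 / (h * h))                        # decreasing (Gaussian) kernel
        out[i] = (w * flat).sum() / w.sum()              # normalized: weights sum to 1
    return out.reshape(H, W)

rng = np.random.default_rng(0)
clean = np.full((16, 16), 0.5)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = nl_means(noisy)
```

Note how each pixel's weight vector is normalized by `w.sum()`, so all weights lie between 0 and 1 and sum to 1, matching steps 2.1 and 2.2 above.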

Why does averaging work?

Averaging similar pixels gathered from all over the image reduces the noise. Image averaging works on the assumption that the noise in the image follows a random distribution: random fluctuations above and below the actual image data get smoothed out as one averages.
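A quick numerical check of this intuition (my own toy example, not from the paper): averaging N independent noisy observations of the same underlying value shrinks the noise standard deviation by roughly a factor of sqrt(N), because fluctuations above and below the true value cancel:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 1.0
# 1000 trials, each with 100 independent noisy observations of the same value.
samples = true_value + rng.normal(0.0, 0.2, size=(1000, 100))
single_err = samples[:, 0].std()           # spread of a single observation (~0.2)
averaged_err = samples.mean(axis=1).std()  # spread of the 100-sample mean (~0.02)
```

The averaged estimate fluctuates about ten times less than any single observation, which is exactly why averaging many similar pixels suppresses noise.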

The paper further discusses the consistency of the NL-means algorithm and presents experimental results. I encourage you to go through the paper and take a look at the mathematical derivations and the experiments that follow. (All pictures in this post were borrowed from the paper.)
