An image may be subject to noise and interference from several sources, including electrical sensor noise, photographic grain noise, and channel errors.
Image noise arising from a noisy sensor or channel transmission errors usually appears as discrete isolated pixel variations that are not spatially correlated. Pixels that are in error often appear visually to be markedly different from their neighbors.
The following figure shows the process of image degradation by additive noise (top) and by a degradation function, such as blurring (bottom).
In reality, an image can be degraded by both noise and some degradation function. For simplicity, we will only consider degradation by noise and blurring separately.
There are many different types of image noise. We will discuss only some of them here.
Gaussian Noise
\begin{equation} p(z) = \frac{1}{ \sqrt{2\pi \sigma^2}} \; \exp \left( - \frac{(z - \mu)^2}{2\sigma^2} \right), \end{equation}
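As an illustration, here is a minimal MATLAB sketch of adding zero-mean Gaussian noise to an image with randn; the test image cameraman.tif and the parameter values are placeholders, not part of the assignment interface:
<code matlab>
% Add Gaussian noise with mean mu and standard deviation sigma to an image.
img   = im2double(imread('cameraman.tif'));   % placeholder test image, range [0,1]
mu    = 0;
sigma = 0.05;
noise = mu + sigma * randn(size(img));        % samples from N(mu, sigma^2)
noisyImg = img + noise;
noisyImg = min(max(noisyImg, 0), 1);          % clip back to the valid range
imshow(noisyImg);
</code>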
Uniform Noise
\begin{equation} p(z) = \begin{cases} \frac{1}{high-low}, & \text{if } low \leq z \leq high \\ 0, & \text{otherwise} \\ \end{cases} \end{equation}
In MATLAB, uniformly distributed noise can be generated with the function rand (which produces samples from $U(0,1)$), scaled to the interval $[low, high]$.
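A minimal sketch of generating uniform noise in a given interval by rescaling rand; the interval bounds and the noise-field size below are arbitrary example values:
<code matlab>
% Uniform noise in the interval [low, high] generated from rand (U(0,1) samples).
low  = -0.1;
high =  0.1;
noise = low + (high - low) * rand(256, 256);  % scale/shift U(0,1) to [low, high]
imshow(noise, []);                            % display the noise field itself
</code>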
Salt&Pepper Noise
\begin{equation} p(z) = \begin{cases} pPepper, & \text{for } z = 0 \text{ (pepper)} \\ pSalt, & \text{for } z = 2^n - 1 \text{ (salt)} \\ 1-(pPepper+pSalt), & \text{for } 0 < z < 2^n - 1, \\ \end{cases} \end{equation}
where $n$ is the number of bits per pixel (so $2^n - 1$ is the maximum intensity), and pPepper and pSalt are the probabilities of a pixel being corrupted by pepper (black) and salt (white) noise, respectively.
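A possible sketch of generating salt & pepper noise by thresholding one uniform random draw per pixel; the test image and probabilities are placeholders. MATLAB's built-in imnoise(img, 'salt & pepper', d) achieves a similar effect with a single noise-density parameter.
<code matlab>
% Salt & pepper noise: corrupt a fraction of pixels with 0 (pepper) or 2^n-1 (salt).
img     = imread('cameraman.tif');            % uint8 image, so n = 8
pPepper = 0.05;
pSalt   = 0.05;
r = rand(size(img));                          % one U(0,1) draw per pixel
noisyImg = img;
noisyImg(r < pPepper) = 0;                                        % pepper
noisyImg(r >= pPepper & r < pPepper + pSalt) = intmax('uint8');   % salt (2^8 - 1)
imshow(noisyImg);
</code>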
Exponential Noise
\begin{equation} p(z) = \begin{cases} 0, & z < 0 \\ \lambda \exp(-\lambda z), & z \geq 0\\ \end{cases} \end{equation}
Exponentially distributed noise can be generated from uniform random numbers by inverse transform sampling: \begin{equation} z = -\frac{1}{\lambda} \, \ln [ 1 - U(0,1) ] \end{equation}
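A minimal sketch of this inverse transform sampling in MATLAB; the value of lambda and the noise-field size are arbitrary:
<code matlab>
% Exponential noise via inverse transform sampling: z = -1/lambda * ln(1 - U(0,1)).
lambda = 10;
u = rand(256, 256);                 % uniform samples in (0,1)
z = -(1/lambda) * log(1 - u);       % exponentially distributed samples
imshow(z, []);
</code>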
An image can be blurred by smoothing it with a Gaussian kernel. In MATLAB, use the function fspecial to create the Gaussian kernel and then use the function imfilter to apply it to the image. Use the parameter 'replicate' for the boundary option.
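A minimal sketch of Gaussian blurring with fspecial and imfilter; sigma and the kernel-size rule 2*ceil(3*sigma)+1 used here are illustrative choices:
<code matlab>
% Blur an image with a Gaussian kernel; kernel size chosen from sigma.
img   = im2double(imread('cameraman.tif'));
sigma = 2;
ksize = 2 * ceil(3 * sigma) + 1;                     % odd size covering roughly +-3*sigma
G     = fspecial('gaussian', [ksize ksize], sigma);  % Gaussian kernel
blurred = imfilter(img, G, 'replicate');             % 'replicate' handles the image border
imshow(blurred);
</code>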
Download the package (.zip) containing source codes and testing images.
[noiseImage] = noiseGen(dimension, noiseType, noise_parameters)
noiseImage – the generated noise image
dimension – size of the generated image
noiseType – type of noise to generate
noise_parameters – parameters of the chosen noise type
[imageOut] = imdegrade(imageIn, degradationType, degradation_parameters)
imageIn – the input image
imageOut – the degraded output image
degradationType – type of degradation to apply
degradation_parameters – parameters of the chosen degradation (e.g. sigma and kernel size for Gaussian blurring)
imrestorationGUI
hw_restoration_1.m
Image restoration is the process of applying an appropriate restoration function to a degraded image. The restoration function depends on how the specific image was degraded, so the first step in restoring an image is estimating the degradation type. We will concentrate only on removing additive noise and restoring blurred images.
Noise can be reduced by classical statistical filtering techniques. In the following, $g(s,t)$ denotes the degraded image, where $(s,t)$ is a pixel position, and $S_{xy}$ is the image window of the kernel size centred at pixel $(x,y)$.
Median filter
The median filter is suitable for removal of Salt&Pepper noise. \begin{equation} \hat{f}(x,y) = \underset{(s,t) \in S_{xy}}{\mathrm{median}} \, g(s,t) \end{equation} Use the function medfilt2 to apply a median filter to an image. The inputs of the function are the image to be filtered and the kernel size. This function can only be applied to 2D matrices (a modification is required to process RGB images).
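A minimal usage sketch of medfilt2, using imnoise to produce a salt & pepper test input; the kernel size is an example value:
<code matlab>
% Median filtering with a k-by-k window; medfilt2 works on a single 2-D channel.
noisyImg = imnoise(imread('cameraman.tif'), 'salt & pepper', 0.05);
k = 3;
restored = medfilt2(noisyImg, [k k]);
% For an RGB image, filter each channel separately, e.g.:
% for c = 1:3, restoredRGB(:,:,c) = medfilt2(noisyRGB(:,:,c), [k k]); end
imshow(restored);
</code>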
Mean filter
The mean filter is suitable for removal of Gaussian noise. \begin{equation} \hat{f}(x,y) = \frac{1}{mn}\sum_{(s,t) \in S_{xy}} g(s,t), \end{equation} where $m,n$ represent the size of the mean kernel ($m \times n$). In MATLAB you can implement the mean filter similarly to Gaussian blurring but with a different type of kernel (see the function documentation for details).
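A minimal sketch of mean filtering with fspecial('average') and imfilter; the kernel size is chosen arbitrarily:
<code matlab>
% Mean (averaging) filter: same pattern as Gaussian blurring, different kernel.
img = im2double(imread('cameraman.tif'));
m = 3; n = 3;
H = fspecial('average', [m n]);        % kernel with all weights equal to 1/(m*n)
restored = imfilter(img, H, 'replicate');
imshow(restored);
</code>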
MIN filter
\begin{equation} \hat{f}(x,y) = \min_{(s,t) \in S_{xy}} g(s,t) \end{equation} Min filtering can be achieved by applying the morphological operation erosion (imerode) to the image. The parameter of the filter is the kernel size.
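A minimal sketch of min filtering via imerode with a k-by-k neighbourhood (k chosen arbitrarily):
<code matlab>
% Min filter: grayscale erosion with a k-by-k structuring element.
img = im2double(imread('cameraman.tif'));
k = 3;
restored = imerode(img, ones(k));     % each pixel becomes the minimum in its k-by-k window
imshow(restored);
</code>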
MAX filter
\begin{equation} \hat{f}(x,y) = \max_{(s,t) \in S_{xy}} g(s,t) \end{equation} Max filtering can be achieved by applying the morphological operation dilation (imdilate) to the image. The parameter of the filter is the kernel size.
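A minimal sketch of max filtering via imdilate with a k-by-k neighbourhood:
<code matlab>
% Max filter: grayscale dilation with a k-by-k structuring element.
img = im2double(imread('cameraman.tif'));
k = 3;
restored = imdilate(img, ones(k));    % each pixel becomes the maximum in its k-by-k window
imshow(restored);
</code>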
What is image sharpness? Sharpness is given by the steepness of the image derivatives: edges in an image are located at positions where the image function changes rapidly. The more rapid the change, the sharper the edge is perceived to be; conversely, slow changes in intensity are perceived as blurry.
A lack of sharpness can have many causes, e.g. the lens optics.
(Figure: a sharp image vs. a blurry image.)
Digital sharpening is a process which increases the contrast along edges. However, it also amplifies noise, which likewise appears as jumps in the image function. One method of image sharpening is the application of an unsharp mask.
The unsharp mask is created by subtracting a blurred image from the original one. The blurred image is obtained by convolving the original image with a Gaussian kernel.
\begin{equation} U = I - G * I \end{equation}
The sharpened image is obtained by adding the unsharp mask to the original image.
\begin{equation} S = I + \alpha U \end{equation}
This results in a halo effect that increases the contrast at edges and thereby the perception of sharpness. The $\alpha$ parameter controls the strength of the effect.
(Figure: the unsharp mask (mid grey = 0) and the sharpened image.)
A suitable size for the Gaussian kernel, given sigma, is \begin{equation} 2 \cdot \mathrm{ceil}(3 \cdot [sigma~~sigma]) + 1 \end{equation}
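Putting the pieces together, a minimal unsharp-masking sketch implementing U = I - G*I and S = I + alpha*U; sigma, alpha, and the test image are placeholder choices, not prescribed values:
<code matlab>
% Unsharp masking: subtract a Gaussian-blurred copy, then add the mask back scaled by alpha.
I     = im2double(imread('cameraman.tif'));
sigma = 2;
alpha = 1.5;                                         % strength of the sharpening
ksize = 2 * ceil(3 * sigma) + 1;                     % kernel size derived from sigma
G     = fspecial('gaussian', [ksize ksize], sigma);
U = I - imfilter(I, G, 'replicate');                 % unsharp mask
S = I + alpha * U;                                   % sharpened image
S = min(max(S, 0), 1);                               % clip to the valid range
imshow(S);
</code>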
[restoredImage] = imrestore(degradedImage, restorationType, restoration_parameters)
degradedImage – the degraded input image
restorationType – type of restoration (filter) to apply
restoration_parameters – parameters of the chosen restoration method
What is the Fourier transform? In plain English: any signal (e.g. an image) can be expressed as a sum of a series of sinusoids; in the case of an image, these are sinusoidal variations in brightness across the image (source). Each sinusoid is characterized by its spatial frequency, its amplitude (magnitude), and its phase.
(Still confused? Watch this video: But what is the Fourier Transform? A visual introduction.)
If $f(x,y)$ is a continuous function of real variables $x$ and $y$, its 2D Fourier transform $F(u, v)$ can be expressed as follows: \begin{equation} F(u, v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \exp[-i 2 \pi (ux + vy)] \, dx \, dy \end{equation}
The inverse Fourier transform in 2D: \begin{equation} f(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u,v) \exp[i 2 \pi (ux + vy)] \, du \, dv \end{equation}
A Fourier transform encodes a whole series of sinusoids, spanning spatial frequencies from zero (i.e. no modulation, the average brightness of the whole image) all the way up to the Nyquist frequency, i.e. the highest spatial frequency that can be encoded in the digital image, which is determined by the resolution (the pixel size). The Fourier transform encodes all of the spatial frequencies present in an image simultaneously: a signal containing only a single spatial frequency $f$ appears as a single peak at point $f$ along the spatial-frequency axis, with the height of that peak corresponding to the amplitude, or contrast, of that sinusoidal signal.
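To inspect the spatial frequencies of an image in MATLAB, you can compute the 2D FFT and display its log-scaled magnitude with the zero frequency shifted to the centre; a minimal sketch (the test image is a placeholder):
<code matlab>
% Compute and display the (log-scaled) magnitude spectrum of an image.
img = im2double(imread('cameraman.tif'));
F  = fft2(img);                 % 2-D discrete Fourier transform
Fs = fftshift(F);               % move the zero frequency to the centre
spectrum = log(1 + abs(Fs));    % log scale makes the spectrum visible
imshow(spectrum, []);
</code>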
Lowpass filter
Lowpass filters in general smooth out noise in the image. An ideal lowpass filter has the transfer function: \begin{equation} H_{\mathrm{LP}}(u,v) = \begin{cases} 1, & \text{if } D(u,v) \leq D_0 \\ 0, & \text{if } D(u,v) > D_0 \\ \end{cases} \end{equation} where $D_0$ is a positive number and $D(u,v)$ is the distance from the point $(u,v)$ to the center of the filter. The locus of points for which $D(u,v) = D_0$ is a circle. Because the filter multiplies the Fourier transform of an image, an ideal filter "cuts off" (multiplies by 0) all components of $F(u, v)$ outside the circle and leaves unchanged (multiplies by 1) all components on, or inside, the circle.
Highpass filter
Highpass filtering sharpens the image by attenuating the low frequencies while leaving the high frequencies of the Fourier transform relatively unchanged, i.e. it allows the high-frequency components of the image to pass through. This filter also amplifies noise in the image: highpass filtering can improve an image by sharpening details, but it can also degrade the image quality.
Given the transfer function $H_{\mathrm{LP}}(u,v)$ of a lowpass filter, the transfer function of the corresponding high pass filter is given by: \begin{equation} H_{\mathrm{HP}}(u,v) = 1-H_{\mathrm{LP}}(u,v) \end{equation}
Therefore: \begin{equation} H_{\mathrm{HP}}(u,v) = \begin{cases} 0, & \text{if } D(u,v) \leq D_0 \\ 1, & \text{if } D(u,v) > D_0, \\ \end{cases} \end{equation}
Highpass / Lowpass filtering
The task is to create a mask and apply it in the frequency domain. For the highpass filter the mask is a white background with a black circle (whose size depends on the filter size); for the lowpass filter it is a black background with a white circle. To apply the mask, multiply it with the result of the Fourier transform.
disk
filterRadius
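A minimal sketch of ideal lowpass/highpass filtering in the frequency domain; the circular mask here is built from a distance map with meshgrid, which is only one possible way to construct it, and the cutoff D0 and test image are arbitrary example choices:
<code matlab>
% Ideal lowpass/highpass filtering: multiply the centred spectrum by a circular mask.
img = im2double(imread('cameraman.tif'));
D0  = 30;                                        % cutoff radius (filter size)
[rows, cols] = size(img);
[u, v] = meshgrid(1:cols, 1:rows);
D = sqrt((u - cols/2).^2 + (v - rows/2).^2);     % distance from the spectrum centre
maskLP = double(D <= D0);                        % white circle on black background (lowpass)
maskHP = 1 - maskLP;                             % black circle on white background (highpass)
Fs = fftshift(fft2(img));                        % centred Fourier transform
filtered = real(ifft2(ifftshift(Fs .* maskLP))); % apply the mask and transform back
imshow(filtered, []);
</code>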
Artifacts removal 1
The task is to remove the artifacts in the image of a cameraman. The frequency domain clearly shows regular artifacts. Remove (set to zero) those frequencies in order to restore the image: find and remove the white artifact peaks in the frequency domain, making use of the structure of the error (repetition, symmetry).
Artifacts removal 2
The task is to remove the artifacts in the image of a landscape. The frequency domain clearly shows the frequencies of the artifacts. Remove (set to zero) those frequencies to restore the image.
Look out for the symmetry in the frequency domain!
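A minimal sketch of this kind of notch filtering, assuming the artifact peak positions have already been located by inspecting the spectrum; the offsets dy, dx, the radius r, and the test image below are placeholder values, not the actual artifact coordinates:
<code matlab>
% Notch-style artifact removal: zero out an artifact frequency (and its
% symmetric counterpart) in the centred spectrum, then transform back.
img = im2double(imread('cameraman.tif'));        % placeholder for the degraded image
Fs  = fftshift(fft2(img));
[rows, cols] = size(img);
cy = floor(rows/2) + 1;  cx = floor(cols/2) + 1; % centre of the shifted spectrum
dy = 40; dx = 0;                                 % offset of one artifact peak (placeholder)
r  = 3;                                          % radius of the region to zero out
Fs(cy+dy-r:cy+dy+r, cx+dx-r:cx+dx+r) = 0;        % remove the peak ...
Fs(cy-dy-r:cy-dy+r, cx-dx-r:cx-dx+r) = 0;        % ... and its symmetric counterpart
restored = real(ifft2(ifftshift(Fs)));
imshow(restored, []);
</code>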
[modifiedImage, inputFFT, outputFFT] = HLpassFilter(im, s, lowHIGH)
im – the input image
s – size (radius) of the filter
lowHIGH – switch selecting lowpass or highpass filtering
[restoredImage, originalImage, artifactFFT, modifiedFFT] = artifact_removal1()
[restoredImage, originalImage, artifactFFT, modifiedFFT] = artifact_removal2()
fftGUI
hw_restoration_2.m
artifact_removal1
artifact_removal2
HLpassFilter
Useful MATLAB functions: fft2, ifft2, fftshift, ifftshift.
Monday 11.1.2020, 23:59
Please note: Do not put your functions into any subdirectories. That is, when your submission ZIP archive is decompressed, the files should extract to the same directory where the ZIP archive is. Upload only the files containing functions implemented by yourself, both the required and supplementary ones.