======Image restoration======

====Image degradation====

An image may be subject to noise and interference from several sources, including electrical sensor noise, photographic grain noise, and channel errors. Image noise arising from a noisy sensor or from channel transmission errors usually appears as discrete, isolated pixel variations that are not spatially correlated. Pixels that are in error often appear visually markedly different from their neighbors.

The following figure shows the process of image degradation by additive noise (top) and by a degradation function, such as blurring (bottom).

| {{courses:b4m33dzo:labs:im_degradation.jpg|}} |
| Diagrams showing the process of image degradation |

In reality, an image can be degraded by both noise and some degradation function. For simplicity, we will consider degradation by noise and degradation by blurring separately.

====Noise models====

There are many different types of image noise. We will discuss only some of them here.

**Gaussian Noise**
  * Gaussian noise is generated from the normal distribution, where the probability $p(z)$ of a (noise) pixel having an intensity value of $z$ is: \begin{equation} p(z) = \frac{1}{ \sqrt{2\pi \sigma^2}} \; \exp \left( - \frac{(z - \mu)^2}{2\sigma^2} \right), \end{equation}
  * where $\mu$ is the mean and $\sigma$ is the standard deviation of the distribution.

**Uniform Noise**
  * Uniform noise is a sequence of values drawn from the uniform distribution delimited by the values $low$ and $high$: \begin{equation} p(z) = \begin{cases} \frac{1}{high-low}, & \text{if } low \leq z \leq high \\ 0, & \text{otherwise} \\ \end{cases} \end{equation}
  * Values of uniform noise are therefore uniformly distributed in the range [$low$, $high$].
  * The **generator** of uniform noise is the MATLAB function ''rand'', which generates values from the distribution $U(0,1)$.

**Salt&Pepper Noise**
  * This is an impulsive noise caused by analog-to-digital converter errors or by errors in transmission. A certain fraction of the pixels becomes either white or black. It can be eliminated by median filtering.
  * The probability of a pixel having intensity $z$ in the case of Salt&Pepper noise is: \begin{equation} p(z) = \begin{cases} pPepper, & \text{for } z = 0 \text{ (pepper)} \\ pSalt, & \text{for } z = 2^n - 1 \text{ (salt)} \\ 1-(pPepper+pSalt), & \text{for } 0 < z < 2^n - 1, \\ \end{cases} \end{equation}
  * where $n$ is the number of bits used to represent the intensity, so that $2^n - 1$ is the maximum intensity value.
  * To **generate** Salt&Pepper noise, use MATLAB's function ''rand'' to create a matrix of uniformly distributed random numbers between 0 and 1. As the values are uniformly distributed, approximately a fraction ''pPepper'' of the matrix elements will have a value of ''pPepper'' or lower. Therefore, if you set the pixel values of an image to zero at every position where the generated matrix has a value of ''pPepper'' or lower, you get a "peppered" image in which the fraction of pepper pixels is (approximately) ''pPepper''. Similarly, you get salt noise at the positions with values of ''1 - pSalt'' or higher. In the Salt&Pepper noise image, set the values of pepper pixels to -1 (instead of 0) and salt pixels to 1 (i.e. the maximum intensity value). This is important so that the noise image can simply be added to an image to produce the noisy image (see the sketch below).
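The following is a minimal MATLAB sketch of the thresholding described above. The fractions ''pPepper'' and ''pSalt'' and the image size are arbitrary example values, and the exact interface of ''noiseGen'' in the provided package may differ.

<code matlab>
% Sketch: Salt&Pepper noise image via thresholding of rand (example values only).
pPepper = 0.05;                     % fraction of pepper (black) pixels
pSalt   = 0.05;                     % fraction of salt (white) pixels
dimension = [256 256];              % size of the noise image

u = rand(dimension);                % one U(0,1) value per pixel
noiseImage = zeros(dimension);      % 0 = pixel unaffected by the impulse noise
noiseImage(u <= pPepper)   = -1;    % pepper: adding -1 drives the pixel to black
noiseImage(u >= 1 - pSalt) =  1;    % salt: adding +1 drives the pixel to white

% Adding noiseImage to an image with intensities in [0,1] (and clipping the
% result back to [0,1]) yields the Salt&Pepper degraded image.
</code>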
**Exponential Noise**
  * In the case of Exponential noise, the probability $p(z)$ is: \begin{equation} p(z) = \begin{cases} 0, & z < 0 \\ \lambda \exp(-\lambda z), & z \geq 0 \\ \end{cases} \end{equation}
  * The **generator** of Exponential noise is: \begin{equation} z = -\frac{1}{\lambda} \, \ln [ 1 - U(0,1) ] \end{equation}

|Input Image| Gaussian Noise | Image with Noise | Noise Histogram|
|{{courses:b4m33dzo:labs:tower.jpg?220|}}|{{courses:b4m33dzo:labs:gaussiannoise.png?220|}}|{{courses:b4m33dzo:labs:gaussiandegrade.png?220|}}|{{courses:b4m33dzo:labs:gaussianhistogram.png?220|}}|

|Input Image| Uniform Noise | Image with Noise | Noise Histogram|
|{{courses:b4m33dzo:labs:tower.jpg?220|}}|{{courses:b4m33dzo:labs:uniformnoise.png?220|}}|{{courses:b4m33dzo:labs:uniformdegrade.png?220|}}|{{courses:b4m33dzo:labs:uniformhistogram.png?220|}}|

|Input Image| Salt&Pepper Noise | Image with Noise | Noise Histogram|
|{{courses:b4m33dzo:labs:tower.jpg?220|}}|{{courses:b4m33dzo:labs:salt_pepper.png?220|}}|{{courses:b4m33dzo:labs:salt_pepperdegrade.png?220|}}|{{courses:b4m33dzo:labs:salt_pepperhistogram.png?220|}}|

|Input Image| Exponential Noise | Image with Noise | Noise Histogram|
|{{courses:b4m33dzo:labs:tower.jpg?220|}}|{{courses:b4m33dzo:labs:exponentialnoise.png?220|}}|{{courses:b4m33dzo:labs:exponentialdegrade.png?220|}}|{{courses:b4m33dzo:labs:exponentialhistogram.png?220|}}|

====Image blurring====

An image can be blurred by smoothing it with a Gaussian kernel. In MATLAB, use the function ''fspecial'' to create the Gaussian kernel and then use the function ''imfilter'' to apply it to the image. Use the parameter 'replicate' for the boundary option.

====Assignment - Adding noise into image====

Download the {{courses:b4m33dzo:labs:t6_restoration_student.zip|package}} containing source codes and testing images. To fulfill this assignment, you need to submit the files listed below (all packed in a single ''.zip'' file) into the [[https://cw.felk.cvut.cz/upload/| upload system]].

  - Implement the following functions:
    * **''[noiseImage] = noiseGen(dimension, noiseType, noise_parameters)''**: Generate ''noiseImage'', i.e. an image of the specified size with pixel intensity values generated from the specified probability distribution. The parameter ''dimension'' represents the size of the noise image; it can either be an image, in which case the size of that image is used as the size of the noise image, or a vector specifying the exact size of the noise image. ''noiseType'' specifies the noise model; possible values are 'salt & pepper', 'uniform', 'exponential', and 'gaussian'. The argument ''noise_parameters'' specifies the noise parameters and depends on the type of the noise (see the code for the individual noises for more detail).
    * **''[imageOut] = imdegrade(imageIn, degradationType, degradation_parameters)''**: Apply the specified degradation type to ''imageIn'' and return the degraded image ''imageOut''. The argument ''degradationType'' can have the value 'noise' or 'blur'. For 'noise', the last argument of the function (''degradation_parameters'') should be the noise image, and for 'blur' it should be the blur parameters, i.e. ''sigma'' and the ''kernel size''. (A short sketch of both degradation branches is given after this assignment description.)
  - Test the functions using the ''imrestorationGUI''. You can start the GUI by running the script ''hw_restoration_1.m''.

In the GUI, load an image using the **Load image** button. You can convert an RGB image to a gray-scale image using the **RGB to gray** button. The **Revert image** button removes any changes you made in the GUI and reloads the image in its original form. The **Reset** button additionally unloads the image. After you have loaded an image, you can choose the degradation type from the drop-down menu. You can adjust the degradation parameters below the drop-down menu and click the **Apply image degradation** button to generate the noise image and apply it to the input image. The results will be shown in the GUI. For the 'noise' degradation type, histograms of the noise, original, and degraded images will also be shown. Check that the generated noise histogram is similar to the appropriate histogram shown above.
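Below is a minimal, non-authoritative sketch of what the two degradation branches of ''imdegrade'' might look like. The test image name, the stand-in noise image, and the blur parameters are example assumptions, not the required interface.

<code matlab>
% Sketch of the two degradation branches (example parameters, not the required code).
imageIn = im2double(imread('tower.jpg'));   % assumed test image from the package

% 'noise' branch: the (zero-centred) noise image is simply added to the input
noiseImage = 0.05 * randn(size(imageIn));   % stand-in for the output of noiseGen
noisyOut   = imageIn + noiseImage;

% 'blur' branch: Gaussian smoothing with replicated image borders
sigma = 2; kernelSize = 7;                  % example blur parameters
h = fspecial('gaussian', kernelSize, sigma);
blurredOut = imfilter(imageIn, h, 'replicate');
</code>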
====Image restoration techniques====

| {{ :courses:b4m33dzo:labs:im_restoration.jpg |}} |
| Diagram showing the process of image restoration |

Image restoration is a process in which an appropriate restoration function is applied to the degraded image. The restoration function depends on how the specific image was degraded. The first step in restoring an image is therefore to estimate the degradation type. We will concentrate only on removing additive noise and on restoring blurred images.

===Noise removal===

The effects of noise can be reduced by classical statistical filtering techniques. Let $g(s,t)$ be the degraded image, where $s,t$ are pixel positions in the image, and let $S_{xy}$ be the image window of the kernel size centered at pixel $(x,y)$. A short MATLAB sketch of all four filters is given below, after the MAX filter.

** Median filter **

The median filter is suitable for removing Salt&Pepper noise.
\begin{equation}
\hat{f}(x,y) = \underset{(s,t) \in S_{xy}}{\mathrm{median}} \, g(s,t)
\end{equation}
Use the function ''medfilt2'' to apply a median filter to an image. The inputs of the function are the image to be filtered and the kernel size. This function can only be applied to 2D matrices (a modification is required to process RGB images).

** Mean filter **

The mean filter is suitable for removing Gaussian noise.
\begin{equation}
\hat{f}(x,y) = \frac{1}{mn}\sum_{(s,t) \in S_{xy}} g(s,t),
\end{equation}
where $m,n$ represent the size of the mean kernel ($m \times n$). In MATLAB you can implement the mean filter similarly to Gaussian blurring, but with a different type of kernel (see the function documentation for details).

** MIN filter **
\begin{equation}
\hat{f}(x,y) = \min_{(s,t) \in S_{xy}} g(s,t)
\end{equation}
Min filtering can be achieved by applying the morphological operation erosion (''imerode'') to the image. The parameter of the filter is the kernel size.

** MAX filter **
\begin{equation}
\hat{f}(x,y) = \max_{(s,t) \in S_{xy}} g(s,t)
\end{equation}
Max filtering can be achieved by applying the morphological operation dilation (''imdilate'') to the image. The parameter of the filter is the kernel size.
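The following is a minimal sketch of how the four filters above might be applied in MATLAB, assuming a 2D grayscale image in double format and an example kernel size; it is illustrative only and not the required ''imrestore'' implementation.

<code matlab>
% Sketch: classical statistical filters on a grayscale double image (example kernel size).
g = im2double(imread('cameraman.tif'));                      % any 2D grayscale test image
k = 3;                                                       % kernel size (example value)

fMedian = medfilt2(g, [k k]);                                % median filter
fMean   = imfilter(g, fspecial('average', k), 'replicate');  % mean (averaging) filter
fMin    = imerode(g, strel('square', k));                    % MIN filter (erosion)
fMax    = imdilate(g, strel('square', k));                   % MAX filter (dilation)
</code>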
===Image sharpening===

**What is image sharpness?** Sharpness is given by the steepness of intensity changes, i.e. by the image derivatives. Edges in an image are located at positions where the image function changes rapidly. The more rapid the change, the sharper the edge is perceived to be. Conversely, slow changes in intensity are perceived as blurry.

**Lack of sharpness**: this can have many causes - lens optics, etc.

{{:courses:a4m33dzo:cviceni:uloha03:clipbrd1.png|Sharp image.}} //Sharp image.//
{{:courses:a4m33dzo:cviceni:uloha03:clipbrd2.png|Blurry image.}} //Blurry image.//

**Digital sharpening** is a process that increases the contrast along edges. However, it also amplifies noise, which likewise represents jumps in the image function. One method of image sharpening is the application of an [[http://en.wikipedia.org/wiki/Unsharp_masking|unsharp mask]].

==Unsharp mask==

The unsharp mask is created by subtracting a blurred image from the original one. To get the blurred image, we convolve the original image with a Gaussian kernel.

$U = I - G * I \qquad(1)$

The sharpened image is obtained by adding the unsharp mask to the original image.

$S = I + \alpha U\qquad(2)$

This results in a //halo// effect that increases the contrast at edges and thus the perception of sharpness. The $\alpha$ parameter controls the strength of the effect. (A short MATLAB sketch of this procedure is shown after the assignment below.)

{{:courses:a4m33dzo:cviceni:uloha03:clipbrd3.png|Unsharp mask}} //Unsharp mask (mid grey = 0)//
{{:courses:a4m33dzo:cviceni:uloha03:clipbrd4.png|Sharpened image}} //Sharpened image//

====Assignment - Image restoration====

Implement the following function:
  * **''[restoredImage] = imrestore(degradedImage, restorationType, restoration_parameters)''**: This function restores ''degradedImage'' using the selected restoration technique. The argument ''restorationType'' specifies the technique to use; it can be 'mean' for the averaging filter, 'median' for the median filter, 'min' or 'max' for the minimum or maximum filters, and 'sharpen' for sharpening. The possible values of the argument ''restoration_parameters'' depend on the specified restoration technique; see the function's code for more information.
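Below is a minimal MATLAB sketch of unsharp masking following equations (1) and (2) above, as it might be used for the 'sharpen' option. The test image name, kernel size, sigma, and $\alpha$ are arbitrary example values, not the required implementation.

<code matlab>
% Sketch: unsharp masking per equations (1) and (2); all parameters are example values.
I = im2double(imread('tower.jpg'));         % assumed test image from the package
G = fspecial('gaussian', 9, 2);             % Gaussian kernel (example size and sigma)
U = I - imfilter(I, G, 'replicate');        % unsharp mask, eq. (1)
alpha = 1.5;                                % strength of the sharpening effect
S = I + alpha * U;                          % sharpened image, eq. (2)
S = min(max(S, 0), 1);                      % clip back to the valid intensity range [0,1]
</code>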
====Fourier transform====

What is the Fourier transform? Here's a plain-English metaphor [[https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/| (source)]]:
  * **What does the Fourier Transform do?** Given a smoothie, it finds the recipe.
  * **How?** Run the smoothie through filters to extract each ingredient.
  * **Why?** Recipes are easier to analyze, compare, and modify than the smoothie itself.
  * **How do we get the smoothie back?** Blend the ingredients.

Here's the "math English" version of the above:
  * The Fourier Transform takes a time-based pattern, measures every possible cycle, and returns the overall "cycle recipe" (the amplitude, offset, & rotation speed for every cycle that was found).

=== Mathematics background===

Any signal (e.g. an image) can be expressed as a sum of a series of sinusoids (sinusoidal variations in brightness across the image) ([[http://cns-alumni.bu.edu/~slehar/fourier/fourier.html|source]]). Each such sinusoid can be characterized by:
  - spatial frequency: the frequency across space (the x-axis in this case) with which the brightness modulates.
  - magnitude (positive or negative): corresponds to contrast, i.e. the difference between the darkest and brightest peaks of the image. A negative magnitude represents a contrast reversal.
  - phase: represents how the wave is shifted relative to the origin.

(Still confused? Watch this video: [[https://www.youtube.com/watch?v=spUNpyF58BY|But what is the Fourier Transform? A visual introduction.]])

If $f(x,y)$ is a continuous function of real variables $x$ and $y$, its 2D Fourier transform $F(u, v)$ can be expressed as follows:
\begin{equation}
F(u, v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \exp[-i 2 \pi (ux + vy)] \, dx \, dy
\end{equation}
The inverse Fourier transform in 2D:
\begin{equation}
f(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(u,v) \exp[i 2 \pi (ux + vy)] \, du \, dv
\end{equation}

A Fourier transform encodes a whole series of sinusoids, with spatial frequencies ranging from zero (i.e. no modulation, i.e. the average brightness of the whole image) all the way up to the Nyquist frequency, i.e. the highest spatial frequency that can be encoded in the digital image, which is related to the resolution, i.e. the size of the pixels. The Fourier transform encodes all of the spatial frequencies present in an image simultaneously: a signal containing only a single spatial frequency $f$ is plotted as a single peak at the point $f$ along the spatial-frequency axis, with the height of that peak corresponding to the amplitude, or contrast, of that sinusoidal signal.

====Image enhancement using Fourier transform====

**Lowpass filter**

Lowpass filters in general smooth out noise in the image. An ideal lowpass filter has the transfer function:
\begin{equation}
H_{\mathrm{LP}}(u,v) = \begin{cases} 1, & \text{if } D(u,v) \leq D_0 \\ 0, & \text{if } D(u,v) > D_0 \\ \end{cases}
\end{equation}
where $D_0$ is a positive number and $D(u,v)$ is the distance from the point $(u,v)$ to the center of the filter. The locus of points for which $D(u,v) = D_0$ is a circle. Because the filter multiplies the Fourier transform of the image, an ideal filter "cuts off" (multiplies by 0) all components of $F(u, v)$ outside the circle and leaves unchanged (multiplies by 1) all components on, or inside, the circle.

|Filter| Input image | Fourier domain | Restored image |
| Low pass filter| {{courses:b4m33dzo:labs:image_fft_example.jpg?220|}} | {{courses:b4m33dzo:labs:low-pass-filter-fft.png?220|}} | {{courses:b4m33dzo:labs:low-pass-filter.png?220|}} |
| Low pass filter| {{courses:b4m33dzo:labs:image_fft_example.jpg?220|}} | {{courses:b4m33dzo:labs:low-pass-filter-fft2.png?220|}} | {{courses:b4m33dzo:labs:low-pass-filter2.png?220|}} |

**Highpass filter**

Highpass filtering sharpens the image by attenuating the low frequencies while leaving the high frequencies of the Fourier transform relatively unchanged; it lets the high-frequency components of the image pass through. A highpass filter can improve an image by sharpening details; however, it also amplifies noise and can therefore degrade image quality. Given the transfer function $H_{\mathrm{LP}}(u,v)$ of a lowpass filter, the transfer function of the corresponding highpass filter is:
\begin{equation}
H_{\mathrm{HP}}(u,v) = 1-H_{\mathrm{LP}}(u,v)
\end{equation}
Therefore:
\begin{equation}
H_{\mathrm{HP}}(u,v) = \begin{cases} 0, & \text{if } D(u,v) \leq D_0 \\ 1, & \text{if } D(u,v) > D_0, \\ \end{cases}
\end{equation}

|Filter| Input image | Fourier domain | Restored image |
| High pass filter| {{courses:b4m33dzo:labs:image_fft_example.jpg?220|}} | {{courses:b4m33dzo:labs:high-pass-filter-fft.png?220|}} | {{courses:b4m33dzo:labs:high-pass-filter.png?220|}} |
| High pass filter| {{courses:b4m33dzo:labs:image_fft_example.jpg?220|}} | {{courses:b4m33dzo:labs:high-pass-filter-fft2.png?220|}} | {{courses:b4m33dzo:labs:high-pass-filter2.png?220|}} |
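Below is a minimal sketch of ideal low-pass and high-pass filtering in the frequency domain. It builds the binary masks directly from the distance $D(u,v)$ defined above (the assignment template instead suggests building the circular mask with ''fspecial('disk', ...)''); the image name and the cut-off radius $D_0$ are example assumptions.

<code matlab>
% Sketch: ideal low-pass / high-pass filtering in the frequency domain (example values).
im = im2double(imread('cameraman.tif'));        % any grayscale test image
F  = fftshift(fft2(im));                        % spectrum with the DC component at the centre

[rows, cols] = size(im);
[u, v] = meshgrid(1:cols, 1:rows);
D  = hypot(u - (floor(cols/2) + 1), v - (floor(rows/2) + 1));  % distance D(u,v) to the centre
D0 = 30;                                        % example cut-off radius

Hlp = double(D <= D0);                          % ideal low-pass transfer function
Hhp = 1 - Hlp;                                  % ideal high-pass transfer function

lowpassed  = real(ifft2(ifftshift(F .* Hlp)));  % multiply in the frequency domain, transform back
highpassed = real(ifft2(ifftshift(F .* Hhp)));
</code>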
====Assignment====

**Highpass / Lowpass filtering**

The task is to create a mask and apply it in the frequency domain. For the high-pass filter, the mask is a white background with a black circle (whose size depends on the filter size); for the low-pass filter, it is a black background with a white circle. To apply the mask, multiply it with the result of the Fourier transform.

To create the mask for the filter, use MATLAB's function ''fspecial'' to create a ''disk'' kernel with the ''filterRadius'' size (see the code of the ''HLpassFilter'' function). Set the non-zero values of the kernel to 1 in the case of the lowpass filter and to 0 for the highpass filter; the rest of the mask values should be set to 0 or 1, respectively. Then position the disk kernel at the center of the mask. **Look out for the shift and the symmetry in the frequency domain.**

|Input image| Input FFT | Mask | High pass filter | Restored image |
| {{courses:b4m33dzo:labs:5_cam_original.png?220|}} | {{courses:b4m33dzo:labs:5_cam_fourier.png?220|}} |{{courses:b4m33dzo:labs:5_cam_high_pass_mask.png?220|}} | {{courses:b4m33dzo:labs:5_cam_high_pass.png?220|}} |{{courses:b4m33dzo:labs:5_cam_restored_high_pass.png?220|}} |

|Input image| Input FFT | Mask | Low pass filter | Restored image |
| {{courses:b4m33dzo:labs:5_cam_original.png?220|}} | {{courses:b4m33dzo:labs:5_cam_fourier.png?220|}} |{{courses:b4m33dzo:labs:5_cam_low_pass_mask.png?220|}} | {{courses:b4m33dzo:labs:5_cam_low_pass.png?220|}} |{{courses:b4m33dzo:labs:5_cam_restored_low_pass.png?220|}} |

**Artifacts removal 1**

The task is to remove the artifacts in the image of a cameraman. The frequency domain clearly shows regular artifacts. Remove (set to zero) those frequencies to restore the image. You should find and remove the white artifacts in the frequency domain; use the characteristic features of the error (repetition, symmetry). (A short sketch of zeroing one such frequency pair is given at the end of this assignment.)

|Input image| Input FFT | Restored image | Difference |
| {{courses:b4m33dzo:labs:5_input_cam.png?220|}} | {{courses:b4m33dzo:labs:5_fft_cam.png?220|}} |{{courses:b4m33dzo:labs:5_restored_cam.png?220|}} | {{courses:b4m33dzo:labs:5_diff_cam.png?220|}} |

**Artifacts removal 2**

The task is to remove the artifacts in the image of a landscape. The frequency domain clearly shows the frequencies of the artifacts. Remove (set to zero) those frequencies to restore the image. **Look out for the symmetry in the frequency domain!**

|Input image| Input FFT | Restored image | Difference |
| {{courses:b4m33dzo:labs:5_freqdist.png?220|}} | {{courses:b4m33dzo:labs:5_artifactfft.png?220|}} |{{courses:b4m33dzo:labs:5_restored.png?220|}} | {{courses:b4m33dzo:labs:5_diff.png?220|}} |

  - Implement the following functions:
    * **''[modifiedImage, inputFFT, outputFFT] = HLpassFilter(im, s, lowHIGH)''**: Filter the image ''im'' using a low-pass/high-pass filter selected by ''lowHIGH'', with the required size ''s'' (diameter of the circle).
    * **''[restoredImage, originalImage, artifactFFT, modifiedFFT] = artifact_removal1()''**: This function removes the added artifact, which has the form of periodic points in the image.
    * **''[restoredImage, originalImage, artifactFFT, modifiedFFT] = artifact_removal2()''**: This function removes the artifact from the landscape image.
  - Test the functions using the ''fftGUI''. You can start the GUI by running the script ''hw_restoration_2.m''. In the GUI, you can select the method: Artifact removal, Low-pass filter, or High-pass filter.
    * For artifact removal, you can choose two different input images: **cameraman grid** (for testing ''artifact_removal1'') and **freqdist** (for testing ''artifact_removal2'').
    * The ''HLpassFilter'' can be tested by selecting **Low-pass filter** / **High-pass filter**. You can load an image using the **Load image** button. You can set the filter size and then apply the filter using the **Apply filter** button.
    * The results will be shown in the GUI: the input image, the frequency domain of the image, the modified frequency domain, and the output image.
    * Use the functions ''fft2'', ''ifft2'', ''fftshift'' and ''ifftshift''.
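The following sketch illustrates zeroing one artifact frequency together with its conjugate-symmetric counterpart. The image file name and the peak coordinates are made-up examples; locate the real peaks by inspecting the (log-)magnitude spectrum.

<code matlab>
% Sketch: removing one artifact frequency and its symmetric twin (example coordinates).
im = im2double(imread('degraded.png'));         % hypothetical degraded input image
F  = fftshift(fft2(im));                        % centred spectrum

% Inspect the log-magnitude spectrum to find the bright artifact peaks:
% imshow(log(1 + abs(F)), []);

[rows, cols] = size(F);
cr = floor(rows/2) + 1;                         % row of the DC component after fftshift
cc = floor(cols/2) + 1;                         % column of the DC component
r = 100; c = 180;                               % hypothetical coordinates of one peak

F(r, c) = 0;                                    % zero the peak ...
F(2*cr - r, 2*cc - c) = 0;                      % ... and its conjugate-symmetric mirror

restored = real(ifft2(ifftshift(F)));           % back to the spatial domain
</code>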
======Assignment deadline======

**Monday 14.1.2019, 23:59**

**Please note**: Do not put your functions into any subdirectories. That is, when your submission ZIP archive is decompressed, the files should extract to the same directory where the ZIP archive is. Upload only the files containing the functions implemented by yourself, both the required and the supplementary ones.