===== Lab 03 : CT Images =====

Topics:
  * **Visualization** We will take a look at the MITK software and inspect the 3D visualization of CT images.
  * **Classification** We will go through a short introduction to classification and look at the ilastik software.

==== HW03 Homework ====
  * **[3 pts]** [[#(A) CT Data Processing]]
  * **[2 pts]** [[#(B) CT Image Segmentation]]

==== (A) CT Data Processing ====

Download the {{http://mitk.org/wiki/Downloads| MITK Workbench}} application. Load the testing image {{ :courses:zsl:ct_whole_body.nrrd.zip |}}. //Optionally, you can try the data //lowdose_CT.nii.gz// from https://cmp.felk.cvut.cz/~herinjan/dt45knp3/; if the data are too large for your PC to process, you can try the smaller //lowdose_CT_cropped.nii.gz// file.//

After the file is opened, the image's name will appear as an entry in the //Data Manager// view. You will also see four rendering windows; they show the standard anatomical planes and a 3D visualization of the image. The //Level Window slider// appears on the right. {{ :courses:zsl:wbench_annotated.png?600|}}

  - **[0.5 pt]** **Adapt the Level Window slider** to show only bone structures in the image.
    * The purpose of the Level Window is to define how the image values, which are in Hounsfield units (stored internally as int32), are mapped to the standard display range. If the level window is set to a range $[W_l, W_u]$, then all values lower than or equal to $W_l$ get intensity 0, values greater than or equal to $W_u$ get intensity 255, and the intensity of points with values within $[W_l, W_u]$ is linearly interpolated.
    * The window is typically defined by its mean value $(W_u + W_l) / 2$ and its span $W_u - W_l$; these values can be entered into the boxes below the slider.
    * Which settings for the Level Window did you use? Insert a screenshot for each image in your report.
  - **[0.5 pt]** **Create a segmentation of bone structures.** Open the //Segmentation// plugin and use the ''UL Threshold'' segmentation tool from the ''3D Tools'' pane. Select a lower and an upper threshold and create a segmentation. Create a surface representation (right-clicking on the segmentation node in the //Data Manager// opens a context menu; choose the entry ''Create polygon model'') and make a screenshot for the report. Comment on your choice of thresholds used for segmentation.
    * First, you need to select ''Create new segmentation'' in the upper part of the Segmentation plugin; the tools will be active afterwards.
  - **[1 pt]** **Manual segmentation task**
    * Manual annotation is still the most common way for radiologists to create a segmentation.
    * Add a new segmentation to the femur image and use a tool of your choice to create a segmentation of muscle structure in **three consecutive** axial planes.
    * Report which tool you used, export the segmentation in ''.nii.gz'' format (''Save'' the segmentation image from the //Data Manager//) and upload it together with your report (the segmentation image should not be larger than 10 MB).
  - **[1 pt]** **Volume Visualization** Open the //Volume Rendering// plugin, adapt the transfer function to get a 3D visualization of bone structures, and put a screenshot (from the 3D rendering window) into your report.

==== (B) CT Image Segmentation ====

**Segmentation**

You have experienced how tedious a manual segmentation can be, which is one of the main motivations for the development of (semi-)automated segmentation methods. In this part, we will look at segmentation by means of //pixel classification//: finding a set of values (//features//) for each pixel that allows us to construct a criterion which assigns a class label to each pixel. The threshold-based segmentation from the first part is a simple example of this principle.
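The two numeric operations from part (A) can be sketched in a few lines of NumPy: the level-window mapping of Hounsfield values to the display range, and the upper/lower threshold criterion used by the ''UL Threshold'' tool. This is only an illustration of the formulas above, not MITK's actual implementation, and the window/threshold values are illustrative placeholders:

```python
import numpy as np

def apply_level_window(image, w_l, w_u):
    """Map Hounsfield values to the 0-255 display range: values <= w_l
    become 0, values >= w_u become 255, and values in between are
    linearly interpolated (as described for the Level Window slider)."""
    scaled = (image.astype(np.float64) - w_l) / (w_u - w_l) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

def threshold_segmentation(image, theta_l, theta_u):
    """Binary mask implementing the criterion theta_l <= I(x, y) <= theta_u."""
    return ((image >= theta_l) & (image <= theta_u)).astype(np.uint8)

# A toy "slice" in Hounsfield units (air, fat, soft tissue, bone-like values):
slice_hu = np.array([[-1000, -80],
                     [   40, 700]])

# Example bone-like window: level 300 HU, span 1500 HU (illustrative only),
# i.e. W_l = level - span/2, W_u = level + span/2.
level, span = 300.0, 1500.0
display = apply_level_window(slice_hu, level - span / 2, level + span / 2)

# Bone mask with illustrative thresholds (justify your own choice in the report):
mask = threshold_segmentation(slice_hu, 300, 2000)
print(display)  # [[  0  62] [ 83 195]]
print(mask)     # [[0 0] [0 1]]
```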
We have the pixel value $I(x,y)$ as the only feature and define the criterion for the //bone// class as $\theta_l \leq I(x, y) \leq \theta_u$. In this task, we will look for a larger set of features for each pixel, leading to a criterion for the classes //bone//, //muscle//, //fat// and //background//.

**Pixel classification**

**[2 pts] Homework task**

Download the {{https://www.ilastik.org/download.html| ilastik application}}. Download the input data archive {{ :courses:zsl:cv03_ctsegm.zip |}}.

In ilastik:
  - Create a new ''Pixel Classification'' project. {{ :courses:zsl:ilastik_new.png?400 |}}
  - Load the training images from the ''training'' folder within the data archive.
  - Select a set of features to be considered; ilastik offers pixel value, edge information, Laplacian and structure tensor at different smoothing levels. {{ :courses:zsl:ilastik_feature.png?400 |}}
  - Define four classes: ''Background'', ''Bone'', ''Fat'', ''Muscle''. {{ :courses:zsl:ilastik_labels.png?400 |}}
  - Draw some annotation examples for each class. {{ :courses:zsl:ilastik_annotate.png?400 |}}
  - Turn on ''Live Update'' to see how the classifier recognizes the four classes. Turn on the ''Uncertainty'' layer to see areas where the classifier may need additional labels; also switch between the training images and add labels if needed. {{ :courses:zsl:ilastik_result.png?400 |}}
  - Save the segmentation maps for all testing images and include them in the report.
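The pixel-classification workflow that ilastik automates can be illustrated with a minimal sketch: compute per-pixel features (here only the raw and Gaussian-smoothed intensities, a small subset of ilastik's feature set), train a random forest on a few sparse "scribble" labels, and predict a dense label map. Everything here is a toy example with made-up data and two classes instead of four; it is not ilastik's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image, sigmas=(1.0, 2.0, 4.0)):
    """Per-pixel feature vectors: raw intensity plus Gaussian-smoothed
    intensity at several scales, stacked as an (n_pixels, n_features) array."""
    img = image.astype(np.float64)
    feats = [img] + [gaussian_filter(img, s) for s in sigmas]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Synthetic image standing in for a CT slice: a bright "bone" square
# (~1000 HU) on a dark background, with noise.
rng = np.random.default_rng(0)
image = np.zeros((32, 32))
image[8:24, 8:24] = 1000.0
image += rng.normal(0.0, 50.0, image.shape)

X = pixel_features(image)

# Sparse annotations, as with ilastik's brush strokes: -1 = unlabelled.
labels = np.full(image.size, -1)
labels[:32] = 0                                   # background scribble (top row)
centre = np.ravel_multi_index((16, 16), image.shape)
labels[centre - 3:centre + 3] = 1                 # "bone" scribble (centre)

# Train only on the labelled pixels, then predict every pixel.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
annotated = labels >= 0
clf.fit(X[annotated], labels[annotated])
prediction = clf.predict(X).reshape(image.shape)
print(prediction[16, 16], prediction[0, 0])
```

The ''Uncertainty'' layer in ilastik corresponds to looking at ''clf.predict_proba'' where the class probabilities are close to each other; that is where additional labels help the most.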