Semester work

The task is to implement a simple image processing algorithm with medical imaging applications, or to apply one to some (bio)medical imaging task. It should take you about 10 hours, so it should not be completely trivial.

You are encouraged to choose your own task; otherwise one will be assigned to you by the instructors. In both cases, students are expected to write a short description of their task and send it to their lab instructor for approval. Tasks must be proposed and approved by November 13 (8th week).

The implementation must be yours, written from scratch. You are not allowed to use any existing code, or trivial modifications thereof. Such attempts will be considered plagiarism and will result in failing the course. Libraries may be used for supporting tasks such as deep learning infrastructure, numerical methods, or classifiers, provided they are not the focus of the project. Library functions may also be used for common tasks (e.g., reading or displaying images, solving systems of linear equations, etc.). AI tools may only be used according to the declared policy. If in doubt, check with the instructor.

The required output of the project is the implementation itself and a short report describing the task in the context of state-of-the-art methods, the method, its implementation and how to use it, and the experimental results.

Submission

The report and the code must be uploaded as one archive to BRUTE as a solution to the assignment titled Semester work by January 6, 2025 (i.e., by January 6, 23:59:59). The report is expected to be about 3-5 pages.

The students will present their work in the last week of the semester, on January 8, 2025. Each presentation is expected to be 5-8 minutes long, to allow for a short discussion afterwards. The presentation need not be submitted to BRUTE.

Tips

  • Students are encouraged to discuss intermediate results with the teachers.
  • Written reports look better when prepared in LaTeX.
  • All sources should be properly cited.

Project ideas

  • Active contour segmentation based on edge terms: Implement from scratch, test on medical data such as heart ultrasound or MRI, brain ventricles, blood or other cells.
  • Active contour segmentation based on intensity (Chan-Vese): Implement from scratch with a contour-based implementation (easier), test on medical data such as heart ultrasound or MRI, brain ventricles, blood or other cells.
  • Level set segmentation: Implement a Chan-Vese method using level sets (for more challenge add narrow-band acceleration), test on medical data such as heart ultrasound or MRI, brain ventricles, blood or other cells.
  • Gradient vector flow segmentation: Implement from scratch, test on medical data such as heart ultrasound or MRI, brain ventricles, blood or other cells.
  • Segmentation with active shape model: Generate shape model, e.g., from lungs segmentation or heart ultrasound (e.g., the CAMUS dataset), and fit this model to find edge points by optimizing the rigid transformation parameters and shape parameters.
  • Active appearance model: Train an active appearance model on, e.g., the CAMUS ultrasound dataset. Build the active shape model on the boundaries (provided), register images to the same template (perhaps using an existing registration tool), and build the active appearance model (using PCA). Demonstrate that you can generate new images. All code should be written from scratch except the registration.
  • Autoencoder shape model: Train an autoencoder as a shape model using boundary representation. Use existing libraries such as PyTorch. Compare the approximation error on the test dataset with the PCA approach. Try to interpolate between two shapes. Try to generate new shapes. The autoencoder can be variational.
  • Finding superpixels: Implement a superpixel algorithm other than SLIC, e.g., Felzenszwalb and Huttenlocher, Veksler et al., or mean-shift (see references in the SLIC paper). You may use an existing max-flow/min-cut algorithm.
  • Superpixel segmentation: Use an existing library to find superpixels (e.g., via SLIC). Build a graph connecting neighboring superpixels, assign unary energy based on a Gaussian model, assign binary energy based on intensity differences. Segment using existing GraphCut/maxflow implementation. Test on some segmentation tasks, such as MRI brain slices (segment to white matter, gray matter, CSF, bone, background).
  • GraphCut segmentation: Implement a max-flow/min-cut or some other optimization algorithm for the labeling problem from scratch and use it for simple binary image segmentation, with a Gaussian intensity model and edge weights decaying based on intensity differences. Test on segmenting some medical data such as blood cells, brain ventricles, heart (muscle vs. chambers).
  • Random walker: Implement from scratch (you can use mathematical libraries for iterative solution of the linear system), test on some medical segmentation tasks such as brain MRI slices (segment to white matter, gray matter, CSF, bone, background).
  • Texture classification using textons: Segment images by SLIC superpixels, evaluate texture features using wavelet descriptors for each superpixel, and assign superpixels to classes from training data. Test on suitable medical data, such as ultrasound images of the carotid artery or heart or thyroid nodules, or microscopy images of the Drosophila eggs.
  • Anatomically constrained neural network: Implement the 'anatomically constrained neural network' method in 2D using existing PyTorch libraries, test on an existing dataset (such as the X-ray images of the lungs or the CAMUS ultrasound heart dataset) or build a synthetic dataset with 'organs' of different shapes (e.g., triangles vs circles) and texture. See if the shape prior is useful.
  • Cell nuclei detection: Implement a simplified Al-Kofahi method - thresholding based on the Otsu method or learnt from the data, Laplacian-of-Gaussian (LoG) filtering, GraphCut or watershed to break touching nuclei. Test on microscopy cell images.
  • Cell nuclei detection using simplified deep regression: Implement and train a simple CNN to regress the 'distance' from the background (test distance, normalized distance, exponentially transformed distance…) and apply it to cell detection as in Naylor et al. Test on microscopy cell images.
  • Deep regression based retina vessel segmentation: Implement and train a simple CNN to regress the 'distance' (or transformed version thereof) from the vessel centerline as in Sironi et al. (which does not use deep learning). Use non-maxima suppression in the normal direction. Optionally trace the vessel centerline using a shortest path algorithm, e.g., Dijkstra. Test on retina fundus images.
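Several of the ideas above build on the Chan-Vese model. To give an impression of the expected scale, here is a minimal sketch of a simplified level-set update in NumPy; the curvature term is approximated by a Laplacian, and all parameter values and function names are illustrative, not prescribed by the assignment:

```python
import numpy as np

def chan_vese_step(phi, img, mu=0.2, dt=0.5, eps=1.0):
    """One gradient step of a simplified Chan-Vese level-set update."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0     # mean inside the contour
    c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean outside
    # smoothed Dirac delta restricts the update to the contour vicinity
    delta = eps / (np.pi * (eps**2 + phi**2))
    # Laplacian as a cheap stand-in for the curvature regularization term
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    force = mu * lap - (img - c1)**2 + (img - c2)**2
    return phi + dt * delta * force

def chan_vese(img, n_iter=200):
    """Run the simplified evolution from a cone-shaped initial level set."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    phi = -np.sqrt((y - h / 2)**2 + (x - w / 2)**2) + min(h, w) / 4
    for _ in range(n_iter):
        phi = chan_vese_step(phi, img)
    return phi > 0  # binary segmentation mask
```

A full solution would add a proper curvature term and reinitialization (or the narrow-band acceleration mentioned above); this sketch only illustrates the structure of the iteration.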
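For the superpixel segmentation idea, the graph construction and energy terms could be sketched as follows. The function names and the Gaussian-model parameters are hypothetical, and the superpixel label image would come from an existing SLIC implementation; the actual labeling would then be done by an existing GraphCut/maxflow library:

```python
import numpy as np

def superpixel_graph(labels):
    """Collect edges between 4-connected superpixels from a label image."""
    edges = set()
    # compare each pixel with its right and bottom neighbor
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        for u, v in zip(a[diff], b[diff]):
            edges.add((int(min(u, v)), int(max(u, v))))
    return sorted(edges)

def energies(labels, img, fg_mean, bg_mean, sigma=0.1, beta=1.0):
    """Unary costs from a Gaussian intensity model; pairwise weights
    decay with the mean-intensity difference of neighboring superpixels."""
    ids = np.unique(labels)
    means = {int(i): img[labels == i].mean() for i in ids}
    unary = {i: ((m - fg_mean)**2 / (2 * sigma**2),   # cost of labeling i foreground
                 (m - bg_mean)**2 / (2 * sigma**2))   # cost of labeling i background
             for i, m in means.items()}
    pairwise = {(u, v): beta * np.exp(-(means[u] - means[v])**2 / (2 * sigma**2))
                for u, v in superpixel_graph(labels)}
    return unary, pairwise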
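For the cell nuclei detection idea, the Otsu thresholding step is small enough to implement from scratch. A possible sketch (bin count and variable names are illustrative) maximizes the between-class variance over all candidate thresholds:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's threshold: pick the bin maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)             # class-0 (background) probability
    w1 = 1.0 - w0                    # class-1 (foreground) probability
    mu = np.cumsum(hist * centers)   # cumulative mean
    mu_t = mu[-1]                    # total mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * w0 - mu)**2 / (w0 * w1)  # between-class variance
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]
```

The remaining steps of the Al-Kofahi pipeline (LoG filtering, watershed or GraphCut to split touching nuclei) would follow the same from-scratch spirit, reusing libraries only where the assignment explicitly allows it.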

Public datasets

courses/zmo/semestral/start.txt · Last modified: 2024/11/11 13:57 by barucden