Automated and Interactive Segmentation for Medical Image Data
The project aims to provide innovative methods for the automatic segmentation of organs in medical image data. We apply image processing and computer vision techniques to medical information from ultrasound (US) volumes and MR images for the 3D reconstruction and visualisation of organs. These automatic segmentation methods can be integrated into current systems for image-guided interventions in order to reduce the interaction required of the clinician during clinical procedures.
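The segmentation methods themselves are not detailed in this summary; as a deliberately simple illustration of one classical baseline (thresholding a 3-D volume and keeping the largest connected region, assuming intensity separates the organ from the background), one might write:

```python
from collections import deque

import numpy as np


def largest_component(mask):
    """Largest 6-connected component of a boolean 3-D mask, via BFS."""
    visited = np.zeros_like(mask, dtype=bool)
    best = []
    for start in zip(*np.nonzero(mask)):
        if visited[start]:
            continue
        comp, queue = [], deque([start])
        visited[start] = True
        while queue:
            z, y, x = queue.popleft()
            comp.append((z, y, x))
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (z + dz, y + dy, x + dx)
                if (all(0 <= nb[i] < mask.shape[i] for i in range(3))
                        and mask[nb] and not visited[nb]):
                    visited[nb] = True
                    queue.append(nb)
        if len(comp) > len(best):
            best = comp
    out = np.zeros_like(mask, dtype=bool)
    for voxel in best:
        out[voxel] = True
    return out


def segment_organ(volume, threshold):
    """Threshold the volume, then keep the largest connected region
    (a simple stand-in, not the project's actual methods)."""
    return largest_component(np.asarray(volume) > threshold)


# Synthetic "volume": a bright ball plus one spurious bright voxel.
z, y, x = np.ogrid[:24, :24, :24]
sphere = (z - 12) ** 2 + (y - 12) ** 2 + (x - 12) ** 2 < 36
vol = sphere.astype(float)
vol[0, 0, 0] = 1.0  # noise voxel that thresholding alone would keep
seg = segment_organ(vol, 0.5)
```

Real US and MR data are far noisier than this toy volume, which is precisely why more sophisticated automatic methods are needed; the connected-component step here only illustrates how spurious responses can be discarded.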
Spatiotemporal visual contrast sensitivity to complex scenes
This is a 4-year project, which started at the CVIT in November 2011. It deals with the measurement and modelling of spatiotemporal human vision using pictorial stimuli.
The aims have been to establish new methodologies for spatiotemporal visual contrast measurements that are directly relevant to the way humans perceive contrast in natural scenes, or images of them, and to provide a novel visual modelling framework linking the structure of real scenes to all individual components of the imaging chain, including the neural mechanisms of human vision.
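The project's own spatiotemporal measures are not specified in this summary, but a common global baseline against which scene-referred measures are compared is RMS contrast (the standard deviation of luminance divided by its mean). A minimal sketch:

```python
import numpy as np


def rms_contrast(image):
    """RMS contrast: std of luminance over mean luminance.
    A standard global measure for natural images; the project's
    spatiotemporal metrics are not specified in this summary."""
    image = np.asarray(image, dtype=float)
    return float(image.std() / image.mean())


# A two-level test patch: luminances 0 and 2, so mean 1 and std 1.
patch = np.array([[0.0, 2.0], [2.0, 0.0]])
contrast = rms_contrast(patch)
```

Unlike Michelson contrast, RMS contrast is defined for arbitrary images rather than just periodic gratings, which is one reason it is often preferred when working with complex natural scenes.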
It is a truly multidisciplinary project that combines expertise in visual, computer and imaging sciences and is funded by the Ministry of Defence’s Defence Science and Technology Laboratory (DSTL).
See the ITRG Publications for published work on the project.
Content-based Video Retrieval
The aim of the project is to provide a fully automatic and computationally efficient framework for robust video segmentation and for the classification of videos based on active-region behaviour. The methodology consists of the following steps: video segmentation, active region extraction, generation of active region patterns, determination of an activity model and, finally, retrieval.
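None of these stages is specified in detail here. As one much simplified reading of the pipeline, active regions can be located by frame differencing, summarised as a spatial motion histogram per video, and retrieved by nearest pattern; all function names and parameters below are illustrative assumptions, not the project's actual design:

```python
import numpy as np


def active_region_pattern(frames, motion_thresh=0.1, grid=4):
    """Frame-difference a clip and pool the resulting motion energy
    into a grid x grid histogram -- a minimal stand-in for the
    project's active-region patterns."""
    frames = np.asarray(frames, dtype=float)
    motion = np.abs(np.diff(frames, axis=0)) > motion_thresh
    h, w = motion.shape[1:]
    hist = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = motion[:, i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            hist[i, j] = cell.mean()
    return hist.ravel()


def retrieve(query_pattern, database_patterns):
    """Index of the stored clip whose pattern is nearest (Euclidean)."""
    dists = [np.linalg.norm(query_pattern - p) for p in database_patterns]
    return int(np.argmin(dists))


def blinking_video(py, px, n_frames=6, size=16):
    """Synthetic clip: a single pixel toggling on and off."""
    clip = np.zeros((n_frames, size, size))
    clip[::2, py, px] = 1.0
    return clip


db = [active_region_pattern(blinking_video(2, 2)),    # motion top-left
      active_region_pattern(blinking_video(13, 13))]  # motion bottom-right
query = active_region_pattern(blinking_video(3, 3))   # also top-left
match = retrieve(query, db)
```

The point of the sketch is the shape of the pipeline: each video is reduced to a fixed-length pattern vector, so retrieval becomes a nearest-neighbour search over those vectors.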
Abnormal behaviour recognition of Human Activities
Yong-Li Yang, Alexandra Psarrou
The aim of the project is to automatically identify unusual behaviours in cluttered environments. The methodology involves building models to detect and track moving targets, identify their behaviours in different contexts, discriminate between normal and abnormal behaviours, perform long-term tracking, and fuse data from more than one camera.
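A much reduced sketch of the normal/abnormal discrimination step, assuming trajectories are already available from the tracker and using mean speed as the only behaviour feature (both choices are illustrative assumptions, not the project's actual models):

```python
import numpy as np


def mean_speed(track):
    """Mean per-step speed of an (x, y) trajectory."""
    steps = np.diff(np.asarray(track, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).mean())


class SpeedAnomalyDetector:
    """Flags trajectories whose mean speed deviates strongly from
    those seen in training -- a minimal stand-in for the project's
    behaviour models, which are not specified in this summary."""

    def fit(self, normal_tracks):
        speeds = [mean_speed(t) for t in normal_tracks]
        self.mu = float(np.mean(speeds))
        self.sigma = float(np.std(speeds)) + 1e-8  # avoid divide-by-zero
        return self

    def is_abnormal(self, track, z_thresh=3.0):
        return abs(mean_speed(track) - self.mu) / self.sigma > z_thresh


# Normal behaviour: walking-pace tracks at slightly varying speeds.
normal = [[(i * s, 0.0) for i in range(10)] for s in (0.9, 1.0, 1.1)]
detector = SpeedAnomalyDetector().fit(normal)
running = [(i * 10.0, 0.0) for i in range(10)]  # far faster than training
walking = [(i * 1.05, 0.0) for i in range(10)]  # within the normal range
```

In practice a single scalar feature is far too crude for cluttered scenes with multiple cameras, but the structure (learn a model of normal behaviour, then flag statistical outliers) is the standard framing for this class of problem.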