This project aims to extract information about the spread of aerosols directly from image data. The application is particularly relevant to monitoring the spread of airborne infections: by tracking the growth of an aerosol cloud over time, we can obtain useful information about how it disperses. Obtaining image data from controlled experiments with human subjects is challenging, so we instead rely on simulations, which give us greater control over the aerosol parameters and provide ground-truth metadata automatically. These simulations can be generated with 3D rendering software such as Blender. Once the image and ground-truth data have been obtained, we train a U-Net, an encoder-decoder architecture for image segmentation that combines convolutions, transposed convolutions, and max-pooling layers with skip connections. The U-Net is trained to segment only the aerosol cloud, ignoring the human subject and other objects in the image. Using the segmented images, we can track the aerosol temporally and extract features.
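
To make the architecture concrete, the following is a minimal sketch of a U-Net style encoder-decoder for binary aerosol segmentation, assuming PyTorch. The layer widths, depth, input resolution, and training setup here are illustrative placeholders, not the project's actual configuration.

```python
# Minimal U-Net sketch: encoder-decoder with skip connections (assumed PyTorch).
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class MiniUNet(nn.Module):
    """Encoder-decoder that outputs per-pixel logits for the aerosol mask."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.enc1 = DoubleConv(in_ch, 32)
        self.enc2 = DoubleConv(32, 64)
        self.pool = nn.MaxPool2d(2)                                       # encoder downsampling
        self.bottleneck = DoubleConv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)   # decoder upsampling
        self.dec2 = DoubleConv(128, 64)                                   # 128 = 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = DoubleConv(64, 32)
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)                  # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))               # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))              # skip connection
        return self.head(d1)

if __name__ == "__main__":
    # One training step against simulated ground-truth masks (placeholder tensors).
    model = MiniUNet()
    images = torch.rand(2, 3, 128, 128)                   # stand-in for rendered frames
    masks = (torch.rand(2, 1, 128, 128) > 0.5).float()    # stand-in for aerosol masks
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = criterion(model(images), masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.4f}")
```

In practice, the rendered frames and their ground-truth aerosol masks exported from the simulation would replace the random placeholder tensors above.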
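Once the network produces a mask per frame, temporal tracking reduces to measuring the segmented cloud over the frame sequence. The sketch below assumes the masks are available as binary NumPy arrays; the chosen features (pixel area and bounding-box extent per frame) are illustrative examples of spread measurements, not the project's definitive feature set.

```python
# Sketch of per-frame feature extraction from predicted aerosol masks (assumed NumPy arrays).
import numpy as np

def aerosol_features(masks: list[np.ndarray]) -> list[dict]:
    """Per-frame cloud area and bounding-box extent, as a proxy for spread over time."""
    features = []
    for t, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            features.append({"frame": t, "area": 0, "width": 0, "height": 0})
            continue
        features.append({
            "frame": t,
            "area": int(mask.sum()),                 # number of aerosol pixels
            "width": int(xs.max() - xs.min() + 1),   # horizontal extent of the cloud
            "height": int(ys.max() - ys.min() + 1),  # vertical extent of the cloud
        })
    return features

# Example: a synthetic cloud that grows over three frames.
frames = []
for radius in (5, 10, 15):
    yy, xx = np.mgrid[:128, :128]
    frames.append(((yy - 64) ** 2 + (xx - 64) ** 2 <= radius ** 2).astype(np.uint8))
print(aerosol_features(frames))
```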