Semi-Supervised Semantic Image Segmentation
Team: Bishesh Khanal and Pratima Upretee
Supervised deep learning methods have seen tremendous progress, with several successful applications, since the resurgence of neural networks at the beginning of the last decade. They have become the default choice for popular image-based tasks such as image classification, object detection, and semantic image segmentation when large amounts of annotated training data are available. However, when annotated labels are scarce, which is often the case in applications such as biomedical imaging, these networks struggle to match the performance of supervised settings. Recently, semi-supervised learning methods for image classification have seen exciting progress, approaching the performance of their supervised counterparts. Data annotation for semantic segmentation is much more laborious than for classification, and hence would benefit more from semi-supervised learning. However, semi-supervised semantic segmentation has not yet seen comparable success. Semi-supervised learning methods use some form of regularization, such as consistency regularization, to exploit the smoothness or cluster assumption, i.e., samples of the same class cluster together and different classes are separated by decision boundaries that lie in low-density regions. Ongoing studies seek to better understand how well the smoothness or cluster assumptions hold, and to explore better regularization and augmentation techniques for semantic segmentation.
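As a rough illustration of consistency regularization, the sketch below penalizes disagreement between a model's per-pixel class probabilities on an unlabeled image and on a perturbed version of it. The toy `predict` model, the Gaussian-noise `augment`, and the MSE penalty are all illustrative assumptions, not the method described in this work; real systems typically use a deep segmentation network and stronger augmentations.

```python
import numpy as np

def augment(x, rng):
    # Hypothetical weak augmentation: add small Gaussian noise to the image.
    return x + rng.normal(0.0, 0.05, size=x.shape)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(predict, x, rng):
    """Mean squared difference between per-pixel class probabilities
    for an unlabeled image and a perturbed version of it."""
    p_clean = softmax(predict(x))
    p_aug = softmax(predict(augment(x, rng)))
    return float(np.mean((p_clean - p_aug) ** 2))

# Toy stand-in for a segmentation model: per-pixel linear logits, 3 classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 3))            # maps a 1-channel pixel to 3 class logits
predict = lambda x: x[..., None] * W   # (H, W) -> (H, W, 3)

x = rng.normal(size=(8, 8))            # one unlabeled 8x8 image
loss = consistency_loss(predict, x, rng)
```

This unsupervised term is added to the usual supervised loss on the labeled subset, so the unlabeled images push the decision boundary toward low-density regions.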
We build upon these recent efforts to push the performance of semi-supervised semantic segmentation closer to that of supervised settings. While the challenges and approaches we explore apply to semantic segmentation of generic images, our applications focus on biomedical imaging for global health, where obtaining labeled data is much more challenging.
Manuscript in preparation.