Data-driven supervised learning has performed well on various visual tasks. As models grow larger and deeper, large amounts of high-quality training data become increasingly important. However, obtaining labeled data suited to a given task and domain is very expensive.
If a digital twin world is constructed to closely resemble the real world, then not only can the desired ground truth be obtained in unlimited quantities in virtual space, but data that is difficult to collect in the real world can also be obtained easily. Of course, a model trained only on synthetic data obtained through simulation does not perform well, because of the distribution shift between the synthetic and real domains.
In this paper, we propose the UDAS framework, which achieves high performance with only a small amount of training data by using synthetic data to reduce both the cost of data acquisition and the dependency on in-domain training data. The UDAS framework combines unsupervised domain adaptation (UDA) with semi-supervised learning techniques such as self-training and consistency regularization. In addition, a Domain CutMix technique is added to reduce the domain distribution gap. Using this framework, we achieve higher mIoU than the UDA state of the art for semantic segmentation. Moreover, its performance is close to that of models trained with supervised learning, and it is more stable and accurate than supervised learning when the training data is small.
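To make the Domain CutMix idea concrete, the following is a minimal sketch, not the paper's exact implementation: a random patch from a target-domain image (paired with its pseudo-label from self-training) is pasted into a source-domain image (paired with its ground-truth label), producing a mixed sample that bridges the two domain distributions. The function names, the `lam` parameterization, and the use of NumPy arrays are all assumptions for illustration.

```python
import numpy as np

def rand_bbox(h, w, lam, rng):
    """Sample a box whose area fraction is roughly (1 - lam),
    following the CutMix-style parameterization (assumed here)."""
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    return y1, y2, x1, x2

def domain_cutmix(src_img, src_lbl, tgt_img, tgt_pseudo_lbl, lam=0.5, seed=None):
    """Paste a patch from a target-domain image (with its pseudo-label)
    into a source-domain image (with its ground-truth label).
    Images are HxW(xC) arrays; labels are HxW class-index maps."""
    rng = np.random.default_rng(seed)
    h, w = src_img.shape[:2]
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    mixed_img, mixed_lbl = src_img.copy(), src_lbl.copy()
    mixed_img[y1:y2, x1:x2] = tgt_img[y1:y2, x1:x2]
    mixed_lbl[y1:y2, x1:x2] = tgt_pseudo_lbl[y1:y2, x1:x2]
    return mixed_img, mixed_lbl
```

A segmentation model can then be trained on `(mixed_img, mixed_lbl)` with a standard pixel-wise cross-entropy loss, so each mini-batch contains pixels from both domains within a single image.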