This study presents a virtual interior design method that extracts dynamic objects from input images and composites them onto augmented background images. In general, virtual interior design is performed from a fixed viewpoint facing the indoor environment, so we can initially capture a background image from which all dynamic objects, such as people or chairs, are excluded. New textures, such as wallpaper and flooring, can then be augmented onto this background image and used as a new background, which we call a custom image. Our proposed method combines several modules that work jointly for efficient interior design. For object area detection, we efficiently detect object areas by using the difference between the background image and the input image. The method also uses the background image to extract global illumination changes from the input image and apply them to the custom image. In addition, we apply semantic segmentation to the input image for enhanced object area detection. The final results of our approach give users a realistic experience: we extract objects, including regions where the scene has changed, from the object areas of the input image and composite them onto the custom image using alpha matting. In this way, our virtual interior design method provides realistic-looking outputs to users.
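The core pipeline described above, detecting object areas from the difference between the background image and the input image, then compositing the extracted objects onto the custom image, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the simple per-pixel difference threshold, and the use of a binary mask in place of a full alpha matte are all assumptions made for clarity.

```python
import numpy as np

def detect_object_mask(background, frame, threshold=30):
    # Hypothetical simplification of the paper's object area detection:
    # per-pixel absolute difference between the background image and the
    # input image; a pixel belongs to a dynamic object if its largest
    # channel difference exceeds the threshold.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > threshold

def composite(custom, frame, alpha):
    # Alpha-blend the extracted object pixels onto the custom image.
    # A real system would use a soft alpha matte; here alpha is a
    # hard 0/1 mask for illustration.
    a = alpha[..., None].astype(np.float32)
    return (a * frame + (1.0 - a) * custom).astype(np.uint8)

# Toy example: a flat gray background and one bright "object" pixel.
bg = np.full((4, 4, 3), 100, dtype=np.uint8)
frame = bg.copy()
frame[1, 2] = [250, 240, 230]                     # the dynamic object
mask = detect_object_mask(bg, frame)
custom = np.full((4, 4, 3), 60, dtype=np.uint8)   # re-textured background
out = composite(custom, frame, mask.astype(np.float32))
```

In the toy example, only the changed pixel is detected as object area and carried over onto the custom image; everywhere else the new texture shows through.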