TAEWON KANG

itsc.kr

(2019) Unsupervised Image-to-Image Translation with Self-Attention Networks


Unsupervised Image-to-Image Translation with Self-Attention Networks (arXiv: 1901.08242)

Taewon Kang, Kwang Hee Lee

Unsupervised image translation aims to learn the transformation from a source domain to a target domain given unpaired training data. Several state-of-the-art works have yielded impressive results in GAN-based unsupervised image-to-image translation. However, these methods either fail to capture strong geometric or structural changes between domains, or produce unsatisfactory results for complex scenes, compared to local texture mapping tasks such as style transfer. Recently, SAGAN (Zhang et al., 2018) showed that a self-attention network produces better results than a convolution-only GAN. However, the effectiveness of self-attention networks in unsupervised image-to-image translation tasks has not been verified. In this paper, we propose unsupervised image-to-image translation with self-attention networks, in which long-range dependency helps to not only capture strong geometric changes but also generate details using cues from all feature locations. In experiments, we qualitatively and quantitatively show the superiority of the proposed method over existing state-of-the-art unsupervised image-to-image translation methods.

Download: PDF


   
In this paper, we propose a self-attention image-to-image translation network that allows long-range dependency modeling for image translation tasks with strong geometric changes.
Combined with self-attention, the generator can translate images in which fine details at every position are carefully coordinated with fine details in distant portions of the image. Furthermore, the discriminator can more accurately enforce complicated geometric constraints on the global image structure. In experiments, we show the superiority of the proposed method over existing state-of-the-art unsupervised image-to-image translation methods.

To explore the effect of the proposed self-attention mechanism, we built several SAGAN blocks by adding the self-attention mechanism at different stages of the generator and discriminator.
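To make the mechanism concrete, below is a minimal PyTorch sketch of a SAGAN-style self-attention block (Zhang et al., 2018) of the kind inserted into the generator and discriminator. The layer names and the channel-reduction factor of 8 are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over all spatial feature locations."""
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable scale, initialized to 0 so training starts from local
        # convolutional cues and gradually admits long-range evidence.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.size()
        n = h * w  # number of feature locations
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # B x N x C'
        k = self.key(x).view(b, -1, n)                     # B x C' x N
        attn = F.softmax(torch.bmm(q, k), dim=-1)          # B x N x N
        v = self.value(x).view(b, -1, n)                   # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        # Residual connection: every position can draw on all others.
        return self.gamma * out + x
```

Each output location is a weighted sum over all input locations, which is what lets the network coordinate fine details with distant portions of the image.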


[cat2dog]
In the process of translating a cat image (domain A) to a dog image (domain B), CycleGAN is unable to generate a dog image, since it only transfers the color of the input. With DRIT, the generated image is broken, and it is hard to recognize it as a dog reflecting the shape and orientation of the cat. With UNIT and MUNIT, the contour features are not reflected.

 
[img2cat and img2dog]
In the process of translating the human face images (domain A) shown in the figure into cat and dog images (domain B), CycleGAN is not able to derive the shape and contour. With DRIT, the shape and contour of the cat or dog become distorted and the cat's eyes are not reflected properly, so no plausible cat or dog image is generated. With UNIT and MUNIT, so much of the person's face is preserved that the output hardly resembles a cat.
 
[cat2img and dog2img]
We also experimented with converting cats and dogs to humans, to show that the method handles in-the-wild images well, not only datasets produced by simple cropping. These results show that the generated human face images properly derive the gaze, face outline, and overall appearance from the input, whereas none of the previous works reflect the shape and contour of the cat or dog. The results for the reverse direction can be seen in the figure (domain A: cat and dog, domain B: human face image).


[portrait]
Even in translating the portraits (domain A) shown in the figure to faces (domain B), CycleGAN is not able to convert portraits to faces at all. With DRIT, no meaningful conversion is performed; it generates irrelevant images. With UNIT and MUNIT, the image is distorted, although the shape is reflected.


[edges2shoes]
In the process of translating the edge images (domain A) shown in the figure to shoe images (domain B), the images generated by CycleGAN and DRIT are broken. With UNIT and MUNIT, the models keep only the outline of the photograph and cannot generate a realistic shoe image.

In all of these tasks, however, our model generated more realistic results than the other models while preserving the pose and style of domain A.


[Ablation Study]
In this section, experiments are conducted to evaluate the effectiveness of the self-attention (SA) networks for unsupervised image-to-image translation. In the figure, the variants "SA on downsampling layer (DS-layer)", "SA on upsampling layer (US-layer)", and "SA on DS-layers X 3 / US-layers X 3" are compared with our "SA on DS-layer / US-layer" model. With "SA on DS-layer" alone, "SA on US-layer" alone, and "SA on DS-layers X 3 / US-layers X 3", the generated images are broken. With the "SA on DS-layer / US-layer" model, comparing the original human face photo with the resulting cat image shows that the cat properly inherits the pose of the input, and this variant generates more realistic images than the other placements. Based on these experiments, we adopted "SA on DS-layer / US-layer" in our model.
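As a concrete illustration of the chosen placement, here is a hedged sketch of a generator with one self-attention block on the downsampling side and one on the upsampling side, i.e. the "SA on DS-layer / US-layer" variant. It reuses the SelfAttention block sketched above; the channel widths and layer counts are illustrative assumptions, not the paper's exact architecture.

```python
import torch.nn as nn
# Assumes the SelfAttention module sketched earlier is in scope.

def conv_block(cin, cout, stride):
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(3, 64, stride=2),
            conv_block(64, 128, stride=2),
            SelfAttention(128),          # SA on a downsampling (DS) layer
        )
        self.decoder = nn.Sequential(
            SelfAttention(128),          # SA on an upsampling (US) layer
            nn.Upsample(scale_factor=2, mode="nearest"),
            conv_block(128, 64, stride=1),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Placing attention on these mid-resolution feature maps keeps the N x N attention matrix tractable while still letting it span the whole image.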

[User Study]
For qualitative evaluation, we also conducted a user study with 80 participants. 192 images were selected randomly for the questionnaires, and participants were asked to select the output that best reflects the characteristics of the input image, including its shape and contour. The figure summarizes the survey results: our method reflects the characteristics of the input image better than the existing GAN models.

[Quantitative Evaluation Analysis]
We used the Fréchet Inception Distance (FID) to measure the distance between the training data and generated data distributions, using features extracted by the Inception network. A lower FID score indicates that the two distributions are more similar; to obtain a low score, a model needs to generate images with both high quality and diversity. We compute the FID score between the training data (trainB) and the generated data (the A-to-B translation results). If the image sizes differ between the two distributions, we resize the images of both distributions to the same size. The table shows the results of the FID analysis: our model generates images closer to the target distribution than the other image-to-image translation methods.
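For reference, the FID between the two feature distributions is ||mu1 - mu2||^2 + Tr(Sigma1 + Sigma2 - 2(Sigma1 Sigma2)^(1/2)). A minimal sketch follows, assuming feats_real and feats_fake are NumPy arrays of pooled Inception-v3 activations (N x 2048) extracted beforehand; the variable names are ours, for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    # Gaussian statistics of each feature distribution.
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the product of the covariances.
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```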

[Conclusion]
In this paper, we proposed a method for unsupervised image-to-image translation with self-attention networks, in which long-range dependency helps to not only capture strong geometric changes but also generate details using cues from all feature locations.

In experiments, we showed the superiority of the proposed method over existing state-of-the-art unsupervised image-to-image translation methods.
