First, similar to a cGAN, some noise is randomly generated and concatenated with image A to generate a fake image \(\hat{B}\). Then, the fake image \(\hat{B}\) is encoded into a latent vector using the same encoder as in the VAE-GAN. Finally, \(\hat{z}\) is sampled from the encoded latent vector, and a loss is computed against the input noise z. The data flow is \(z \rightarrow \hat{B} \rightarrow \hat{z}\) (the solid arrows in Figure (d)), with two losses:
The adversarial loss \(L_{GAN}\)
The L1 loss \(L_{1}^{latent}\) between the noise N(z) and the latent code
By combining these two data flows, a bijective cycle between the output and the latent space is obtained. The "bi" in BicycleGAN comes from bijection, a mathematical term that, simply put, means a one-to-one mapping that is also invertible. In this case, BicycleGAN maps the output to the latent space and, similarly, maps from the latent space back to the output. The total loss, combining both data flows, is as follows:

\[
G^{*}, E^{*} = \arg\min_{G,E}\max_{D}\; L_{GAN}^{VAE}(G,D,E) + \lambda L_{1}^{VAE}(G,E) + L_{GAN}(G,D) + \lambda_{latent} L_{1}^{latent}(G,E) + \lambda_{KL} L_{KL}(E)
\]
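To make the \(z \rightarrow \hat{B} \rightarrow \hat{z}\) branch concrete, here is a minimal PyTorch-style sketch of that data flow and its two losses. The generator G, encoder E, discriminator D, and the helper name clr_gan_losses are hypothetical placeholders for illustration, not definitions from the original text:

import torch
import torch.nn.functional as F

def clr_gan_losses(G, E, D, real_A, z_dim=8):
    # hypothetical G, E, D: generator, encoder, discriminator modules
    z = torch.randn(real_A.size(0), z_dim)    # input noise z ~ N(0, I)
    fake_B = G(real_A, z)                     # fake image B_hat = G(A, z)
    z_hat = E(fake_B)                         # re-encode B_hat into z_hat
    loss_latent = F.l1_loss(z_hat, z)         # L1 loss between noise and latent code
    logits = D(fake_B)                        # discriminator score on the fake image
    loss_gan = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))      # adversarial loss L_GAN (generator side)
    return loss_gan, loss_latent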
# Example: live-plotting a training curve with Visdom
import time
import numpy as np
from visdom import Visdom

env2 = Visdom(env='env2')                            # connect to the Visdom server
x, y = 0, 0.0
pane1 = env2.line(X=np.array([x]), Y=np.array([y]))  # create the pane to update

for i in range(10):
    time.sleep(1)        # emit one data point per second
    x += i
    y = (y + i) * 1.5
    print(x, y)
    env2.line(
        X=np.array([x]),
        Y=np.array([y]),
        win=pane1,       # the "win" argument selects which pane to update
        update='append') # append the new point to the existing curve
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Authors
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, Jaegul Choo
Abstract
Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN’s superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on facial attribute transfer and facial expression synthesis tasks.
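The key mechanism behind this single-model, multi-domain setup is that the generator takes the target domain as an extra input: a one-hot domain label is replicated spatially and concatenated with the image channels. Below is a minimal sketch of that conditioning step under those assumptions; the generator G, the function name translate_to_domain, and the parameter num_domains are illustrative placeholders, not names from the paper:

import torch

def translate_to_domain(G, x, target_domain, num_domains=5):
    # one-hot label for the desired target domain, one row per image in the batch
    c = torch.zeros(x.size(0), num_domains)
    c[:, target_domain] = 1.0
    # replicate the label over the spatial dimensions of x, concatenate it
    # with the image channels, then run the single shared generator
    c_map = c.view(x.size(0), num_domains, 1, 1).expand(-1, -1, x.size(2), x.size(3))
    return G(torch.cat([x, c_map], dim=1))

Because the domain label is just another input, one network can serve every source-to-target pair, which is what removes the need for a separate model per pair of domains.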
GAN stands for Generative Adversarial Network, proposed by Ian J. Goodfellow et al. in the paper "Generative Adversarial Nets," presented at NIPS in 2014. Since then, variants of every flavor, such as Pix2Pix, CycleGAN, StarGAN, and StyleGAN, have appeared one after another, and in applications like face swapping, outfit swapping, and background replacement they generate images and videos realistic enough to pass for the real thing. A while ago, the First Order Motion model implemented in PaddleGAN could turn a single photo into a video of the subject singing, and the comedic, meme-style clips it produced went viral across the internet; that, too, is a kind of GAN. LeCun, one of the three giants of deep learning, has also praised GANs highly, saying that "adversarial training is the coolest thing since sliced bread".