Question

Vector arithmetic in the latent space has been demonstrated to produce meaningful output images from a trained DCGAN in the paper by Radford, Metz and Chintala. Notably, the arithmetic they describe was not performed directly on single latent vectors, which they report produced unstable outputs. Instead, they first averaged the latent vectors of several samples that shared a visual concept, and only then demonstrated that Man with Spectacles - Man + Woman = Woman with Spectacles. But how relevant is this property for the newer Wasserstein GAN with Gradient Penalty (WGAN-GP) introduced by Gulrajani et al.? Since WGAN-GP trains much more stably than DCGAN, can the vector arithmetic now be done directly on individual latent vectors? I would appreciate pointers to relevant work on vector arithmetic with trained WGAN-GP models.
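For concreteness, the averaging trick from the DCGAN paper can be sketched as follows. This is only an illustration with random stand-in vectors: the variable names, the latent dimension of 100, and the commented-out `generator` call are all assumptions, and in practice the per-concept vectors would be latent codes whose generated samples were inspected and grouped by a human.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 100  # assumed latent size, as in the DCGAN paper

# Stand-ins for latent codes whose generated images showed each concept.
# In a real experiment these would be hand-selected from generator samples.
z_man_glasses = rng.standard_normal((3, latent_dim))
z_man = rng.standard_normal((3, latent_dim))
z_woman = rng.standard_normal((3, latent_dim))

# Averaging several codes per concept before doing the arithmetic is what
# the DCGAN paper reports as necessary for stable results.
z_result = (z_man_glasses.mean(axis=0)
            - z_man.mean(axis=0)
            + z_woman.mean(axis=0))

# The combined code would then be decoded by the trained generator:
# image = generator(z_result)
print(z_result.shape)
```

Whether the same averaging step is still needed for a WGAN-GP generator is exactly the open question here; the arithmetic itself is identical regardless of the training objective.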

No correct solution

Licensed under: CC-BY-SA with attribution