Problem

I can't seem to convince myself why a GAN model similar to regGAN couldn't be modified to maximize a regression predictor (see the image below). By changing the loss function to the difference between the current predicted value and the maximum predicted value generated so far, wouldn't gradient descent converge such that the generator builds the inputs that maximize the prediction of the Discriminator CNN?

In pseudocode, the loss calculation would look like:

  yhat = current prediction
  ymax = best prediction achieved so far
  loss = ymax - yhat
  if loss < 0 then loss = 0 and ymax = yhat
  back-propagate the loss using SGD

If the current predicted value is higher than the maximum predicted so far, then the loss is 0 and ymax is updated to the new value. Essentially, we are changing the objective from generating inputs that look real to generating inputs that maximize the complex function encoded in the CNN.

[Image: GAN network architecture with generator and Discriminator CNN]

No correct solution

License: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange