PatchGAN is a type of discriminator for generative adversarial networks that only penalizes structure at the scale of local image patches. The PatchGAN discriminator tries to classify whether each $N \times N$ patch in an image is real or fake. This discriminator is run convolutionally across the image, averaging all responses to provide the ultimate output of $D$. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. It can be understood as a type of texture/style loss.
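The patch size $N$ is not set explicitly; it is the receptive field of the discriminator's final convolutional layer, determined by the kernel sizes and strides of the stacked layers. A minimal sketch of that arithmetic (the `receptive_field` helper and the layer list are illustrative, based on the 70×70 PatchGAN configuration commonly used with this paper: 4×4 convolutions with strides 2, 2, 2, 1, 1):

```python
# Compute the receptive field (effective patch size N) of a PatchGAN
# discriminator from its layer configuration.

def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, input-to-output order.

    Walks the layers accumulating the receptive field r and the cumulative
    stride j: each layer grows r by (k - 1) * j, then scales j by its stride.
    """
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Assumed 70x70 PatchGAN stack: four 4x4 convs (strides 2, 2, 2, 1)
# followed by a stride-1 4x4 conv producing the 1-channel patch map.
patchgan_70 = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(patchgan_70))  # → 70

# Degenerate 1x1 "PixelGAN" variant: only 1x1 convs, so each output
# value judges a single pixel.
print(receptive_field([(1, 1), (1, 1), (1, 1)]))  # → 1
```

Because the output is a map of per-patch real/fake scores rather than a single scalar, the same fully convolutional discriminator can be applied to arbitrarily large images; its responses are simply averaged to form $D$'s output.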
Source: *Image-to-Image Translation with Conditional Adversarial Networks*
| Task | Papers | Share |
|---|---|---|
| Translation | 118 | 14.25% |
| Image-to-Image Translation | 103 | 12.44% |
| Image Generation | 50 | 6.04% |
| Domain Adaptation | 33 | 3.99% |
| Semantic Segmentation | 30 | 3.62% |
| Style Transfer | 27 | 3.26% |
| Super-Resolution | 14 | 1.69% |
| Denoising | 13 | 1.57% |
| Image Segmentation | 12 | 1.45% |