Mike Tyka builds fictional faces with two-stage neural networks: Portraits of imaginary People. The first network generates the familiar neural-art faces at up to 256x256 pixels; he then feeds that output into a second network, which generates high-res portraits from it. (via Prosthetic Knowledge)
The receptive field of these networks is usually less than 256x256 pixels. One way around this is tiling combined with stacking GANs, which many people have experimented with; for example, this paper uses a two-stage GAN to get high resolution: (https://arxiv.org/abs/1612.03242).
I tried a similar approach and have finally been having some more success upres-ing GAN-generated faces to 768x768 pixels in two stages, and in some cases as far as 4k x 4k using three stages. This gives them much crisper detail. Since I'm doing this with art in mind, I don't mind if the results aren't realistic, but fine, high-res texture is important no matter what, even when it's surreal.
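The staged pipeline can be sketched roughly as follows. This is a minimal illustration, not Tyka's actual code: `stage1` and `refine_tile` are hypothetical stand-ins for the trained GAN stages (the stub just does nearest-neighbour upsampling, where the real second-stage network would hallucinate fine texture), and the tiler here uses non-overlapping tiles, whereas real pipelines overlap and blend tiles to hide seams.

```python
import random

def stage1(seed, size=8):
    """Stand-in for the first-stage GAN: produce a low-res 'face'.
    (Hypothetical stub; the real stage outputs ~256x256 images.)"""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def refine_tile(tile, factor=2):
    """Stand-in for a later-stage GAN: upsample one tile.
    Nearest-neighbour here; a real network would add crisp detail."""
    out = []
    for row in tile:
        up_row = [v for v in row for _ in range(factor)]   # widen each row
        out.extend([list(up_row) for _ in range(factor)])  # duplicate rows
    return out

def upres(image, tile=4, factor=2):
    """Split the image into tiles, refine each, stitch back together.
    This sidesteps the limited receptive field of the refiner network."""
    h, w = len(image), len(image[0])
    out = [[0.0] * (w * factor) for _ in range(h * factor)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            patch = [row[tx:tx + tile] for row in image[ty:ty + tile]]
            refined = refine_tile(patch, factor)
            for dy, row in enumerate(refined):
                for dx, v in enumerate(row):
                    out[ty * factor + dy][tx * factor + dx] = v
    return out

low = stage1(0, size=8)    # stage 1: generate a small "face"
mid = upres(low, tile=4)   # stage 2: tiled upres to 2x the size
high = upres(mid, tile=4)  # stage 3: tiled upres again, 4x total
```

Each stage only ever sees one tile at a time, which is what lets the chain reach resolutions far beyond any single network's receptive field.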
As usual I'm battling mode collapse and poor controllability of the results, and a fair amount of trickery is necessary to reduce artifacts. Specifically, the second-stage GAN is metastable between smooth skin and hairy skin and often produces patchy output. For now I'm using vanilla GANs, and these results are fairly cherry-picked.