Invisible Cities, a neural network that generates satellite imagery from doodles. It works a bit like style transfer, but for satellite pictures.
In this project, we trained a neural network to translate map tiles into generative satellite images. We trained individual models for several cities (Milan, Venice, and Los Angeles), allowing us to do city map style transfer (example above) by applying the aerial model of one city onto the map tiles of another.
Sort of related: Image-to-Image Translation with Conditional Adversarial Nets and Imaginary landscapes using pix2pix, "a brand-new tool which is intended to allow application-independent training of any kind of image transform. […] The operations work in both directions, so with a properly trained network one can generate a plausible satellite view from a map, or a building facade from a colored segmentation."
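For the curious: the conditional-adversarial objective behind pix2pix combines a standard GAN loss (a discriminator judging whether a map/satellite pair looks real) with an L1 reconstruction term that keeps the generated tile close to the paired ground-truth image. A minimal sketch of the generator's loss in plain Python, assuming the λ = 100 weighting from the pix2pix paper (function names are my own, and images are flattened pixel lists for illustration):

```python
import math

def bce(preds, targets):
    # Binary cross-entropy over the discriminator's per-patch probabilities.
    eps = 1e-8  # guard against log(0)
    return -sum(
        t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
        for p, t in zip(preds, targets)
    ) / len(preds)

def pix2pix_generator_loss(disc_fake, fake_pixels, real_pixels, lam=100.0):
    # Adversarial term: the generator wants D(map, fake) -> 1 ("real").
    adv = bce(disc_fake, [1.0] * len(disc_fake))
    # L1 term: stay close to the paired satellite tile, pixel by pixel.
    l1 = sum(abs(f - r) for f, r in zip(fake_pixels, real_pixels)) / len(fake_pixels)
    return adv + lam * l1
```

With a perfect reconstruction and a discriminator output of 0.5, only the adversarial term remains (≈ log 2); any pixel deviation is penalized heavily via the λ·L1 term, which is what pushes the output toward a plausible satellite view of the input map.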