Neural Network Camera draws what it sees
Dan Macnish has built something like an AI Polaroid that bolts together object recognition and Google's Quick, Draw! experiment and outputs its pictures as drawings. Nice! (via Imperica)
One of the fun things about this re-imagined polaroid is that you never get to see the original image. You point and shoot, and out pops a cartoon: the camera's best interpretation of what it saw. The result is always a surprise. A food selfie of a healthy salad might turn into an enormous hotdog, or a photo with friends might be photobombed by a goat. […]
The camera is a mash-up of a neural network for object recognition, the Google Quick, Draw! dataset, a thermal printer, and a Raspberry Pi. Initially, I began with some experiments on my laptop. I set up an image processing pipeline in Python to take pre-captured images and recognise the objects in them, using pre-trained models from Google. At the same time, I explored the Quick, Draw! dataset and mapped the categories available in the dataset to the categories recognisable by the image processor. After writing some code to patch the two together, wrapping the lot in a Docker image, and cobbling together some electronics, interspersed with some hair-pulling moments of frustration, the camera was ready.
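The recognition step might look something like the sketch below. Macnish mentions pre-trained Google models; purely for illustration, this stands in torchvision's COCO-trained detector, and the label map is a hand-picked subset of the full 90-class COCO list.

```python
# Minimal object-recognition sketch. Not the author's pipeline: torchvision's
# Faster R-CNN (trained on COCO) stands in for the pre-trained Google models.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Illustrative subset of the COCO label map; the full map has 90 classes.
COCO_LABELS = {1: "person", 17: "cat", 18: "dog", 52: "banana", 53: "apple",
               55: "orange", 56: "broccoli", 58: "hot dog", 59: "pizza"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image_path, threshold=0.6):
    """Return the class names of all confident detections in the photo."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    return [COCO_LABELS.get(int(label), "unknown")
            for label, score in zip(output["labels"], output["scores"])
            if score >= threshold]

print(detect_objects("photo.jpg"))  # e.g. ['person', 'hot dog']
```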
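Mapping the detector's labels onto Quick, Draw! categories and pulling a random sketch could then look like this. The mapping dictionary is a hypothetical example (most COCO names happen to match Quick, Draw! category names directly), and the ndjson URLs point at the public simplified dataset files, which are large; a real build would cache them on the Pi rather than fetch per shot.

```python
# Sketch: map a detected label to a Quick, Draw! category and fetch one
# of the crowd-drawn sketches for it from the public simplified dataset.
import json
import random
import urllib.parse
import urllib.request

# Hypothetical label mapping; extend as needed for the detector's classes.
COCO_TO_QUICKDRAW = {"hot dog": "hot dog", "cat": "cat", "dog": "dog",
                     "person": "face", "banana": "banana"}

QUICKDRAW_URL = ("https://storage.googleapis.com/quickdraw_dataset/"
                 "full/simplified/{}.ndjson")

def random_drawing(category, sample_lines=500):
    """Pick a random drawing from the first few hundred in the file."""
    url = QUICKDRAW_URL.format(urllib.parse.quote(category))
    with urllib.request.urlopen(url) as response:
        lines = [response.readline() for _ in range(sample_lines)]
    record = json.loads(random.choice(lines))
    # A drawing is a list of strokes: [[x0, x1, ...], [y0, y1, ...]] per stroke.
    return record["drawing"]

strokes = random_drawing(COCO_TO_QUICKDRAW["hot dog"])
```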
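Finally, the strokes have to become pixels on thermal paper. Here is a rough sketch using Pillow and python-escpos, assuming an ESC/POS-compatible USB printer; the vendor/product IDs are placeholders for whatever printer is wired to the Pi. The scaling relies on the fact that drawings in the simplified dataset are normalised to a 0–255 coordinate box.

```python
# Sketch: rasterise Quick, Draw! strokes and send the bitmap to a
# thermal printer via python-escpos. IDs below are placeholders.
from PIL import Image, ImageDraw
from escpos.printer import Usb

def strokes_to_image(strokes, size=384, pad=10):
    """Draw strokes (in the dataset's 0-255 coordinate space) onto a bitmap."""
    scale = (size - 2 * pad) / 255.0
    img = Image.new("1", (size, size), color=1)  # 1-bit image, white background
    draw = ImageDraw.Draw(img)
    for xs, ys in strokes:
        points = [(pad + x * scale, pad + y * scale) for x, y in zip(xs, ys)]
        draw.line(points, fill=0, width=3)
    return img

# Stand-alone demo stroke (a single three-point line); in the camera this
# would be the `strokes` list fetched in the previous sketch.
demo_strokes = [[[20, 120, 230], [30, 10, 40]]]
img = strokes_to_image(demo_strokes)

printer = Usb(0x0416, 0x5011)  # placeholder vendor/product IDs
printer.image(img)
printer.cut()
```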