Researchers trained a neural network on the "speech" patterns in rats' ultrasonic sounds and built software from it that makes analyzing rat communication far easier. Remarkably, they did this by visualizing the rat sounds and applying existing computer-vision algorithms, but that's just an aside.
The long-term hope is that the algorithm becomes a kind of language key for "rat speech", or in other words: this is an algorithmic rat Babel fish.
The Verge: Meet DeepSqueak, an algorithm built to decode ultrasonic rat squeaks
Science Daily: ‘DeepSqueak’ helps researchers decode rodent chatter
Paper: DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations
Many researchers realize that mice and rats are social and chatty. They spend all day talking to each other, but what are they really saying? Not only are many rodent vocalizations inaudible to humans, but existing computer programs to detect these vocalizations are flawed: they pick up other noises, are slow to analyze data, and rely on inflexible, rules-based algorithms to detect calls.
Two young scientists at the University of Washington School of Medicine developed a software program called DeepSqueak, which lifts this technological barrier and promotes broad adoption of rodent vocalization research.
This program takes an audio signal and transforms it into an image, or sonogram. By reframing an audio problem as a visual one, the researchers could take advantage of state-of-the-art machine vision algorithms developed for self-driving cars. DeepSqueak represents the first use of deep artificial neural networks in squeak detection.
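The audio-to-image step described above is essentially a short-time Fourier transform. Here is a minimal sketch of that idea, not the authors' implementation: the sample rate, sweep parameters, and normalization are illustrative assumptions, chosen only to show how a 1-D ultrasonic signal becomes a 2-D sonogram image a vision model could consume.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for a recording: a rising ultrasonic tone sampled at
# 250 kHz, roughly the range of rodent vocalizations (values are illustrative).
fs = 250_000
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * (55_000 + 100_000 * t) * t)

# The short-time Fourier transform turns 1-D audio into a 2-D
# time-frequency array: rows are frequency bins, columns are time frames.
freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=512, noverlap=384)

# Log-scale the power and normalize to 0-255 so the sonogram can be handed
# to an off-the-shelf vision pipeline as a grayscale image.
img = 10 * np.log10(sxx + 1e-12)
img = ((img - img.min()) / (img.max() - img.min()) * 255).astype(np.uint8)

print(img.shape)  # (frequency bins, time frames)
```

From here, detecting calls in the sonogram is an ordinary object-detection problem, which is what lets DeepSqueak reuse machine-vision architectures.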
Rodents engage in social communication through a rich repertoire of ultrasonic vocalizations (USVs). Recording and analysis of USVs has broad utility during diverse behavioral tests and can be performed noninvasively in almost any rodent behavioral model to provide rich insights into the emotional state and motor function of the test animal. Despite strong evidence that USVs serve an array of communicative functions, technical and financial limitations have been barriers for most laboratories to adopt vocalization analysis.
Recently, deep learning has revolutionized the field of machine hearing and vision, by allowing computers to perform human-like activities including seeing, listening, and speaking. Such systems are constructed from biomimetic, "deep", artificial neural networks. Here, we present DeepSqueak, a USV detection and analysis software suite that can perform human-quality USV detection and classification automatically, rapidly, and reliably using a cutting-edge regional convolutional neural network architecture (Faster-RCNN). DeepSqueak was engineered to allow non-experts easy entry into USV detection and analysis, yet is flexible and adaptable with a graphical user interface and offers access to numerous input and analysis features. Compared to other modern programs and manual analysis, DeepSqueak was able to reduce false positives, increase detection recall, dramatically reduce analysis time, optimize automatic syllable classification, and perform automatic syntax analysis on arbitrarily large numbers of syllables, all while maintaining manual selection review and supervised classification. DeepSqueak allows USV recording and analysis to be added easily to existing rodent behavioral procedures, hopefully revealing a wide range of innate responses to provide another dimension of insights into behavior when combined with conventional outcome measures.