
OUR MISSION IS TO
BUILD AND DEMOCRATIZE
ARTIFICIAL GENERAL INTELLIGENCE
THROUGH OPEN SCIENCE
AI RESEARCH LAB BASED IN PARIS


Speech-Native Models

Moshi is the first speech-native dialogue system, unveiled during our first keynote. Moshi processes speech directly rather than converting it to text and back, which gives it minimal latency and lets it understand emotions and other non-verbal aspects of communication.
Moshi extends seamlessly to multimodal inputs: we showcase this with MoshiVis, a version of Moshi that you can talk to about images.
Moshi's multi-stream paradigm also enabled us to create Hibiki, an end-to-end, real-time streaming speech translation system lightweight enough to run on a phone.
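
To make the contrast with conventional voice assistants concrete, here is a rough Python sketch of a cascaded pipeline versus a speech-native loop. Every name in it (asr, llm, tts, codec, speech_lm) is an illustrative placeholder, not Moshi's actual API.

```python
# Illustrative sketch only: placeholder objects, not Moshi's real API.

def cascaded_turn(audio_in, asr, llm, tts):
    """Classic cascaded assistant: each stage waits for the previous one,
    and tone, emotion and timing are lost at the transcription step."""
    text_in = asr.transcribe(audio_in)    # speech -> text
    text_out = llm.generate(text_in)      # text -> text
    return tts.synthesize(text_out)       # text -> speech

def speech_native_turn(audio_in, codec, speech_lm):
    """Speech-native dialogue: the model operates directly on discrete
    audio tokens, so non-verbal cues survive and output can be streamed
    frame by frame instead of waiting for a full text reply."""
    tokens_in = codec.encode(audio_in)              # speech -> audio tokens
    for tokens_out in speech_lm.stream(tokens_in):  # token-level generation
        yield codec.decode(tokens_out)              # tokens -> speech
```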

Neural Audio Codecs

Encoding and decoding signals in a compressed yet accurate manner is a cornerstone of modern AI systems. Our streaming neural audio codec Mimi efficiently models both semantic and acoustic information while operating in real time. Originally developed for Moshi, Mimi is now a key component of all our audio projects.
If you want to dive deeper, check out our tutorial on neural audio codecs. It builds from the basics all the way to modern codecs like Mimi, with plenty of examples and animations along the way.
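
For a quick hands-on feel of what a codec like Mimi does, here is a minimal round-trip sketch. It assumes the Hugging Face transformers integration of Mimi (the MimiModel class and the kyutai/mimi checkpoint); double-check the current documentation for exact signatures.

```python
# Encode audio into discrete tokens with Mimi and decode it back.
# Assumes the Hugging Face `transformers` integration (MimiModel,
# checkpoint "kyutai/mimi"); verify against the current docs.
import numpy as np
from transformers import MimiModel, AutoFeatureExtractor

model = MimiModel.from_pretrained("kyutai/mimi")
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/mimi")

# One second of silence as a stand-in for real speech (Mimi works on 24 kHz audio).
raw_audio = np.zeros(feature_extractor.sampling_rate, dtype=np.float32)
inputs = feature_extractor(
    raw_audio=raw_audio,
    sampling_rate=feature_extractor.sampling_rate,
    return_tensors="pt",
)

# Compress the waveform into a small grid of tokens, then reconstruct it.
encoder_outputs = model.encode(inputs["input_values"])
audio_codes = encoder_outputs.audio_codes          # (batch, codebooks, frames)
reconstruction = model.decode(audio_codes).audio_values
```

That small grid of discrete tokens, rather than the raw waveform, is what speech models like Moshi generate, which is what makes real-time operation feasible.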

Compact Language Models

