Moshi
GitHub repository: https://github.com/kyutai-labs/moshi
Journal entries linked to this note:
Journal entry from Thursday, January 23, 2025 at 14:37
#JaiDécouvert Moshi (https://github.com/kyutai-labs/moshi).
Moshi is a speech-text foundation model and full-duplex spoken dialogue framework. It uses Mimi, a state-of-the-art streaming neural audio codec.
Moshi models two streams of audio: one corresponds to Moshi, and the other to the user. At inference, the user's stream is taken from the audio input, while Moshi's stream is sampled from the model's output. Alongside these two audio streams, Moshi predicts text tokens corresponding to its own speech, its inner monologue, which greatly improves the quality of its generation.
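The full-duplex loop described above can be sketched as follows. This is an illustrative sketch, not the actual moshi API: `StubMoshi` and its `step` method are hypothetical stand-ins for the real autoregressive model, just to show how the user's stream is consumed from input while Moshi's own audio codes and inner-monologue text tokens are sampled from the model's output at each step.

```python
# Illustrative sketch of Moshi's full-duplex loop (NOT the real moshi API).
# StubMoshi is a hypothetical placeholder for the actual model.
from dataclasses import dataclass
import random

@dataclass
class StepOutput:
    text_token: int        # inner-monologue text token for Moshi's own speech
    moshi_audio_code: int  # audio code sampled for Moshi's output stream

class StubMoshi:
    """Stand-in model: at each step it predicts a text token and an audio
    code for Moshi's stream, given both streams' latest codes."""
    def step(self, user_code: int, prev_moshi_code: int) -> StepOutput:
        # The real model predicts these autoregressively; here we use
        # random placeholders purely to illustrate the data flow.
        return StepOutput(text_token=random.randrange(1000),
                          moshi_audio_code=random.randrange(2048))

def dialogue_loop(model, user_stream):
    """Full-duplex loop: the user's stream comes from the audio input;
    Moshi's stream is fed back from the model's own previous output."""
    moshi_code = 0  # initial code for Moshi's stream
    text_tokens, moshi_codes = [], []
    for user_code in user_stream:
        out = model.step(user_code, moshi_code)
        moshi_code = out.moshi_audio_code   # sampled output becomes context
        text_tokens.append(out.text_token)
        moshi_codes.append(moshi_code)
    return text_tokens, moshi_codes

tokens, codes = dialogue_loop(StubMoshi(), user_stream=range(5))
print(len(tokens), len(codes))  # one text token and one audio code per step
```

The key point the sketch captures is the asymmetry between the streams: the user's codes are external input, whereas Moshi's codes are its own sampled output fed back as context, with a text token emitted in parallel at every step.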