GGUF
Hugging Face Hub supports all file formats, but has built-in features for GGUF format, a binary format that is optimized for quick loading and saving of models, making it highly efficient for inference purposes. GGUF is designed for use with GGML and other executors. GGUF was developed by @ggerganov who is also the developer of llama.cpp, a popular C/C++ LLM inference framework.
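As a concrete illustration of "optimized for quick loading", the minimal sketch below downloads a GGUF file from the Hub with `huggingface_hub` and parses only its fixed-size header (magic, version, tensor count, metadata key/value count). The repo id and filename are assumptions picked for illustration, not something taken from this note; any GGUF repository on the Hub should behave the same way.

```python
# Minimal sketch: fetch a GGUF file from the Hugging Face Hub and read its header.
# The repo_id/filename below are illustrative assumptions.
import struct

from huggingface_hub import hf_hub_download

# Download (or reuse from the local cache) a quantized GGUF model file.
path = hf_hub_download(
    repo_id="TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",   # assumed example repo
    filename="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",    # assumed example file
)

with open(path, "rb") as f:
    # GGUF header layout: 4-byte magic "GGUF", uint32 version,
    # uint64 tensor_count, uint64 metadata_kv_count (little-endian).
    magic = f.read(4)
    version, = struct.unpack("<I", f.read(4))
    tensor_count, kv_count = struct.unpack("<QQ", f.read(16))

assert magic == b"GGUF", "not a GGUF file"
print(f"GGUF v{version}: {tensor_count} tensors, {kv_count} metadata key/value pairs")
```

The metadata key/value section that follows this header (architecture, context length, quantization type, and so on) is what the Hub's built-in GGUF viewer surfaces; the `gguf` Python package maintained in the llama.cpp repository provides a full reader for it.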
Journals linked to this note:
Journal of Thursday, June 6, 2024, at 16:20
While working on 2024-06-06_1047:
- #JaiDécouvert https://github.com/PABannier/bark.cpp - Suno AI's Bark model in C/C++ for fast text-to-speech (from)
- #JaiDécouvert https://github.com/karpathy/llm.c - LLM training in simple, raw C/CUDA (from)
- #JaiLu about GGUF:
  Hugging Face Hub supports all file formats, but has built-in features for GGUF format, a binary format that is optimized for quick loading and saving of models, making it highly efficient for inference purposes. GGUF is designed for use with GGML and other executors. GGUF was developed by @ggerganov who is also the developer of llama.cpp, a popular C/C++ LLM inference framework.
  https://huggingface.co/docs/hub/gguf
- #JaiDécouvert llama : add pipeline parallelism support by slaren, in other words "Multi-GPU pipeline parallelism support" (from)
- #JaiDécouvert https://github.com/ggerganov/whisper.cpp by Georgi Gerganov
- #JaiDécouvert https://github.com/ggerganov/llama.cpp/discussions/3471
- #JaiDécouvert the Merge Request adding ROCm support: ROCm Port 1087 (from)
- #JaiDécouvert Basic Vim plugin for llama.cpp
- #JaiDécouvert https://github.com/rgerganov/ggtag, by Radoslav Gerganov (rgerganov), not Georgi Gerganov of llama.cpp despite the similar handle
- #JaiDécouvert Distributed inference via MPI - Model inference is currently limited by the memory on a single node. Using MPI, we can distribute models across a locally networked cluster of machines. (A conceptual sketch of this idea follows after this list.)
- #JaiDécouvert: from what I understand, the ggml library is the base component of Llama.cpp and Whisper.cpp
- #JaiDécouvert that Georgi Gerganov has launched his own company, https://ggml.ai (from), and that it is funded by, among others, Nat Friedman! Ha ha, him again 😍.
  ggml.ai is a company founded by Georgi Gerganov to support the development of ggml. Nat Friedman and Daniel Gross provided the pre-seed funding.
  We are currently seeking to hire full-time developers that share our vision and would like to help advance the idea of on-device inference. If you are interested and if you have already been a contributor to any of the related projects, please contact us at jobs@ggml.ai
- #JaiDécouvert Text-to-phoneme-to-speech https://twitter.com/ConcreteSciFi/status/1641166275446714368, I love it 🙂
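The MPI item above describes pipeline-style distribution: each node holds only a slice of the model and forwards activations to the next node. The sketch below is not llama.cpp's implementation, only a toy mpi4py illustration of that idea; the hidden size, layer count, and random weights are all made up for the example.

```python
# Toy illustration of the MPI pipeline idea: each rank owns a contiguous slice
# of the "model" (plain matrices standing in for layers) and passes activations
# to the next rank. This is NOT how llama.cpp implements it; it only sketches
# the concept. Run with: mpirun -n 4 python pipeline.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

HIDDEN = 512          # assumed hidden size
LAYERS_PER_RANK = 8   # assumed slice of the model held by each rank
rng = np.random.default_rng(seed=rank)

# Each rank only allocates its own layers, so per-node memory stays bounded.
layers = [rng.standard_normal((HIDDEN, HIDDEN)).astype(np.float32) * 0.01
          for _ in range(LAYERS_PER_RANK)]

if rank == 0:
    x = rng.standard_normal(HIDDEN).astype(np.float32)   # fake input activations
else:
    x = np.empty(HIDDEN, dtype=np.float32)
    comm.Recv(x, source=rank - 1)                         # wait for previous stage

for w in layers:                                          # run this rank's slice
    x = np.tanh(w @ x)

if rank < size - 1:
    comm.Send(x, dest=rank + 1)                           # hand off to next stage
else:
    print(f"rank {rank}: final activation norm = {np.linalg.norm(x):.4f}")
```

The point of the design is that each rank's memory footprint covers only its own layers, which is exactly the constraint the quoted note is about: a model too large for one node can still run across a locally networked cluster.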