Multimodal AI Models on Apple Silicon with MLX with Prince Canuma - #744

Today, we're joined by Prince Canuma, an ML engineer and open-source developer focused on optimizing AI inference on Apple Silicon devices. Prince shares his journey to becoming one of the most prolific contributors to Apple's MLX ecosystem, having published over 1,000 models and libraries that make open, multimodal AI accessible and performant on Apple devices. We explore his workflow for adapting new models to MLX, the trade-offs between the GPU and the Neural Engine, and how optimization methods like pruning and quantization improve performance. We also cover his work on "Fusion," a weight-space method for combining model behaviors without retraining, and his popular packages—MLX-Audio, MLX-Embeddings, and MLX-VLM—which streamline the use of MLX across different modalities. Finally, Prince introduces Marvis, a real-time speech-to-speech voice agent, and shares his vision for the future of AI, emphasizing a shift toward "media models" capable of handling multiple modalities. The complete show notes for this episode can be found at https://twimlai.com/go/744.
