MLX
The core MLX framework for arrays, neural networks, autograd, and Apple silicon-optimized ML work.
MLX is promising. The onboarding is still messy. This site fixes that with plain-English explanations, sharp curation, and concrete next steps for people entering the MLX world.
You should leave understanding what MLX is, what each piece of the ecosystem does, and what deserves your attention.

This is not a giant repo pile. It's a smaller set of entries with judgment and context.
Every page should tell you what to try next, what to skip for now, and why.
Repo soup is not onboarding. The first job is to make MLX legible.
Don't browse randomly. Start with the job you need done first, then follow the shortest path to clarity.
Understand MLX in plain English before you touch another repo.
Start with the strongest signal instead of skimming random stars.
See how framework, models, apps, docs, and tooling fit together.
Decode the terms fast so the docs stop sounding like encrypted notes.
Most MLX information assumes you already know the terrain. This site does the opposite: explain first, curate hard, then tell people what to try next.
Explain the categories in plain language so a smart newcomer stops feeling stupid.
Recommend what to test first based on intent, not based on whichever repo shouted the loudest.
Package the ecosystem in a way that's obvious, screenshot-friendly, and worth reposting on X.
The move is not “open more tabs.” The move is to reduce confusion in the right order.
Know the categories before you compare tools inside them.
For most people that means running an LLM locally first.
Examples beat speculation. Get one useful experiment working.
Interop, low-level bindings, and edge cases can wait until the basics click.
Not everything deserves your time on day one. These are the entries most likely to make the ecosystem click fast.
The core MLX framework for arrays, neural networks, autograd, and Apple silicon-optimized ML work.
The most practical entry point for running and fine-tuning language models in the MLX ecosystem.
A repo of example projects and demos showing how MLX gets used in practice.
The documentation hub for installation, APIs, and core MLX concepts.
A major source of MLX-compatible model weights so you can try models without conversion drama.
A community project for running vision-language models with MLX.
MLX-native implementations of modern image generation models on Apple silicon.
Think of MLX less like one app and more like a stack: official foundation, model-running lanes, native app tooling, multimodal experiments, and a few real apps built on top.
The base layer: MLX itself plus the core repos Apple maintains around it.
Where you learn, ask questions, and keep up with what the ecosystem is doing.
The easiest place for newcomers to get a real win: run a model locally or adapt one.
The hubs where compatible weights and model listings live so you can actually test things.
For builders who want MLX inside native Mac or iPhone apps instead of only Python scripts.
The lane where MLX stops being just text and starts touching images, speech, and mixed inputs.
The lower-level or bridge repos that matter once you care about portability, deployment, or integrations.
Real products and interfaces built on top of MLX that make the ecosystem feel tangible.