Local LLMs in 2026: a hardware-and-model field study

Three studies on what it actually takes to run a useful coding LLM at home: a positional-recall benchmark across Gemma 4, GLM 4.7 Flash, and Qwen 3.6 (dense and MoE variants); a from-scratch rebuild of the Urlist app; and a hardware sweep across M1, M2, and M5 Macs and an RX 9700 XTX.

Local LLMs · Benchmarks · Apple Silicon · AMD