# Optimizing VRAM and Memory Allocation on Strix Halo for Local LLMs

Tech · Feb 15, 2026 (updated) · 5 min read

How to configure the VRAM/main-memory split on the GMKtec EVO-X2 (Strix Halo) for local LLM inference. A 29.6 GB model ran fine with just 8 GB of dedicated VRAM.

Tags: AI, LLM, Memory Optimization, AMD, LM Studio, Experiment