SwiftLM, an Apple Silicon–only MLX inference server, provides a native Metal implementation of TurboQuant V2+V3 hybrid KV‑cache compression and NVMe SSD expert streaming.
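SwiftLM's actual compression runs in Metal kernels and the TurboQuant algorithm itself is more involved, but the core idea of low-bit KV-cache compression can be sketched in a few lines. This is a minimal, hypothetical NumPy sketch of symmetric per-row quantization of a KV-cache block; all function names here are illustrative and not part of SwiftLM's API:

```python
import numpy as np

def quantize_kv(block: np.ndarray, bits: int = 4):
    """Symmetric per-row quantization of a KV-cache block.

    A stand-in for the kind of low-bit compression a hybrid
    KV-cache scheme applies; returns integer codes plus a
    per-row scale needed to reconstruct the values.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(block).max(axis=-1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero rows
    codes = np.clip(np.round(block / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

def dequantize_kv(codes: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float values from codes and scales."""
    return codes.astype(np.float32) * scale

# 8 cached tokens with head_dim 64; compress and measure reconstruction error.
rng = np.random.default_rng(0)
kv = rng.standard_normal((8, 64)).astype(np.float32)
codes, scale = quantize_kv(kv, bits=4)
recon = dequantize_kv(codes, scale)
print(codes.dtype, float(np.abs(kv - recon).max()))
```

At 4 bits the codes occupy an eighth of the original float32 storage (plus one scale per row), which is the storage/accuracy trade the hybrid scheme tunes per layer.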
An iOS app that bundles NDLOCR-Lite's DEIMv2 + PARSeq models with ONNX Runtime Mobile, running the full pipeline — camera capture → perspective correction → layout detection → text recognition → confidence-based correction — entirely on device.
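The on-device pipeline is a straight chain of stages, with the last stage gating on recognizer confidence. Below is a hedged Python skeleton of that control flow only: the stage functions are hypothetical stubs standing in for the real models (DEIMv2 layout detection, PARSeq recognition via ONNX Runtime), and the `[?]` flagging is an illustrative correction policy, not NDLOCR-Lite's actual one:

```python
from dataclasses import dataclass

@dataclass
class TextLine:
    text: str
    confidence: float

# --- Hypothetical stage stubs (real app runs ONNX models here) ---
def correct_perspective(image):
    """Warp the captured page to a frontal view."""
    return image

def detect_layout(image):
    """Return text-line regions found on the page (one dummy region here)."""
    return [image]

def recognize(region) -> TextLine:
    """OCR a single region; returns text plus a model confidence score."""
    return TextLine(text="\u306d\u3053", confidence=0.42)

def correct_low_confidence(line: TextLine, threshold: float = 0.6) -> TextLine:
    """Confidence-based correction: flag lines the recognizer was
    unsure about instead of trusting them blindly."""
    if line.confidence < threshold:
        return TextLine(text="[?]" + line.text, confidence=line.confidence)
    return line

def run_pipeline(frame) -> list[TextLine]:
    page = correct_perspective(frame)
    return [correct_low_confidence(recognize(r)) for r in detect_layout(page)]

for line in run_pipeline("camera-frame"):
    print(line)
```

Keeping each stage behind a plain function boundary like this is what makes it easy to swap a stub for an `onnxruntime` session call without touching the rest of the chain.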