Changes from v1 to v2 of Kana Chat, an AI agent built around official CLI wrappers. Covers dual-model router, Heartbeat memory, planner mode, image input, speech transcription, PWA push notifications, and the lessons learned from a month of daily use.
Flash-MoE is a C/Metal inference engine that runs Qwen3.5-397B-A17B on a MacBook Pro M3 Max at 4.36 tokens/s. With expert streaming from SSD and hand-written Metal shaders, it fits the 209GB model into a 48GB memory budget.
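The core trick named above, streaming expert weights from SSD into a fixed RAM budget, can be sketched as an LRU cache over experts. This is a minimal illustrative sketch in Python, not Flash-MoE's actual C/Metal code; the loader function, capacity, and `ExpertCache` name are all assumptions for illustration.

```python
from collections import OrderedDict


class ExpertCache:
    """LRU cache of MoE expert weights; misses stream from SSD."""

    def __init__(self, load_fn, capacity: int):
        self.load_fn = load_fn    # callback that reads one expert's weights from disk
        self.capacity = capacity  # how many experts fit in the RAM budget
        self.cache = OrderedDict()

    def get(self, expert_id: int):
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)  # mark as most recently used
            return self.cache[expert_id]
        weights = self.load_fn(expert_id)      # cache miss: stream from SSD
        self.cache[expert_id] = weights
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used expert
        return weights
```

Because only the top-k routed experts are touched per token, a cache far smaller than the full 209GB of weights can keep the hit rate high enough to sustain generation.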
A three-stage pipeline (BERT perplexity scan → LLM judgment → escalation) packaged as a cross-platform Python tool. Its installer automatically downloads llama-server and GGUF models.
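The staged design makes sense because each stage is more expensive than the last. A minimal sketch of the control flow, with stand-in scoring functions (the real tool uses a BERT perplexity model and llama-server; the thresholds and function names here are illustrative assumptions):

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    stage: str      # which stage produced the decision
    flagged: bool   # whether the text was escalated


def perplexity_scan(text: str) -> float:
    # Stand-in for the BERT perplexity score; higher means more anomalous.
    return 100.0 / max(len(set(text.split())), 1)


def llm_judgment(text: str) -> bool:
    # Stand-in for the local LLM (llama-server) yes/no judgment.
    return "error" in text.lower()


def triage(text: str, ppl_threshold: float = 20.0) -> Verdict:
    # Stage 1: cheap perplexity scan passes clearly normal text through.
    if perplexity_scan(text) < ppl_threshold:
        return Verdict("perplexity_scan", False)
    # Stage 2: only suspicious text reaches the slower LLM judge.
    if not llm_judgment(text):
        return Verdict("llm_judgment", False)
    # Stage 3: confirmed cases are escalated downstream.
    return Verdict("escalation", True)
```

The cheap first stage filters the bulk of the input so the LLM stage only runs on candidates worth its latency.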
Redesigned with inference latency as its top priority, Mamba-3 combines exponential trapezoid discretization, complex-valued states, and a MIMO structure to reach roughly 6.9× Transformer speed at 16,384 tokens.
Cursor released Composer 2 without disclosing its base model; probing its OpenAI-compatible API revealed it to be Kimi K2.5. This escalated into a licensing dispute, but a formal commercial agreement with Moonshot AI was subsequently confirmed.
H Company's Holotron-12B uses a new memory-efficient design to lift PC-operation AI throughput to 8,900 tokens per second. Separately, Unsloth has released the beta of 'Studio,' a browser tool for no-code model fine-tuning.
Anthropic has GA'd a 1M-token context window. Long context carries no surcharge, and the per-request image/PDF limit was raised from 100 to 600. The model achieved a frontier-model-best score on MRCR v2.
Sarvam AI released 30B and 105B models trained entirely in India—from pretraining through RL—featuring support for 22 constitutionally recognized Indian languages and inference optimizations.
Bundling NDLOCR-Lite's DEIMv2 + PARSeq with ONNX Runtime Mobile in an iOS app to run camera capture → perspective correction → layout detection → text recognition → confidence-based correction entirely on device.
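The stage ordering of that on-device pipeline can be sketched as a simple chain. This is a hedged Python sketch of the control flow only; the actual app is Swift with ONNX Runtime Mobile, and the DEIMv2 / PARSeq model calls below are stubbed with placeholder returns.

```python
Image = list  # stand-in type for an image buffer


def perspective_correct(img: Image) -> Image:
    # Stub: would warp the captured page to a fronto-parallel view.
    return img


def detect_layout(img: Image) -> list:
    # Stub: DEIMv2 would return cropped text-region images here.
    return [img]


def recognize(region: Image) -> tuple:
    # Stub: PARSeq would return (text, confidence) for one region.
    return ("hello", 0.92)


def correct(text: str, conf: float, threshold: float = 0.5) -> str:
    # Low-confidence lines are marked for correction rather than trusted as-is.
    return text if conf >= threshold else f"[?]{text}"


def run_pipeline(captured: Image) -> list:
    # camera capture → perspective correction → layout detection
    # → text recognition → confidence-based correction
    page = perspective_correct(captured)
    lines = []
    for region in detect_layout(page):
        text, conf = recognize(region)
        lines.append(correct(text, conf))
    return lines
```

Keeping every stage as a pure function over image/text data is what makes the whole chain portable to ONNX Runtime Mobile with no server round-trips.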
Node.js version manager Volta reached end-of-maintenance in November 2025. A breakdown of PATH conflicts with tools such as Claude Code and Codex, complete Volta uninstall steps, and a comparison of migration targets.
Attempting WAN 2.2 I2V video generation on Windows with an RTX 4060 (8GB VRAM). The 5B fp8 model produced rough quality; the 14B Rapid distilled model with lowvram offloading turned out to be the practical choice.