Every abliterated Qwen 3.5 variant from huihui-ai produced garbage tokens, and the abliterated GLM-4.7-Flash had a broken chat template. The official model with thinking disabled turned out to be the right answer.
Right after Seedance 2.0 launched, a flood of Hollywood IP-infringing videos hit social networks. Disney, Netflix, and Paramount sent cease-and-desist letters; the API release was postponed indefinitely, and the face-cloning and person-reference features were disabled.
After a macOS update, tmux sessions started by cron lost access to the Keychain, causing Claude CLI batch jobs to fail silently. Covers the diagnosis, the fix, and why this is a structural macOS Keychain problem rather than a Claude CLI bug.
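As a hypothetical illustration of the kind of workaround involved (not the article's actual fix), a cron job can explicitly unlock the login keychain before the batch job runs, since cron's non-GUI session has no unlocked Keychain of its own. The password file path and job command here are assumptions for the sketch:

```shell
# Hypothetical crontab entry: unlock the login keychain before the
# Claude CLI batch job so the cron environment can read credentials.
# Keeping the password in a user-readable file is a security trade-off
# the article may resolve differently (e.g. via launchd instead of cron).
# m h dom mon dow  command
0 3 * * *  security unlock-keychain -p "$(cat "$HOME/.keychain-pw")" "$HOME/Library/Keychains/login.keychain-db" && claude -p "nightly batch" >> "$HOME/logs/claude.log" 2>&1
```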
An experiment log: from LUKE/BERT fill-mask fine-tuning, to perplexity-based error detection, to correction judgment by Qwen2.5 7B with escalation to a human when the two disagree. The complete pipeline runs on a single RTX 4060 Laptop GPU with 8 GB of VRAM.
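The mismatch-escalation step of that pipeline can be sketched as a small routing function. This is a minimal illustration under assumed interfaces, not the article's actual code; `Span`, its fields, and the label strings are all hypothetical:

```python
# Sketch of the escalation logic: when the perplexity-based detector and
# the LLM corrector disagree on whether a span is an OCR error, route the
# span to a human instead of auto-applying either verdict.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Span:
    text: str
    detector_flagged: bool            # perplexity detector thinks this is an error
    llm_correction: Optional[str]     # Qwen2.5 7B's proposed fix (None = "looks fine")


def route(span: Span) -> str:
    llm_flagged = span.llm_correction is not None and span.llm_correction != span.text
    if span.detector_flagged and llm_flagged:
        return "auto-correct"   # both agree it is an error: apply the LLM's fix
    if not span.detector_flagged and not llm_flagged:
        return "keep"           # both agree it is fine: leave the text as-is
    return "human-review"       # mismatch: escalate to a human


print(route(Span("teh", True, "the")))    # → auto-correct
print(route(Span("the", False, None)))    # → keep
print(route(Span("thee", True, None)))    # → human-review
```

The point of the design is that full agreement in either direction is cheap to automate, so human time is spent only on the genuinely ambiguous mismatches.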
From Docker hell to NDLOCR-Lite plus LLM correction: a retrospective on my own experiments, plus an introduction to someone else's browser-based NDLOCR-Lite implementation.
Covers a Cisco SD-WAN authentication bypass and UAT-8616's three-year campaign, NuGet/npm supply-chain attacks, and incidents involving Claude Code, Desktop Extensions, and a Mexican government breach.
Standard Intelligence trained a general-purpose computer-action foundation model on 11 million hours of screen recordings. Rather than building on an LLM, FDM-1 operates directly on video and action tokens with a custom encoder, achieving 50-100x better compression efficiency than existing VLMs.
Set up the CLI version of NDLOCR-Lite on Apple Silicon Mac, then tested OCR result correction with Qwen 3.5 and Swallow. Includes experiments with direct image reading and the anchoring effect.
Trend Micro analyzed a new AMOS distribution method that targets AI-agent workflows: a malicious SKILL.md on OpenClaw plants fake CLI install instructions, using the AI agent as an intermediary to manipulate the human.
One engineer plus AI reimplemented Next.js on Vite for about $1,100 in token cost. The result, vinext, shipped with 4.4x faster builds and a 57% smaller bundle, and was already running in production within its first week.