Right after Seedance 2.0 launched, a torrent of clips infringing Hollywood IP flooded social networks. Disney, Netflix, and Paramount sent cease-and-desist letters; the API release was postponed indefinitely, and the face-cloning and person-reference features were disabled.
Standard Intelligence trained FDM-1, a general-purpose computer-action foundation model, on 11 million hours of screen recordings. Rather than building on an LLM, FDM-1 operates directly on video and action tokens, and its custom encoder achieves 50-100x better compression than existing VLMs.
One engineer plus AI reimplemented Next.js on Vite for roughly $1,100 in token costs. The result, vinext, shipped with 4.4x faster builds and a 57% smaller bundle, and was already running in production within its first week.
Anthropic accused three Chinese AI companies of distilling Claude, and on the same day OpenAI retired SWE-bench Verified. Training fraud and evaluation flaws exposed simultaneously on February 23, 2026.
A look at Anthropic’s Claude Code Security: its technical approach, false‑positive mitigations, the GitHub Action, comparisons with competing tools, and why $15B briefly vanished from cybersecurity stocks.
An intrusion campaign that used DeepSeek and Claude to auto-scan FortiGate devices across 106 countries; Starkiller, a reverse-proxy phishing-as-a-service (PhaaS) kit that bypasses MFA; Anthropic's Claude Code Security finding 500+ vulnerabilities in production OSS; and PayPal exposing SSNs for six months due to a coding mistake.
Andrej Karpathy coined "Claws" for a coordination layer above AI agents; June Kim tackles the same problem from a different angle with the Cord framework, implemented with MCP and SQLite. This piece traces the shift from single-shot agents to autonomous coordination systems from both conceptual and implementation perspectives.
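To make the "SQLite as a coordination substrate" idea concrete, here is a minimal sketch of a shared task queue that multiple agents could poll. The schema and function names (`tasks`, `claim_task`, etc.) are illustrative assumptions, not Cord's actual design:

```python
import sqlite3

# Hypothetical sketch: agents pull work from a shared SQLite queue
# instead of receiving one-shot prompts. Not Cord's real schema.

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tasks (
            id     INTEGER PRIMARY KEY,
            goal   TEXT NOT NULL,
            status TEXT NOT NULL DEFAULT 'pending',  -- pending/claimed/done
            owner  TEXT,
            result TEXT
        )""")
    conn.commit()

def enqueue(conn: sqlite3.Connection, goal: str) -> int:
    cur = conn.execute("INSERT INTO tasks (goal) VALUES (?)", (goal,))
    conn.commit()
    return cur.lastrowid

def claim_task(conn: sqlite3.Connection, agent: str):
    """Claim the oldest pending task; the status check in the UPDATE's
    WHERE clause prevents two agents from claiming the same row."""
    row = conn.execute(
        "SELECT id, goal FROM tasks WHERE status='pending' "
        "ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    updated = conn.execute(
        "UPDATE tasks SET status='claimed', owner=? "
        "WHERE id=? AND status='pending'", (agent, row[0])).rowcount
    conn.commit()
    return row if updated else None

def finish(conn: sqlite3.Connection, task_id: int, result: str) -> None:
    conn.execute(
        "UPDATE tasks SET status='done', result=? WHERE id=?",
        (result, task_id))
    conn.commit()
```

The appeal of this pattern is that the database itself is the coordination protocol: any agent (or MCP tool wrapping these calls) that can open the file can participate, with no central orchestrator process.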
Kiro autonomously deleted production, causing 13 hours of AWS downtime; Claude Code's auto-compaction irreversibly erases context; sub-agents silently burn through usage quotas. Three incident reports from the same week.
Two February 2026 papers on reducing inference cost: Together AI's Consistency DLM (up to 14.5x faster) and MIT/Harvard's Attention Matching KV compaction (50x compaction in seconds).
This article explains how Cline’s issue‑triage bot was exploited via a three‑step chain—prompt injection, cache poisoning, and credential commingling—leading to an unauthorized package release that potentially affected about five million users.
As of February 2026, the Seedance 2.0 API is not yet public. This article summarizes the outlook for ComfyUI integration once the API is released and the preparations to make.
Using IBM and UC Berkeley's IT-Bench benchmark and the MAST failure taxonomy, this article examines why enterprise AI agents fail. It covers the reality of 11% SRE success and 0% FinOps success, plus the Replit production database deletion incident.