Linux Kernel Source Tree Gets Official AI Coding Assistant Policy
A new document, Documentation/process/coding-assistants.rst, has been added to the Linux kernel source tree. It officially defines the rules for contributing to the kernel using AI coding assistants, and drew significant attention on Hacker News with 264 points and 173 comments.
The document was added by NVIDIA’s Sasha Levin, co-maintainer of Linux LTS kernels. It was committed on December 23, 2025 and approved by documentation maintainer Jonathan Corbet. The commit message explicitly states it is “based on the consensus reached at the 2025 Maintainers Summit.”
Discussion at the 2025 Maintainers Summit
The prototype of this policy was shaped at the 2025 Linux Kernel Maintainers Summit. Sasha Levin presented the basic framework, and proposals from Lorenzo Stoakes and Jiri Kosina led to further discussion.
Linus Torvalds himself weighed in, acknowledging that “LLM-generated code can carry legal concerns” while maintaining that the existing developer certification process could handle responsibility issues.
He also noted that AI code generation hadn’t yet reached a large scale in kernel development.
What was particularly interesting in the discussion was the shared recognition that AI tools were already being used productively beyond code generation.
Jens Axboe reported an instance where an automated tool caught an inverted conditional branch bug that had slipped past three human reviewers.
Alexei Starovoitov reported that roughly 60% of automated reviews were high quality, with an additional 20% containing useful insights.
This context connects to Google’s Sashiko release. Sashiko is an AI system for automated review of Linux kernel patches, reportedly detecting 53.6% of known bugs that had passed human review (previous article).
The Summit as a whole leaned toward transparency and cautious experimentation rather than strict regulation.
Three Rules
The merged coding-assistants.rst is a straightforward document that establishes three main rules.
AI Agents Must Not Add Signed-off-by
This is where the document uses its strongest language.
AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO).
In the Linux kernel, every patch requires a developer’s Signed-off-by tag. This constitutes signing the DCO (Developer Certificate of Origin)—a legal certification that “this code is GPL-2.0 compatible” and “I have the right to submit it.”
AI agents lack the capacity to perform this legal certification. The human submitter bears responsibility for all of the following.
| Responsibility | Description |
|---|---|
| Code review | Review all AI-generated code |
| License verification | Verify GPL-2.0-only compatibility |
| DCO signing | Add their own Signed-off-by tag |
| Full accountability | Take complete responsibility for the contribution |
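Concretely, this means the trailer block of an AI-assisted patch ends with the human submitter's own sign-off and nothing from the agent. The subject line, name, and address below are placeholders, not from the policy document:

```
foo: fix inverted error check in probe path

<description of the change>

Signed-off-by: Jane Developer <jane@example.com>
```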
Attribution via the Assisted-by Tag
When AI assistance is used, the commit message should include an Assisted-by tag in the following format.
Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
The document gives Assisted-by: Claude:claude-3-opus coccinelle sparse as an example. AGENT_NAME is the AI tool name, MODEL_VERSION is the model version used, and items in brackets indicate companion static analysis tools (coccinelle, sparse, smatch, clang-tidy, etc.). Basic tools like git, gcc, make, and editors are not listed.
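The format is regular enough to check mechanically. As a sketch only (the kernel ships no such helper; the function name and regular expression here are illustrative), an Assisted-by trailer can be split into its parts like this:

```python
import re

# Illustrative parser for the Assisted-by trailer format described above:
#   Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2] ...
# This is not a kernel tool; it only sketches the grammar of the tag.
TRAILER_RE = re.compile(
    r"^Assisted-by:\s*"
    r"(?P<agent>[^:\s]+):"      # AI tool name, up to the first colon
    r"(?P<model>\S+)"           # model version string
    r"(?P<tools>(?:\s+\S+)*)"   # optional static analysis tools
    r"\s*$"
)

def parse_assisted_by(line):
    """Return (agent, model, [tools]) or None if the line does not match."""
    m = TRAILER_RE.match(line)
    if not m:
        return None
    return m.group("agent"), m.group("model"), m.group("tools").split()

print(parse_assisted_by("Assisted-by: Claude:claude-3-opus coccinelle sparse"))
```

Running it on the document's own example yields the agent name, model version, and the two static analysis tools as separate fields.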
License and Development Process Compliance
Code generated by AI tools must be compatible with GPL-2.0-only and use appropriate SPDX license identifiers. Compliance with standard kernel development processes (coding-style.rst, submitting-patches.rst, etc.) is equally required.
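The SPDX requirement applies at the file level, exactly as kernel style already mandates for human-written code: a new C source file carries the identifier as a comment on its first line.

```
// SPDX-License-Identifier: GPL-2.0-only
```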
Tag Evolution and Why Co-developed-by Was Rejected
In Sasha Levin’s initial RFC (July 2025), the Co-developed-by tag was considered for AI attribution. However, following proposals from David Hildenbrand and Konstantin Ryabitsev, it was changed to Assisted-by.
The reason is clear—consistency with the Linux kernel’s existing tag system.
| Tag | Meaning |
|---|---|
| Signed-off-by | Legal DCO certification (required) |
| Reviewed-by | Statement of code review |
| Acked-by | Maintainer approval |
| Tested-by | Statement of testing |
| Reported-by | Credit for bug reporter |
| Co-developed-by | Co-developer (used in pair with Signed-off-by) |
| Assisted-by | AI assistance attribution (newly added) |
The kernel's rules require every Co-developed-by to be paired with a Signed-off-by from that co-author.
This directly contradicts the policy of not allowing AI to add Signed-off-by, so a new tag was needed.
By framing it as “assistance” rather than “co-development,” the position that AI is not an author was made explicit.
The Assisted-by tag was initially rejected by checkpatch.pl (the kernel’s patch verification script), but Sasha Levin addressed this in a v2 patch, which Joe Perches approved.
Points of Discussion on HN
The Hacker News comments played out along two axes: assessment of the policy and copyright concerns.
Top commenter qsort called it “refreshingly normal.”
The point is that simply requiring responsibility and license compliance is a baseline most people can get behind.
pibaker responded that “the fact that something this obvious needs to be spelled out says everything about the scale of the AI contribution problem.” The backdrop is developers submitting AI-generated patches without understanding the code.
Copyright risks stemming from AI training data were also a major point. sarchertech argued that guaranteeing GPL compliance is impossible when using AI trained on diverse sources. The counterargument: humans face the same risk of unconsciously reproducing code they’ve seen before.
The DCO mechanism doesn’t eliminate infringement risk itself—it clarifies where responsibility lies by transferring legal liability to the human submitter.
Unresolved Issues from the Summit
Not everything was settled in the Summit discussions.
Konstantin Ryabitsev raised alarms about dependence on proprietary AI systems.
He cited the BitKeeper precedent: kernel developers lost access to the proprietary version control tool the project once relied on after a licensing dispute.
That experience famously led to the creation of Git, and the point that similar dependency risks apply to AI tools carries weight.
Shuah Khan pointed out that access to expensive AI tools is skewed toward corporate-affiliated developers.
Individual contributors develop without AI assistance while corporate developers have access to full-stack AI tooling.
This is a problem that could affect the kernel community’s diversity.
An opt-out mechanism for AI-assisted reviews also remains undefined. There is currently no established means for maintainers who don’t want their patches reviewed by AI to refuse it.
AI and OSS Contribution Friction
The growing burden of AI-generated code on open source projects isn’t unique to the Linux kernel. Jeff Geerling warned that “AI is breaking open source,” and the curl project has seen low-quality AI contributions become a visible problem (previous article).
The Linux kernel’s approach chose to have humans bear full responsibility and disclose AI usage rather than banning it outright.
The Assisted-by tag is recommended rather than mandatory—Sasha Levin himself stated at the Summit that “enforcement is deliberately avoided.”
coding-assistants.rst is only a 49-line document; whether this framework actually works in practice will be answered by future commit logs.