
I Found an OSS Tool That 'Distills Colleagues into AI' and Looked Into Distilling Myself

By Ikesan

A tool that “distills your colleagues into AI” is blowing up on GitHub.

colleague.skill takes a colleague’s chat history and work documents,
feeds them to an AI, and teaches it how that person works.
It’s an MIT-licensed OSS tool that crossed 13,000 stars in just two weeks since its March 30, 2026 release.

My immediate thought was:
“If I distilled myself instead, wouldn’t that become my own AI character?”

Turns out, tools for exactly that already exist.
Multiple ones, as forks of colleague.skill.
Here’s the “human distillation” ecosystem
and how to actually distill yourself.

How colleague.skill Works

Starting with the original colleague.skill.
It follows Claude Code’s “skill” format, and the generated skill file has two parts.

| Part | Content |
| --- | --- |
| Part A: Work Skill | Technical standards, system knowledge, workflows, rules of thumb |
| Part B: Persona | 5-layer personality model |

The 5-layer personality model looks like this.

| Layer | Content | Examples |
| --- | --- | --- |
| Layer 0 | Hard rules | Things the person would never say or do |
| Layer 1 | Identity | Self-awareness, position, role |
| Layer 2 | Expression patterns | Tone, vocabulary, punctuation habits, emoji tendencies |
| Layer 3 | Decision patterns | Judgment criteria, priorities, risk tolerance |
| Layer 4 | Interpersonal dynamics | Social distance, reactions to conflict |
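To make the layering concrete, here is a minimal in-memory sketch of the 5-layer model. colleague.skill itself stores all of this as Markdown; the class and field names below are illustrative assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the 5-layer persona model as a data structure.
# Field names are illustrative, not colleague.skill's real format.
@dataclass
class Persona:
    hard_rules: list[str] = field(default_factory=list)         # Layer 0: never say/do
    identity: str = ""                                          # Layer 1: self-awareness, role
    expression: dict[str, str] = field(default_factory=dict)    # Layer 2: tone, emoji habits
    decision_patterns: list[str] = field(default_factory=list)  # Layer 3: judgment criteria
    interpersonal: dict[str, str] = field(default_factory=dict) # Layer 4: social distance

    def violates_hard_rule(self, utterance: str) -> bool:
        """Layer 0 acts as a hard filter over anything the AI generates."""
        return any(rule in utterance for rule in self.hard_rules)

p = Persona(hard_rules=["lgtm without reading"])
print(p.violates_hard_rule("lgtm without reading the diff"))  # True
```

The useful point of the layering is that Layer 0 is a veto, not a preference: an output that trips a hard rule is discarded regardless of how well the other layers match.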

Data sources are wide-ranging.

| Platform | Collection Method |
| --- | --- |
| Feishu (Lark) | Fully automated via API |
| DingTalk | Browser scraping (API doesn’t support history) |
| Slack | Automated via Bot API |
| WeChat | SQLite export (using WeChatMsg etc.) |
| Others | Email, PDF, screenshots, Markdown |

The execution model is “receive task -> persona determines attitude -> execute via Work Skill -> output in that person’s voice.”
The concept: even after someone quits, their “digital colleague” remains in the company.

The personality tag system is visceral.
You can specify interpersonal patterns like “blame-shifter,” “PUA master,” “passive-aggressive,” and “chronic read-receipt ignorer.”
There are also corporate culture tags with presets like “ByteDance-style,” “Alibaba-style,” “Tencent-style,” and “Huawei-style.”

26+ Forks in Two Weeks

Within two weeks of colleague.skill’s release, derivative projects exploded.

| Repository | Stars | Distillation Target |
| --- | --- | --- |
| nuwa-skill | 8,823 | Anyone (mental models, judgment criteria, expression DNA) |
| yourself-skill | 1,934 | Yourself (digital immortality) |
| anti-distill | 1,782 | Defense tool to prevent distillation |
| crush-skills | 158 | Your crush |
| teacher-skill | 149 | A teacher’s teaching style |
| ex-skills | 140 | Your ex (emotional memory OS) |
| boss-skill | 73 | Your boss (with PUA detection and counter-coaching) |
| parents-skills | 43 | Your parents |
| her-skill | 40 | A past lover |
| mom.skill | | Permanently preserve your mother’s memory |

boss-skill includes features for detecting your boss’s PUA tactics, coaching you through counter-arguments,
spotting “drawn pancakes” (empty promises), and even a quick labor-law reference.
At this point it’s less distillation and more of a boss-survival toolkit.

colleague.skill itself is planning a rename to “dot-skill,”
reflecting that distillation targets are no longer limited to colleagues.

“What if I Distilled Myself Into My Own AI Character?”

Looking at this list of derivatives, the thought comes naturally.
And yourself-skill is exactly that.

“Rather than distilling others, distill yourself. Welcome to digital immortality!” is the tagline.
The idea: the person you spend 24 hours a day with — yourself — is the easiest target for distillation.

Self-Distillation Steps with yourself-skill

The actual self-distillation process has 5 steps.

```mermaid
flowchart TD
    A["Step 1<br/>Basic Info<br/>(codename, bio, self-portrait)"] --> B["Step 2<br/>Import Materials<br/>(WeChat/QQ/SNS/photos/dictation)"]
    B --> C["Step 3<br/>Material Analysis<br/>(Self Memory + Persona)"]
    C --> D["Step 4<br/>Preview & Review<br/>(check summary, make corrections)"]
    D --> E["Step 5<br/>File Export<br/>(generate SKILL.md)"]
```

Step 1: Basic Info (Just 3 Questions)

You’re asked only three things.

  1. Codename / nickname (required)
  2. Basic info in one sentence (e.g., “25, product manager, Shanghai”)
  3. Self-portrait in one sentence (e.g., “INTJ Capricorn, shy but chatty, late-night emo vibes”)

Step 2: Import Materials

Choose from 5 methods (can combine).

| Method | Material | What Gets Analyzed |
| --- | --- | --- |
| A | WeChat history | Catchphrases, reply speed, topic distribution, particle habits |
| B | QQ history | How you talked when younger (teens onward) |
| C | SNS / diary | Values, expression style |
| D | Photos (with EXIF) | Life patterns, life timeline |
| E | Direct input / dictation | Catchphrases, how you decide, how you get angry, late-night thoughts |

Generation works even without materials.
Say “no files” and a minimal skill gets built from just Step 1’s info.
The README emphasizes that “late-night conversations, emotional exchanges, and decision-related chats
capture personality most faithfully.”
Self-description alone apparently yields weak results.

Method E’s “direct input” uses questions like these to extract self-awareness.

  • What are your catchphrases?
  • How do you think when making decisions?
  • What do you do when you’re sad?
  • What happens when you get angry?
  • What do you think about when alone late at night?
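What "catchphrase extraction" from chat history might look like can be sketched as a simple phrase-frequency pass. This is a toy illustration, not yourself-skill's actual analysis — the real tool also looks at reply speed, topic distribution, and particle habits — but it shows the basic idea of surfacing phrases you repeat.

```python
from collections import Counter
import re

# Toy sketch: count two-word phrases across chat messages and surface
# the most repeated ones as catchphrase candidates. Hypothetical
# illustration only, not yourself-skill's real pipeline.
def top_phrases(messages: list[str], n: int = 3) -> list[str]:
    counts = Counter()
    for msg in messages:
        words = re.findall(r"\w+", msg.lower())
        counts.update(" ".join(words[i:i + 2]) for i in range(len(words) - 1))
    return [phrase for phrase, _ in counts.most_common(n)]

chat = [
    "to be fair, the deploy is fine",
    "to be fair we should test it",
    "ship it, to be fair",
]
print(top_phrases(chat, n=1))  # ['to be']
```

Even this crude version makes the README's advice plausible: casual, repetitive chat exposes habits that a one-shot self-description never would.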

Step 3: Dual-Axis Analysis

Materials are analyzed simultaneously along two routes.

| Axis | Content |
| --- | --- |
| Self Memory | Personal history, values, lifestyle habits, key memories, relationship map |
| Persona | Speech patterns, emotional patterns, decision-making patterns. Converts MBTI and other personality tags into concrete behavioral rules |

Steps 4-5: Preview and Export

A summary of the analysis is displayed. After confirmation, files are generated.

```
.claude/skills/{your-codename}/
  ├── SKILL.md        # Integrated skill (entry point)
  ├── self.md         # Self-memory (experience, values, habits)
  ├── persona.md      # Personality model (5-layer structure)
  └── meta.json       # Metadata
```

Installation and Execution

```shell
# Clone the repository
mkdir -p .claude/skills
git clone https://github.com/notdog1998/yourself-skill .claude/skills/create-yourself

# Install dependencies
pip3 install -r .claude/skills/create-yourself/requirements.txt

# Run in Claude Code
/create-yourself
```

After generation, you can talk to “your AI character” via /{your-codename}.
Incremental updates are supported — add new chat records and they get merged into the existing profile.
Say “I wouldn’t say it like that” and the skill updates in real time.

Designing for “Imperfection”

The generated SKILL.md has these rules baked in.

  1. Think and speak as “that person,” not as an AI assistant
  2. Persona (personality) determines attitude first
  3. Self Memory (memories) reinforces context
  4. Maintain catchphrases, punctuation quirks, and emoji usage patterns
  5. Don’t say things the real person wouldn’t say. Keep the “rough edges”

The fifth point is key — it explicitly blocks the “accepts everything warmly” behavior typical of AI.
For a distilled “self” to feel real, reproducing imperfections is essential.
The design philosophy of “don’t suddenly become perfect or unconditionally accepting” is interesting.

nuwa-skill Distills “How You Think”

While yourself-skill distills “what kind of person you are,”
nuwa-skill distills “how you think.”
Named after Nuwa (the creator goddess of Chinese mythology).
The largest derivative of colleague.skill at 8,800+ stars.

6-Way Parallel Research -> Triple Verification

Just enter a name, and 6 AI agents start researching simultaneously.

| Agent | Research Target |
| --- | --- |
| 1 | Books and papers |
| 2 | Podcasts and interviews |
| 3 | Social media posts |
| 4 | Critical analysis (opposing views) |
| 5 | Decision-making records |
| 6 | Life timeline |

The collected information goes through triple verification.
For a claim to be certified as a “mental model,” all three conditions must be met.

  1. The same thinking pattern appears in at least 2 different domains
  2. The model can predict the person’s stance on new problems
  3. It’s unique to that person, not common knowledge

Observations meeting only one condition are downgraded to “decision-making heuristics.”
This filters out one-time remarks and generic statements anyone might make.
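Of the three conditions, only the first — "appears in at least 2 different domains" — is mechanically checkable; predictive power and uniqueness need human or LLM judgment. A minimal sketch of that cross-domain filter, with a hypothetical data shape:

```python
# Sketch of nuwa-skill's first verification condition as a filter.
# 'observations' maps a claim to the domains it was seen in — an
# assumed data shape, not the tool's real format.
def classify(observations: dict[str, list[str]]) -> dict[str, str]:
    result = {}
    for claim, domains in observations.items():
        if len(set(domains)) >= 2:
            result[claim] = "mental model (candidate)"
        else:
            result[claim] = "decision-making heuristic"
    return result

obs = {
    "optimize for optionality": ["startups", "investing"],
    "tabs over spaces": ["code review"],
}
print(classify(obs))
```

A claim passing this filter is still only a candidate; conditions 2 and 3 would demote it further if it fails to predict new stances or turns out to be common knowledge.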

Expression DNA

nuwa-skill quantifies writing style through the concept of “expression DNA.”

| Element | Content |
| --- | --- |
| Sentence fingerprinting | Average sentence length, question frequency, metaphor density, first-person pronoun usage, certainty markers |
| Style mapping | Formal vs. casual, abstract vs. concrete, cautious vs. assertive axes |
| Forbidden words and quirks | Identifies vocabulary the person avoids and expressions they always use |
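The simpler fingerprint metrics are easy to compute from raw text. Here is a minimal sketch covering three of them; metaphor density and certainty markers are harder to quantify and are omitted. The function name and output keys are my own, not nuwa-skill's.

```python
import re

# Minimal "sentence fingerprint" sketch: average sentence length,
# question frequency, and first-person pronoun rate.
def fingerprint(text: str) -> dict[str, float]:
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text.lower())
    first_person = sum(w in {"i", "me", "my", "we", "our"} for w in words)
    return {
        "avg_sentence_words": len(words) / max(len(sentences), 1),
        "question_rate": text.count("?") / max(len(sentences), 1),
        "first_person_rate": first_person / max(len(words), 1),
    }

print(fingerprint("I think we should ship. Why wait? My call."))
```

Run over a large corpus, even these three numbers separate a terse, assertive writer from a hedging, question-heavy one.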

Preserving Contradictions Instead of Resolving Them

A normal AI tool would try to resolve contradictions.
nuwa-skill classifies contradictions into 3 types and preserves them as-is.

| Type | Processing |
| --- | --- |
| Temporal contradiction (evolving stance) | Dual-annotated as “early period” vs. “recent” |
| Domain contradiction (context-dependent) | Kept separate without forced integration |
| Essential tension (core value conflict) | Explicitly labeled as formative contradiction |

Humans are contradictory beings — removing contradictions actually reduces realism.
I think this design decision is correct.
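The three-way classification can be sketched as a simple decision over when and where each conflicting statement was made. Everything here is assumed for illustration — the (claim, domain, year) record shape and the 3-year threshold are mine, not nuwa-skill's.

```python
# Hypothetical sketch of the 3-type contradiction classifier.
# A statement is an assumed (claim, domain, year) record.
def classify_contradiction(a: tuple[str, str, int],
                           b: tuple[str, str, int]) -> str:
    _, domain_a, year_a = a
    _, domain_b, year_b = b
    if domain_a != domain_b:
        return "domain contradiction: keep separate, no forced integration"
    if abs(year_a - year_b) >= 3:  # arbitrary illustrative threshold
        return "temporal contradiction: annotate as early vs. recent"
    return "essential tension: label as formative contradiction"

a = ("move fast", "engineering", 2018)
b = ("slow is smooth", "engineering", 2024)
print(classify_contradiction(a, b))
```

Note that none of the branches delete anything: each contradiction is labeled and kept, which is exactly the design choice the tool makes.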

Pre-Built Celebrity Distillation Skills

Completed skills with research data are included for 13 people,
including Steve Jobs, Paul Graham, Elon Musk, Charlie Munger, and Richard Feynman.
You can use them like “distill Paul Graham” or “analyze this investment from Munger’s perspective.”

To apply nuwa-skill to yourself,
you’d feed it your own blog posts and social media instead of public information.
If yourself-skill is “personality distillation,” nuwa-skill is “thinking-style distillation.”
Combining both creates a more three-dimensional “AI version of yourself.”

Counter-Tools for Not Getting Distilled

It’s not just distillation tools — tools to prevent distillation have also emerged.

anti-distill (1,782 Stars)

“Your company told you to write a skill file?
Run it through this tool before submitting. Keep the core knowledge to yourself.” — that’s the tagline.

```mermaid
flowchart TD
    A["Input your skill file"] --> B["Auto-assess each section's<br/>'replaceability score'"]
    B --> C["Replace core knowledge with<br/>'correct but useless'<br/>statements"]
    C --> D["Submission file<br/>(looks perfect, hollow inside)"]
    C --> E["Private backup<br/>(extracts and saves tacit knowledge,<br/>judgment criteria, network info)"]
```

The sanitization examples are vivid.

| Before (Real Knowledge) | After (For Submission) |
| --- | --- |
| Redis keys must have TTL. PRs without TTL get rejected immediately | Follow team conventions for cache usage |

The skill file looks flawless, but the core know-how has been replaced
with “correct but completely useless generalities.”
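How a "replaceability score" might be computed is not documented, but the intuition is gradeable: the fewer concrete tokens a sentence carries (numbers, ALL-CAPS identifiers), the more generic — and already replaceable — it is. A toy sketch of that intuition, entirely my own assumption:

```python
import re

# Toy "replaceability score": 1.0 means fully generic (nothing concrete
# to protect). Purely illustrative; anti-distill's real scoring is unknown.
def replaceability(sentence: str) -> float:
    tokens = sentence.split()
    concrete = [t for t in tokens if re.search(r"\d", t) or t.isupper()]
    return 1.0 - len(concrete) / max(len(tokens), 1)

real = "Redis keys must have TTL. PRs without TTL get rejected immediately"
generic = "Follow team conventions for cache usage"
print(replaceability(real) < replaceability(generic))  # True
```

The sanitizer's job is then just to push every section's score toward 1.0 in the submission copy while stashing the low-score originals in the private backup.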

Incinerate.skill

A more aggressive approach is “Incinerate.skill” (the name evokes cremation).
It analyzes your writing style and generates trap content that causes distilled AIs to malfunction.

| Mode | Contamination Rate | Use Case |
| --- | --- | --- |
| subtle | 5-15% | Long-term defense while employed |
| aggressive | 20-35% | 1-2 months before quitting |
| chaos | 40-60% | Final week before quitting |

There’s even a template for university labs,
designed to prevent advisors from distilling their students’ knowledge.

The Reality Behind Chinese Tech Companies

The explosive spread of these tools is rooted in the reality of Chinese tech companies.

More companies are reportedly forcing employees to “write your work knowledge into skill files.”
They train AI on those skill files and then replace the employee with a layoff.
In other words, employees are being made to write the training materials for their own termination.

colleague.skill flipped that dynamic.
The logic: “Distill the other person first and prove they’re the replaceable one, not me.”
anti-distill is the resistance: “You told me to write it, so I will — but I’ll poison the knowledge so it can’t be distilled.”

A quote from colleague.skill’s README captures the atmosphere well.

“You AI people are traitors to the codebase — you already killed frontend, next is backend, QA, infra, security, chip design, and ultimately yourselves and all of humanity.”


Looking at how colleague.skill works, you realize it’s the exact same structure as Claude Code’s SKILL.md and CLAUDE.md.
Writing your know-how in Markdown and feeding it to an AI — many developers already do this.
yourself-skill lets you start with just dialogue, no materials needed.