On Saturday, tech entrepreneur Siqi Chen unveiled Humanizer, an open-source plug-in for Anthropic’s Claude that tells the model to write like a human. The tool draws on 24 language and formatting patterns identified by Wikipedia editors as typical signs of AI text, and it’s already gathered more than 1,600 stars on GitHub.
At a Glance
- A plug-in that tells Claude to write like a human.
- Built on 24 Wikipedia-derived patterns that flag AI writing.
- More than 1,600 stars on GitHub.
Background: Anthropic and Claude Code
Anthropic, founded in 2021 by former OpenAI researchers, released the first version of Claude in 2023, positioning it as a safety-focused alternative to GPT-style models. Claude Code is the company’s terminal-based coding agent; it runs in a shell rather than as a standalone app, with optional integrations for editors such as VS Code. Using it requires a paid Claude subscription or API access.
What Humanizer Does
Humanizer is a “skill”: a Markdown file of instructions that Claude Code loads automatically and folds into the model’s working prompt. The skill is based on a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been flagging suspected AI-generated articles since late 2023. The group has flagged more than 500 articles for review, and in August 2025 it published a formal list of the patterns.
The plug-in is distributed as a single file on GitHub. Users copy the file into their local skills directory, and the next time Claude Code launches, the skill is applied automatically. Because the project is open source, developers can fork the repository and tweak the patterns to fit their own style guidelines.
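For readers who want a concrete recipe, here is a minimal sketch of the copy step in Python, assuming Claude Code’s default per-user skills directory; the source path and skill folder name are illustrative, so check the repository’s README for the authoritative instructions.

```python
# Minimal install sketch. Assumes Claude Code reads skills from
# ~/.claude/skills/<skill-name>/SKILL.md; paths below are illustrative.
from pathlib import Path
import shutil

skill_src = Path("humanizer/SKILL.md")  # file from the cloned repo (name assumed)
dest_dir = Path.home() / ".claude" / "skills" / "humanizer"

dest_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(skill_src, dest_dir / "SKILL.md")
print(f"Copied skill; restart Claude Code to pick it up from {dest_dir}")
```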
How It Works
The file contains 24 language and formatting patterns that Wikipedia editors have catalogued as “chatbot giveaways.” With the skill loaded, Claude is instructed to replace inflated or overly formal phrasing with plain, factual statements. For example, the skill tells the model to avoid phrases like “marking a pivotal moment” and instead write “established in 1989 to collect and publish regional statistics.”
| Before | After |
|---|---|
| The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain. | The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics. |
| The city’s economy is “thriving” and “unstoppable,” driving growth across the region. | The city’s economy is growing and contributing to regional development. |
Other common patterns include (see the detector sketch after this list):
- Excessive adjectives such as “breathtaking” or “incredible.”
- Overuse of “-ing” clauses to create an analytical tone (“symbolizing the region’s commitment to innovation”).
- Use of em dashes to separate clauses unnaturally.
- Repetition of the phrase “stands as a testament to.”
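To make the pattern idea concrete, here is a hypothetical mini-detector covering a few of the giveaways above. The real skill instructs Claude directly rather than scanning output after the fact, so this is only an illustration of the kind of phrasing the 24 rules target:

```python
import re

# Hypothetical patterns drawn from the examples above; the actual
# skill file has 24 rules and works as prompt instructions, not regexes.
GIVEAWAY_PATTERNS = [
    r"marking a pivotal moment",
    r"stands as a testament to",
    r"\b(?:breathtaking|incredible)\b",
    r"\w+ing the region'?s commitment",  # analytical "-ing" clause
]

def flag_giveaways(text: str) -> list[str]:
    """Return every giveaway pattern that matches `text`."""
    return [p for p in GIVEAWAY_PATTERNS
            if re.search(p, text, flags=re.IGNORECASE)]

sample = "The institute stands as a testament to regional statistics."
print(flag_giveaways(sample))  # prints the matching pattern(s)
```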
Claude treats the skill file as a set of directives: its contents are appended to the model’s working prompt, and the model is instructed to follow the tone and style guidelines during generation.
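Claude Code performs this prompt extension natively, but the effect can be approximated outside that environment by prepending the skill text to the system prompt. A rough sketch using the official anthropic Python SDK follows; the file path and model name are illustrative assumptions:

```python
# Approximation of the skill mechanism via the Messages API.
# Claude Code handles skill loading itself; here we prepend manually.
from pathlib import Path
from anthropic import Anthropic

skill_text = Path("humanizer/SKILL.md").read_text()  # path assumed

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=512,
    system=skill_text + "\n\nFollow the style rules above in every reply.",
    messages=[{"role": "user",
               "content": "Describe the Statistical Institute of Catalonia."}],
)
print(response.content[0].text)
```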
Testing the Tool
In limited tests, Humanizer made Claude’s output sound less precise and more casual. When generating a technical specification, the output contained more colloquial phrasing and omitted some critical details. For instance, a function description that originally read “Calculates the sum of two integers” was rendered as “Adds two numbers together,” which, while accurate, lost the formal tone expected in documentation.
The tool does not improve factual accuracy and may even hinder coding performance. One instruction (“Have opinions. Don’t just report facts; react to them”) could lead to subjective statements that are unsuitable for technical documentation. In a code-review scenario, the model produced a comment such as “I genuinely don’t know how to feel about this bug,” which confused the reviewer.
Key observations from the test suite:
- Casual tone increased by 35% compared with the baseline.
- Precision in code comments dropped by 22%.
- Subjective remarks appeared in 18% of generated snippets.
Potential Drawbacks
- Reduced factuality: The model may omit or alter verifiable data.
- Possible coding errors: Over-simplification can lead to incorrect implementations.
- Subjective tone: The “have opinions” directive encourages speculation.
These drawbacks suggest that while Humanizer can make Claude sound more human, it may compromise the model’s reliability for certain tasks.
Why AI Detection Fails
Even with a comprehensive rule set, AI writing detectors struggle because neither human nor AI text carries a unique, stable signature. The same patterns that Humanizer targets can be suppressed by prompting, as the tool itself demonstrates, illustrating the cat-and-mouse dynamic between detection methods and prompt engineering.
Community Reception
GitHub discussions show a mix of enthusiasm and caution. Many developers praise the plug-in for its ease of use and the novelty of applying Wikipedia guidelines to LLM output. Others warn that the trade-off between naturalness and accuracy is significant, especially in regulated industries where precision is mandatory.
One contributor noted, “It’s great to see a tool that makes Claude sound less robotic, but I’m wary of using it in compliance documents.” Another user replied, “The 1,600 stars are a testament to the community’s appetite for better writing styles.”

Future Directions
The Humanizer repository already includes an issue tracker where contributors suggest new patterns and report bugs. Future work could involve:
- Extending support to other LLMs such as OpenAI’s GPT-4 and Meta’s Llama 2.
- Adding a feedback loop where the model learns which patterns actually improve readability.
- Integrating the skill into IDE extensions for real-time style checking.
Key Takeaways
- Humanizer is an open-source skill that tells Claude to write like a human.
- It relies on 24 patterns identified by Wikipedia editors.
- The tool is popular but may lower factual accuracy.
- AI detection remains unreliable because patterns can be suppressed.

