An AI Usage Policy for Zettlr
For the past three years, society has witnessed the introduction of generative artificial intelligence, in the form of generative pre-trained transformers (GPTs), often referred to simply as “Large Language Models” (LLMs), into our everyday lives. This new technology has profoundly impacted how we work and interact, and it will continue to shape society for many years to come.
What started with a press release by OpenAI in November 2022 has by now spiraled into a race to the bottom among large technology corporations like Google, Meta, and Microsoft, and has led to a sprawling ecosystem of locally run LLMs, primarily centered around the tool llama.cpp. Almost everyone who owns a computing device now uses one or more of these tools at least once a week. It is therefore important for everyone, including us at Zettlr, to develop ways of dealing with this new technology in a productive and forward-looking way.
Having observed how people use LLMs and tested these new tools ourselves over the past few years, we believe it is time for Zettlr to adopt its own AI Usage Policy.
Our AI Usage Policy is centered around two principles: honesty and inclusion. We do not wish to ban the use of AI, since that is simply not a viable stance in 2026. At the same time, we want to ensure that these new tools are used honestly and in line with ethical and legal principles.
In this blog post, we introduce the new AI Usage Policy, explain where to find the specifics, and, most importantly, describe the spirit behind it.
Scope
The scope of our policy pertains only to LLMs. Artificial intelligence has been around for as long as we have had computers, so there is nothing new about the availability of AI in general. What is new is the arrival of GPT models based on the transformer architecture, and with them, new opportunities for people of all ages and all levels of technological prowess to participate in the development of Open Source software. We therefore want to properly regulate the use of generative models that allow users to generate both natural language and code at scale.
Furthermore, this policy only pertains to the Zettlr ecosystem; more specifically, to our codebases and to communication within the community. On the one hand, we want to be clear about how we envision the use of LLMs in generating code for Zettlr repositories. On the other hand, we want to ensure that the ability of LLMs to generate natural language does not interfere with human communication.
Both parts of this policy are now outlined in two separate documents: all regulations pertaining to software code can be found in the CONTRIBUTING.md file in the main repository, while all regulations pertaining to communication are contained in the CODE_OF_CONDUCT.md file.
AI Usage Policy: Code Contributions
The core object of any AI Usage Policy in the Open Source space is the code itself. With the advent of coding agents, it has become easy to author changes to a codebase, sometimes even without ever touching the code itself. This can make it easier for less tech-savvy people to start contributing to Open Source, but it also comes with costs. First, it abstracts the developer away from the code. For those who already know a codebase inside and out, this can lead to tremendous time savings. But for someone with little coding experience, using coding agents can be the start of a dependence on the tool to get any work done. Second, despite improvements in context availability for coding agents, LLMs still have a hard time correctly connecting the logic of a piece of software to the code changes necessary to implement a feature. This can lead to overly verbose or inefficient code that degrades the user experience. Third, since LLMs have been trained on vast amounts of often unethically or illegally sourced text, the more an agent generates, the higher the risk of accidentally introducing copyright issues.
Against this backdrop, Zettlr’s AI Usage Policy reconciles the increasing prevalence of “agentic coding,” often also called “vibe coding,” with the necessity of maintaining a performant codebase that serves its users well. The AI Usage Policy that applies specifically to contributing code to Zettlr repositories can be found in a separate section of the CONTRIBUTING.md file. Essentially, it boils down to this: you are free to use as much or as little LLM-generated code as you please, but by opening a PR you take full responsibility for the code, and you confirm that you personally understand every single line the tool has generated and have ensured it works as advertised.
We require every contributor to fully own the code they suggest for inclusion in the Zettlr codebase, without exception. If some code turns out to be buggy or inefficient, this is never the fault of the coding tool, but of the person operating it. Since machines cannot take responsibility, we thereby pre-empt any ambiguity about accountability. You open a PR: you are responsible. Full stop. It is never the code of some coding tool; it is always your code.
Besides this clear demarcation of responsibility and accountability, which rests exclusively with the developer and no third party, we also expect full disclosure of AI tool usage. If you used a coding agent to help you code, you are required to disclose this when you open a PR. A corresponding new section has been added to the PR template for you to fill in. Please do not go into excruciating detail, but do provide specifics where applicable. Examples of AI disclosure statements could be:
- “AI has been used to generate parts of this code. I have manually verified all generated code for accuracy and performance.”
- “No AI has been used to generate any code in this PR.”
- “AI has been used to understand the code base and find the correct entry points for implementing this fix.”
- “AI has been used to suggest solutions for fixing this issue.”
Those are just examples; we have no specific wording policy. For a sense of what we expect, take some inspiration from how academia has tackled this issue: there are some great examples that demonstrate the why and how of declaring AI usage (try searching for “AI disclosure statements”).
Note, however, that we may ask you to be more specific. In the vast majority of cases, a generic disclosure will be sufficient, but sometimes, especially in cases of questionable code quality, it may be important that you also specify the tools used and which parts of the code exactly were generated by AI. In those cases, we will leave a comment on your PR. Do not worry about this when first opening your PR.
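As an illustration, a disclosure section in a PR template could look something like the following sketch. This is purely hypothetical: the exact wording and checkboxes in the actual template may differ, so always fill in whatever the template in the repository asks for.

```markdown
## AI Disclosure

<!-- Please disclose any use of AI tools in preparing this PR. -->

- [ ] No AI tools were used for this PR.
- [ ] AI tools were used (please specify below).

**Tools and scope (if applicable):**
<!-- e.g., "A coding agent was used to draft the initial implementation;
     I have manually reviewed all generated code for accuracy and
     performance." -->
```

A checkbox plus a free-text field keeps generic disclosures quick while leaving room for the specifics that matter when code quality is in question.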
This policy intends to set clear expectations for contributors to Zettlr. We are deliberately vague about where and how you may use LLMs to generate code, because we do not believe there is a specific set of tasks that should be regulated. Instead, we ask you to apply reason. We have already seen some “vibe coded” PRs on the repository, but the issue was never just the generated code; it was primarily the contributor’s stance towards our team. Some people have taken the ability to use coding agents as a confidence boost, and trying to convey the changes necessary before a PR could be merged was sometimes met with undue resistance.
This leads to the second part of the intention behind this policy: act in accordance with ethical principles. By this we primarily mean the principle of “Respect for Persons” as outlined in ethical frameworks such as the Belmont Report. We expect decency from every contributor, and we will not let you defer accountability for your code to some coding agent. Even if you have used an LLM to generate the code in your PR, you are still communicating with humans who will make the judgment.
AI Usage Policy: Communication
This brings us to the second part of this AI Usage Policy: the advent of LLMs has not just increased the speed of writing code, but has also transformed human communication. We are witnessing more and more people on the repository and within the broader community who apparently pass instructions to an LLM and paste its generated text as their own answer in, say, a comment on GitHub.
We recognize that for many people, English is not their first language, and the fact that English is the primary language within most parts of the community can seem daunting. However, we believe that we all have unique ways of expressing ourselves. If you let an LLM generate responses for you, they may be in perfect English, but they also bulldoze your own personality.
We would rather see more grammatical errors and typos on the repository and less LLM-generated text. You can find the specifics in the CODE_OF_CONDUCT.md file, but in essence, we would like to encourage you to formulate your own thoughts, even if they are not in perfect English. We treat mocking people for their mistakes as a serious infraction of our code of conduct and will act on it swiftly, because nobody should feel bad about their language skills. The solution is never to let LLMs write responses in perfect English, but to enforce inclusive behavior.
If you take a look at some older issues, you will see that users have reported bugs in languages other than English. Our answer was always to kindly ask them to translate the report, but never to close it (a report does not become invalid just because it uses the “wrong” language), mock them, or engage in any other form of derogatory behavior. Communication is not a binary “do it right the first time” matter. It has nuances, and we expressly encourage “mistakes.”
Unlike the policy that applies to code contributions, we do not enforce this part strictly. Again, we recognize that for some people, even our firm commitment to preventing any form of harassment might not be enough to give them the confidence to participate in discussions without help. And if the choice is between an LLM-generated response and no response at all, we choose the former. But we would like to encourage you to try to communicate without the help of LLMs.
Final Thoughts
This is just a first stab at an AI Usage Policy. There are likely cases we have not yet considered, and new problems will appear on the horizon as society adjusts to the existence of generative models. But one has to start somewhere, and this is where we decided to start.
We may also have missed things while drafting this policy, and there might be a need to discuss its details. If you believe some things require further discussion, please do not hesitate to share your concerns and thoughts on our community forum or Discord server.