
RFC to document de-facto LLM tool usage policy#2

Open
yaahc wants to merge 4 commits into master from
ai-tool-usage-policy

Conversation

Owner

@yaahc yaahc commented Mar 27, 2026

Draft of RFC for LLM usage policy

Rendered

  • When commenting on this RFC, if you have an issue with it, please clearly state that you have an objection, which part of the RFC you're objecting to, and which shared goal it conflicts with. I will try to address and resolve all comments with clearly stated objections tied to explicitly stated goals. I will likely prioritize objections from council members, since theirs are binding, but I will attempt to treat all objections from active project members as similarly binding and address them all.
    • If you share a comment with a concern or a preference, I will still attempt to engage, but I may choose to ignore it if it is too difficult to determine what you're getting at, or if I do not agree with what I interpret as your implied objection.
  • The goal you tie your objection to does not need to be one I listed. If I agree with it, I will adopt it and add it to the RFC; if I do not, I will share my reason why not so we can discuss it.

Strawman example of the format of feedback I'd like to receive:

I object to this section in the RFC:

"An important implication of this policy is that it bans agents that take action
in our digital spaces without human approval, such as the GitHub @claude
agent
"

as it conflicts with our goal "Building an inclusive community where all feel welcome and respected." This section will exclude contributors who do not have sufficient time to contribute themselves and would make them feel unwelcome.


@nikomatsakis nikomatsakis left a comment


Left a few comments. In general, I really like this! I'll give it another read later.

opportunities for new contributors to get familiar with the codebase. Whether
you are a newcomer or not, fully automating the process of fixing this issue
squanders the learning opportunity and doesn't add much value to the project.
**Using LLM tools to fix issues labelled as "E-easy" is forbidden**.

I'm wondering about this. I agree with the sentiment. And yet, consider an example: somebody asks an LLM how to approach a bug. They review the diff. Then they do a `git reset --hard HEAD` and write it themselves, working through it. Is this ok?

This is approximately the process I follow, a lot of the time, when entering a new codebase.

TL;DR I think research should always be ok.

Owner Author


Then they do a `git reset --hard HEAD` and write it themselves, working through it. Is this ok?

I definitely feel that this would be permitted under the policy. This goes along with the split of "it's okay to use these tools to learn" but not "to do the work for you". The goal of the E-easy issues is for people to learn; if they use LLMs to do that, then they've still accomplished the goal, even if it's in a way some people would personally object to.

request description, commit message, or wherever authorship is normally
indicated for the work. For instance, use a commit message trailer like
Assisted-by: <name of code assistant>. This transparency helps the community
develop best practices and understand the role of these new tools.
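
For concreteness, a commit message using the trailer convention quoted above might look like the following sketch; the subject line and tool name are illustrative placeholders, not examples from the RFC itself:

```
Fix off-by-one in diagnostic line numbering

The renumbering loop started at 1 while the span table is 0-indexed.

Assisted-by: ExampleCodeAssistant
```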

I feel like we need to have a policy for team members and contributors on how to receive such contributions. Quite frankly, after this weekend, I have zero interest in transparency unless I have trust that it will be received in the spirit in which it was intended.

Owner Author


I'm curious how you imagine such a policy looking. I'm assuming that with disclosure we'll probably end up with many reviewers who simply skip over or refuse to engage with PRs that include LLM-generated content, and another set of reviewers who actively engage with such PRs. Especially given how deeply rooted in values many of these objections are, I can't help expecting this to still bleed over into other interactions, and I'm not sure how much a policy could meaningfully limit that :(.

yet to be answered. Our policy on LLM tools is similar to our copyright policy:
Contributors are responsible for ensuring that they have the right to
contribute code under the terms of our license, typically meaning that either
they, their employer, or their collaborators hold the copyright. Using LLM tools

We have to be careful when it comes to legal wording. I would want to consult the Foundation's counsel for suitable wording -- but part of their argument was that LLM-generated text may indeed not be copyrightable, but rather public domain -- and that is ok.

Owner Author


I'll go ahead and add a TODO here and link to the comment from the Foundation that you linked in Zulip earlier today. I already knew this was not consistent with what we've heard from the Foundation but forgot to include that here before pushing.

@yaahc yaahc force-pushed the ai-tool-usage-policy branch from a192c2c to f13115f Compare March 27, 2026 22:53

@Kobzol Kobzol left a comment


We can definitely make the motivation part much stronger based on various reviewers' experiences on GitHub and Zulip 😆

I like the structure of the document, and the fact that it tries to document existing policy, without also directly jumping into a new policy. Left some comments.


The Rust Project's policy is that contributors can use whatever tools they
would like to craft their contributions, but there must be a **human in the
loop**. **Contributors must read and review all LLM-generated code or text

I'd personally go even further and say that text and communication (PR comments, etc.) must not be AI-generated, except for edge cases like translation. We want to be talking to humans, not machines.

Owner Author


Even for translation I'd still prefer that they include the original text, but yeah, I don't mind if they share an AI-generated translation alongside it so that each reader doesn't have to create their own translation and waste even more energy.

Owner Author


should be resolved by 3335788

the author and is fully accountable for their contributions. Contributors
should be sufficiently confident that the contribution is high enough quality
that asking for a review is a good use of scarce maintainer time, and they
should be **able to answer questions about their work** during review.

Relatedly, they should be able to answer questions about what they submitted (which might not be their work if it was generated by AI 🙃) without using an LLM. If someone asks you "why did you do X" and you have to go ask the AI for the answer, you don't understand what you did.

Owner Author


should be resolved by 3335788

rust-lang/rust are generated. Contributors should note tool usage in their pull
request description, commit message, or wherever authorship is normally
indicated for the work. For instance, use a commit message trailer like
Assisted-by: <name of code assistant>. This transparency helps the community

I don't think that "Assisted-by: claude" actually helps reviewers all that much (though of course it's better than guessing whether AI was used). I'd suggest that contributors explain how they used AI, perhaps even by sharing links to the conversation or the prompts, as that can be genuinely useful when reviewing the work. It also serves as feedback for the contributors themselves: if the answer to "how did you use AI?" is "I pointed it at an issue and told it to fix it", then that is clearly against this policy.

Owner Author


This should be resolved by 0fa7057

cc @nikomatsakis since I know you've been working on policies / PR templates for LLM tool usage disclosure and integrations into the review process. I'm curious if there's anything else you'd suggest adding here.

@yaahc yaahc force-pushed the ai-tool-usage-policy branch from f13115f to 2426409 Compare March 31, 2026 21:22

### Disclaimer

There is not yet a full consensus within the Rust org about when/how/where it

Suggested change
- There is not yet a full consensus within the Rust org about when/how/where it
+ There is not yet a full consensus within the Rust org about when/how/where/whether it

"when/how/where" doesn't feel like it does a good job of covering the view of "not at all"; adding "whether" feels like it helps encompass that.

Comment on lines +39 to +40
value in LLMs; many others feel that its negative impact on society and the
climate are severe enough that no use is acceptable. Still others are working

Suggested change
- value in LLMs; many others feel that its negative impact on society and the
- climate are severe enough that no use is acceptable. Still others are working
+ value in LLMs; many others feel that their negative impact on society,
+ climate, and existential risk are severe enough that no use is acceptable. Still others are working

s/its/their/ to match the plural "LLMs". ("its" would match "AI", if we used that.)

Also enumerating an additional risk that informs people's perspectives.

-->

- https://github.com/rust-lang/rfcs/pull/3936
- Review of open source LLM tool use policies, ordered strictness, top is most strict, bottom least. Copied from [leadership-council#273 comment](https://github.com/rust-lang/leadership-council/issues/273#issuecomment-4051188890)

This list seems like it should be moved to the "prior art" section.

Comment on lines +96 to +97
To ensure sufficient self review and understanding of the work, it is strongly
recommended that contributors write PR descriptions themselves. The description

Suggested change
- To ensure sufficient self review and understanding of the work, it is strongly
- recommended that contributors write PR descriptions themselves. The description
+ To ensure sufficient self review and understanding of the work, contributors must write PR descriptions themselves. The description
