Two definitions, both common, both useful, both wrong if you mix them up:
AI-using. A team where some people sometimes use AI for some tasks. The marketing site mentions Cursor. Engineers have ChatGPT subscriptions reimbursed. There's an "AI tools" Slack channel where occasionally someone shares a prompt. Most companies are AI-using right now.
AI-first. A team where AI is in the critical path of how work happens. Standups are synthesized, not written. Code review starts with an AI pass. Specs are drafted by AI then heavily edited by humans. Customer support uses AI as the first responder, not just the suggestion engine. Removing AI from the workflow would meaningfully break the team's day.
These look similar from the outside. They're not the same.
The marker isn't tooling. It's which artifacts are AI-touched first. In an AI-using team, humans write things and AI helps. In an AI-first team, AI writes things and humans curate, validate, edit, decide.
This matters for two reasons.
First, the skills that matter are different. AI-using teams reward speed and tool fluency. AI-first teams reward editorial judgment, prompt design, and the ability to spot when the model is confidently wrong. These are different career ladders. We hadn't internalized this until we tried to assess "AI proficiency" with a single rubric — it kept producing nonsense, because the same engineer can be excellent at one and average at the other.
Second, and harder: existing performance evaluation systems implicitly assume AI-using. "How many tickets did you close?" assumes the engineer wrote the ticket close. "How many lines of code shipped?" assumes the engineer authored those lines. Once AI is the first author, these metrics measure something else entirely — they measure either curation throughput (which is real and valuable) or AI billing, depending on how cynical you want to be.
RUQA is built for the AI-first case. The synthesis assumes AI is reading the same signals you are; the capability scoring assumes the human contribution is editorial judgment; the triangulation assumes you have to verify human-claimed work, because the human didn't write most of the artifact. These choices don't make sense for an AI-using team. For an AI-first team, they're the only choices that make sense.
If you're not sure which one you are: look at what got committed to your repo this week. If a human wrote the first draft and AI helped, you're AI-using. If AI wrote the first draft and a human pruned, edited, and shipped, you're AI-first. There's no shame in either, but the tooling that fits one doesn't fit the other.
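That commit test can be automated if your team already tags AI-drafted commits. Nothing below is a standard convention: the `AI-Drafted` trailer name, the 50% threshold, and the `Commit` record are all assumptions for illustration — a minimal sketch of the heuristic, not a definitive classifier:

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    message: str
    # Hypothetical trailers parsed from the commit message,
    # e.g. {"AI-Drafted": "true"} — assumes a team convention.
    trailers: dict = field(default_factory=dict)

def classify_team(commits, threshold=0.5):
    """Apply the rough test from the text: if most first drafts
    this week were AI-written, call the team AI-first."""
    if not commits:
        return "unknown"
    ai_drafted = sum(
        1 for c in commits
        if c.trailers.get("AI-Drafted", "").lower() == "true"
    )
    return "AI-first" if ai_drafted / len(commits) > threshold else "AI-using"

# One week of commits, classified by who wrote the first draft
week = [
    Commit("add auth flow", {"AI-Drafted": "true"}),
    Commit("fix typo"),
    Commit("refactor billing", {"AI-Drafted": "true"}),
    Commit("draft spec", {"AI-Drafted": "true"}),
]
print(classify_team(week))
```

The point isn't the script; it's that the signal has to be recorded at commit time. If first authorship isn't captured anywhere, no after-the-fact metric can recover it.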