Every "AI tools survey" report I've read in the past year has the same flaw. They ask engineers which tools they use. They count usage. They publish percentages. And then six months later the chart is wrong, because the tool the chart was about has been deprecated, acquired, or replaced.
This is fine for marketing reports. It's catastrophic for performance evaluation.
When you decide who gets a raise based on which tools someone is "proficient" in, you've baked tool churn directly into your compensation system. The engineer who became a Cursor expert in early 2025 looks like a star until Cursor v3 changes its model and everyone pivots.
We took a different approach. Instead of asking "which tools," we ask "which meta-skills?" The list is short and durable:
- Prompt design. Can you write the input that gets the right output? Doesn't matter what model.
- Edge case detection. Can you spot the inputs the model will fumble? Universal.
- Decision rationale. Can you explain why you picked this approach? AI accelerates the work; only humans defend the choices.
- System decomposition. Can you split a large problem into solvable chunks? Older than software.
These don't go stale. The model changes; the skill of using it well doesn't.
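
To make "durable" concrete, here's a minimal sketch of how a rubric like this can be encoded. Everything in it is illustrative; the names `MetaSkill` and `Evidence` are mine, not RUQA's actual schema. The point is structural: the skills are a fixed, closed set, and each piece of work evidence gets tagged with a skill, never a tool.

```python
from dataclasses import dataclass
from enum import Enum

class MetaSkill(Enum):
    """The four durable dimensions. Tool names never appear here."""
    PROMPT_DESIGN = "prompt_design"
    EDGE_CASE_DETECTION = "edge_case_detection"
    DECISION_RATIONALE = "decision_rationale"
    SYSTEM_DECOMPOSITION = "system_decomposition"

@dataclass
class Evidence:
    """One real output: a shipped change, a caught bug, a design memo."""
    skill: MetaSkill     # which meta-skill the work demonstrates
    description: str     # e.g. "caught the empty-batch crash before release"
    weight: float = 1.0  # reviewer-assigned impact, 0.0 to 1.0
```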
The catch: you can't measure meta-skills with a survey either, because people overrate themselves. So RUQA scores them from real outputs (what you actually shipped, which decisions you made, which edge cases you caught) rather than from whatever tool the work happened in. Tool-agnostic by construction.
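
Continuing the sketch above, a scoring pass over that evidence might look like this. Again, this is a reconstruction under my own assumptions, not RUQA's implementation. What matters is the shape of the function: it aggregates per skill from recorded outputs, and no tool name appears anywhere in the signature.

```python
from collections import defaultdict

def score_meta_skills(evidence: list[Evidence]) -> dict[MetaSkill, float]:
    """Sum reviewer-weighted evidence per skill.

    Deliberately tool-agnostic: the same shipped output counts the same
    whether it came out of Cursor, a plain terminal, or next year's IDE.
    """
    totals: defaultdict[MetaSkill, float] = defaultdict(float)
    for item in evidence:
        totals[item.skill] += item.weight
    # Report every skill, even with zero evidence, so a gap shows up as
    # an explicit 0.0 instead of a silently missing key.
    return {skill: totals[skill] for skill in MetaSkill}

# Hypothetical review cycle: two pieces of evidence, no tool recorded.
review = [
    Evidence(MetaSkill.EDGE_CASE_DETECTION, "caught empty-batch crash in the retry path", 0.8),
    Evidence(MetaSkill.DECISION_RATIONALE, "wrote the memo defending a queue over a cron job", 0.6),
]
print(score_meta_skills(review))
```

Notice what this design makes impossible: when the next editor wave hits, nothing in the evidence or the scores needs to change.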
Track tools if you're a marketer. Track meta-skills if you want to know who actually understands the work.