Or: before you blame the players, check the rules.
I once watched a team of smart, well-intentioned people spend six months optimizing a metric that didn’t matter. Not because they were confused about what actually mattered. Because the metric was what got discussed in leadership reviews. Everyone on the team could have told you what would move the business. They also knew it wouldn’t show up on the dashboard their boss checked every Monday.
The system was working exactly as designed. It was just designed wrong.
When leadership noticed the stagnation, the response was predictable: restructure the team, replace the manager, bring in someone with “fresh perspective.” Six months later, the new manager’s team was optimizing the same metric. The metric hadn’t changed. The dashboard hadn’t changed. The leadership review hadn’t changed.
There’s a name for this pattern. Drug policy researchers call it the kingpin strategy: the DEA spent decades and billions of dollars capturing cartel leaders, declaring victory at the press conference, then watching a new leader step into the role within weeks. The trafficking volume never budged. Overdose deaths didn’t drop. The demand that sustains the market doesn’t care who’s running supply.
Organizations run the same play when they fire the “bad” manager, reorg the “dysfunctional” team, or bring in consultants to “fix the culture.” The dysfunction regenerates, because nobody changed the incentive structure that produced it.
The better question isn't "who's causing this?" It's: what game are they playing, and what does winning look like?
Rational Dysfunction
Framing organizational problems as game design problems sounds cynical. It's the opposite: assuming people are rational within their constraints is more respectful than assuming they're stupid or malicious. The cynical move is blaming individuals for systemic outcomes.
When a product team ships features nobody uses, the instinct is to question the PM’s judgment. But look at the incentive structure: shipping velocity is the metric. Customer adoption isn’t. The team’s performance reviews, promotion cases, and quarterly updates all reference how many features they delivered. Not how many people used them.
Of course they keep shipping. The game rewards shipping.
This is Goodhart’s Law at organizational scale: when a measure becomes a target, it ceases to be a good measure. The metric was supposed to be a proxy for value creation. The moment it showed up on a dashboard, it became the goal itself.
Performance reviews are the same game at the individual level. They reward confidence. The person who says “I was wrong about the Q2 strategy, here’s what I learned, here’s what I’d do differently” gets rated lower than the person who says “my strategy is working” and presents three cherry-picked graphs. The first person is being honest. The second is playing the game correctly.
I’ve written before that “the organizational response to being wrong is to get more consistent about being wrong.” That describes a stable equilibrium: when the cost of admitting error exceeds the cost of being wrong, every rational person stays wrong. Nobody breaks the pattern alone, because the system punishes whoever goes first.
The Games You’ll Recognize
Start with promotions. In most companies, senior roles are limited and the criteria for earning one are vague. When the rules are unclear and the slots are scarce, rational people optimize for what they can control: visibility and perception.
Two PMs are competing for the same senior role. One takes over an adjacent team’s initiative and positions it as their own win. The other quietly builds infrastructure that makes three teams more effective. When the promotion committee meets, the first PM has a clean narrative: “I led this initiative and here are the results.” The second PM’s contribution is invisible by design, spread across everyone else’s numbers.
The first PM gets promoted. Everyone watching just learned what the organization actually rewards. Margaret Heffernan calls this the Superchickens problem: selecting for individual performance produces more aggressive teams, not better ones.
Empire building works the same way. When the path to VP runs through headcount, every director grows their team regardless of whether the work requires it. More people, bigger title, bigger budget. The thought experiment most leaders avoid: “What would happen if this team didn’t exist?”
Then there’s the game between heroics and prevention. Fixing a production outage at 2am is visible, dramatic, and gets you thanked in the all-hands. Redesigning the system so the outage never happens is invisible. Nobody thanks you for the crisis that didn’t occur. Game theorists call this a stag hunt: everyone is better off cooperating on the long-term goal, but each person is tempted by the smaller, guaranteed win they can capture alone. Without explicit incentives for prevention, rational people choose heroics every time.
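The stag hunt structure can be made concrete with a toy payoff table. This is an illustrative sketch with invented numbers, not anything from the essay: "prevent" stands for jointly investing in the redesign, "heroics" for grabbing the visible 2am fix. Checking best responses shows why both outcomes are stable, which is exactly the trap.

```python
# Toy stag-hunt payoffs (illustrative numbers): joint prevention pays best,
# but heroics is a guaranteed win you can capture alone.
PAYOFFS = {
    ("prevent", "prevent"): (4, 4),  # the outage never happens; everyone wins big
    ("prevent", "heroics"): (0, 3),  # you invested alone; the other got the glory
    ("heroics", "prevent"): (3, 0),
    ("heroics", "heroics"): (3, 3),  # smaller, but guaranteed
}

def best_response(opponent_move, player):
    """Return the move maximizing this player's payoff against opponent_move."""
    def payoff(my_move):
        profile = (my_move, opponent_move) if player == 0 else (opponent_move, my_move)
        return PAYOFFS[profile][player]
    return max(("prevent", "heroics"), key=payoff)

def is_nash(profile):
    """A profile is a Nash equilibrium if each move is a best response to the other."""
    return (best_response(profile[1], 0) == profile[0]
            and best_response(profile[0], 1) == profile[1])

equilibria = [p for p in PAYOFFS if is_nash(p)]
print(equilibria)
```

Both ("prevent", "prevent") and ("heroics", "heroics") come out as equilibria: once everyone is doing heroics, no individual gains by unilaterally switching to prevention, which is why the pattern persists without an explicit incentive change.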
Organizations run dozens of these games simultaneously. The skill is learning to see them: when you encounter behavior that looks irrational, ask what game rewards it.
Designing Better Games
The instinct when you see a broken game is to add rules. More process, more oversight. This usually makes things worse. Rules create new games: who can navigate the process fastest, who has the relationship to get exceptions.
Better to change the structure of the game itself.
Two kids are splitting a cake. One cuts, the other picks first. The cutter is incentivized to cut evenly without anyone enforcing fairness. The structure does the work.
Organizations can apply the same principle. Separate the hiring manager who advocates for a candidate from the committee that evaluates against a standard. Make sure the person who sets a metric isn’t the same person measured by it. Write down what success looks like before the project starts, not after. Each removes a conflict of interest without adding bureaucracy.
When predictions are written down and falsifiable, the conversation shifts from “who tells the most convincing story” to “who was actually right.” When progress is visible to everyone, it’s harder to claim credit for someone else’s work or bury problems until they’re someone else’s problem.
Politics thrives on fog. Transparency burns it off.
None of this creates motivation. Good incentive design can’t make people care about their work. But bad design destroys it in months. A fair game protects the motivation people already have. That’s the job.
Seeing the Game
Most people never see the game. They just feel the frustration of losing it. Someone who keeps getting passed over for promotion doesn’t think “the incentive structure rewards visibility over infrastructure.” They think “I’m not good enough” or “this place is political.” Both framings lead to the wrong response: work harder at the wrong thing, or give up.
The diagnostic question is simple: how would things turn out if everyone acted this way? If the answer is badly, the structure is broken. If one person’s success requires another’s failure, the game is zero-sum and needs to be redesigned. If the only way to advance is to expand your empire, every team will bloat. If the only visible work is firefighting, nobody will invest in prevention. The behavior is telling you what the game rewards.
Sometimes the fix is structural. Redesign the promotion criteria, change the metrics, separate the roles. But sometimes the fix is creating room. Two directors fighting over the same scope isn't a people problem; it's a scarcity problem: there isn't enough surface area for both of them to win. The leadership move is to open new ground, not referee the fight.
Intellectual honesty is a competitive advantage. That’s true, but only if the game rewards honesty. Most don’t. An organization that punishes people for admitting error and rewards people for performing confidence will never get intellectual honesty, no matter how many values posters it prints. The game has to change first.
Before you blame the players, check the rules.