Decision 2: What Dies to Fund This?

Every budget discussion pretends resources are infinite. They're not. Every dollar spent on AI is a dollar not spent somewhere else. Every engineer building AI is an engineer not fixing production systems. Every hour in AI planning meetings is an hour not spent fighting today's operational fires.

The question isn't "Can we afford this AI project?" The question is "What do we stop doing to afford it?"

Most organizations won't answer. They add the AI project on top of everything else. Everyone gets stretched thinner. Quality drops across the board. The AI project gets whatever's left over—which is never enough. So it limps along, underfunded and under-resourced, until someone kills it or it becomes Dr. Strangebot.

Here's what killing something actually looks like: Northern Star Mining has three maintenance engineers. They currently maintain equipment, respond to failures, and optimize preventive maintenance schedules. That's their job. All of it matters.

Northern Star wants to build the predictive maintenance AI. It needs one of those engineers full-time for nine months. Not "whenever they have time." Not "20% of their capacity." Full-time. Which means the other two engineers now cover the work three people used to do.

So what dies? Option one: preventive maintenance optimization. The engineers stop improving the preventive maintenance schedules. They maintain existing schedules but don't optimize them. That means slightly higher maintenance costs—maybe 3-5% more than optimal. Northern Star accepts this. They're trading 3-5% higher costs this year for 15% lower costs next year if predictive maintenance works.

Option two: response time. Equipment failures get fixed, but slower. Instead of two-hour response times, now it's four hours. Production halts a bit longer. Costs a bit more. But Northern Star accepts this temporary hit for the long-term gain.

Option three: don't build predictive maintenance. Keep all three engineers doing what they're doing. Accept that competitors who deployed predictive maintenance will maintain their cost advantage.

There's no fourth option where everyone works harder and does everything. That path leads to burnout and botched execution on all fronts.

The trap: Northern Star announces the AI project. They "assign" the engineer. But they don't actually stop anything. The engineer still has maintenance duties. Still responds to failures. Still attends the same meetings. Now they're also building AI in the margins.

Nine months later: the AI is half-built. Maintenance response times degraded anyway because the engineer was stretched too thin. Preventive maintenance optimization stopped happening. And the AI project failed because it never got the focus it needed.

Worse than doing nothing. They paid the cost of stopping things without actually stopping them. And they paid the cost of building AI without actually building it.

Clear trades look like this: "We're taking Sarah full-time off maintenance optimization for nine months to build predictive maintenance AI. During those nine months, preventive maintenance schedules will not be optimized. That's approximately $200K in higher maintenance costs we're accepting. After nine months, if predictive maintenance works, we save $800K annually. If it doesn't work, we wasted nine months and $200K, and we restart maintenance optimization."
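
A trade stated that explicitly has a side benefit: you can put a rough break-even on it. Here's a minimal back-of-the-envelope sketch in Python using the illustrative figures above; the 60% success probability is an assumption made up for the example, not something Northern Star knows.

```python
# Back-of-the-envelope check on the Northern Star trade (illustrative numbers).
# Assumption: the 60% chance the predictive maintenance AI works is invented
# for this example; plug in your own estimate.
cost_of_pausing_optimization = 200_000   # higher maintenance costs during the nine months
annual_savings_if_it_works = 800_000     # predictive maintenance payoff per year
p_success = 0.60                         # assumed probability the AI works

expected_value = p_success * annual_savings_if_it_works - cost_of_pausing_optimization
break_even_p = cost_of_pausing_optimization / annual_savings_if_it_works

print(f"Expected first-year value of the bet: ${expected_value:,.0f}")
print(f"Break-even probability of success:    {break_even_p:.0%}")
# With these numbers, the bet pays for itself in year one as long as you
# believe the AI has at least a 25% chance of working.
```

The point isn't the spreadsheet math. It's that an explicit trade is concrete enough to run numbers on, and an unclear one isn't.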

Unclear trades look like this: "We're prioritizing AI while maintaining operational excellence across all functions." Which means: we're doing everything, which means we're doing nothing well.

What you need to know: Organizations hate explicit tradeoffs. Saying "we're stopping X to do Y" feels like failure. So they avoid it. They say "we'll do both" and hope harder work bridges the gap. It never does. The only way to do something new is to stop doing something old. The only question is whether you choose what stops or let it happen randomly.

Budget is easy to name. Headcount is harder. Attention is nearly impossible. But attention is where most AI projects die.

Your VP of Operations has twelve priorities. You add AI as number thirteen. Guess what gets their focus when there's a production crisis? The twelve things they were already managing. AI becomes the "we'll get to it next quarter" project. It never dies officially. It just fades from neglect.

If AI is priority thirteen, kill it now. Save everyone the slow death.

Real prioritization means something drops from the list. If AI matters, something else doesn't matter as much this year. Name it. The project that waits. The optimization that pauses. The initiative that gets shelved.

Northern Star's blast optimization project needs two geologists and a data scientist for twelve months. The geologists currently analyze drill samples to identify ore grades. That analysis helps blast engineers design optimal patterns. It's valuable work. It directly impacts ore recovery.

What dies? The geologists stop analyzing every drill sample in detail. They focus on high-value areas and use historical data for the rest. That means slightly less optimal blast patterns in some areas—maybe 1-2% less ore recovery in those zones. But if the AI works, the company gets 4-5% better recovery across all zones.

Short-term sacrifice for long-term gain. Explicit trade. Everyone knows what's being risked and what's being gained.

Or Northern Star could say "the geologists will do both." They'll analyze samples and build AI. Reality: they'll do neither well. Sample analysis slows down, blast patterns suffer, and the AI project drags on for eighteen months instead of twelve because no one has dedicated time.

The hardest question: what if the AI fails? Then Northern Star spent twelve months with suboptimal sample analysis and got nothing in return. They lost 1-2% ore recovery for a year and have no AI to show for it. That's the bet. That's what risk means.

Organizations that can't stomach that trade shouldn't build transformational AI. They should buy proven vendor solutions for catch-up projects. Less risk. Less upside. And an honest match for an organization that doesn't have the risk tolerance for bets that might fail.

Budget gets allocated in neat rows on spreadsheets. Money has clear owners. Someone controls that budget line and can redirect it. Uncomfortable, but clear.

Headcount is messier. People report to managers who have incentives to keep their teams intact. Taking someone full-time means their manager loses capacity. That manager's metrics suffer. Their team delivers less. Their performance review reflects it.

So managers fight it. "Sarah can contribute 20% to the AI project while maintaining her current responsibilities." Translation: Sarah will be spread too thin to succeed at either.

This is where executives matter. Someone senior enough to override local optimization. Someone who can say: "Sarah is off your team for nine months. Your metrics will be adjusted to reflect reduced capacity. We're not pretending you can do the same work with fewer people."

Without that air cover, the AI project gets whatever's left over. Which is never enough.

The remote sites project at Northern Star is the nobody-else-is-doing-this project. It needs two years and significant resources. It might work. It might not. If it works, Northern Star operates fifteen remote sites generating $15M annually that competitors can't touch. If it fails, they spent two years and got nothing.

What dies to fund it? Northern Star has been planning to expand their largest existing site. That expansion would add $8M in annual revenue. Proven approach. Known risks. Solid return.

They can't do both. Not enough engineering capacity. Not enough capital. Not enough executive attention.

Choice: Expand the proven site for $8M certain gain. Or build the remote sites capability for potential $15M gain that might not work.
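
The same back-of-the-envelope arithmetic puts the two options on a common footing. A minimal sketch, again in Python, with an assumed (not given) 50% chance the remote sites bet pays off:

```python
# Comparing the two Northern Star options on rough expected annual revenue.
# Assumption: the 50% success probability is illustrative, not from the text.
expansion_annual_revenue = 8_000_000     # proven site expansion, near-certain
remote_sites_annual_revenue = 15_000_000 # remote sites AI, if it works
p_remote_success = 0.50                  # assumed probability the AI bet pays off

expected_remote = p_remote_success * remote_sites_annual_revenue
break_even_p = expansion_annual_revenue / remote_sites_annual_revenue

print(f"Expected annual revenue from the AI bet: ${expected_remote:,.0f}")
print(f"The bet beats the expansion only above:  {break_even_p:.0%} confidence")
# Roughly: the remote sites bet has to be better than a coin flip (about 53%)
# before it beats the certain $8M expansion on expected value alone.
```

Below that confidence level the proven expansion wins; above it, the bet does. Either way, the choice is now explicit instead of implied.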

This is strategy. Choosing what not to do so you can do something else. Most organizations say "we'll find a way to do both" and end up doing neither well.

The expansion gets delayed because resources keep getting pulled to the AI project. The AI project struggles because people keep getting pulled back to "urgent" expansion issues. Two years later, the expansion is half-done and over budget. The AI project is still in development with nothing operational.

Better decision: Pick one. Kill the other. If you choose the remote sites AI, the expansion waits two years. Accept it. Plan for it. Adjust expectations. If the AI works, you get $15M. If it fails, you start the expansion in year three.

If you choose the expansion, don't pretend to build the AI. Kill it cleanly. Take the $8M certain gain. Maybe revisit remote sites AI in three years when you have capacity.

Both are defensible. What's indefensible is pretending you can do both and failing at both.

What you need to know: The trap isn't making the wrong choice. The trap is refusing to choose. Organizations that won't kill anything don't build anything meaningful. They spread resources across thirteen priorities and wonder why none of them succeed.

Killing something doesn't mean it's bad. It means something else matters more right now. That's all. Maybe it matters more next year. Maybe it never matters enough. But this year, it's not the priority.

The projects that survive aren't the best projects. They're the projects someone was willing to kill something else to fund.

Most AI initiatives die from resource starvation dressed up as patience. "We're taking our time to do it right." Translation: nobody could agree what to stop, so the project gets scraps.

If you won't kill something to fund AI, don't build AI. You're not serious enough to succeed.

Dr. Strangebot loves organizations that refuse to choose. He feeds on divided attention and spread resources. He thrives in environments where everything is a priority, which means nothing is.

The question isn't whether your AI project is worth doing. The question is whether it's worth more than the thing you'd stop doing to fund it.

Answer honestly or don't start.

You've decided what to build and what to kill to fund it. Now the question gets harder: How do you prove it's worth it before you've built it?