When Your AI Goes Down: Building Resilient Finance Operations
Claude was down last week.
Not my internet. Not my laptop. The model itself was unavailable. Anthropic had an outage and for several hours I sat staring at a terminal that kept timing out, refreshing a status page, and slowly taking stock of everything I had planned to do that morning.
It wasn't a disaster. But it was a mirror.
How Deep the Dependency Actually Goes
I didn't realize how much of my daily workflow runs through a single AI provider until that provider went offline.
Territory allocation and case distribution analysis? I've built that in Claude Code. Monthly variance commentary drafts? Claude. Cash flow scenario runs when I need a quick sense check? Claude. Email drafts when it's 9pm and I've used up my best thinking for the day? Claude. Data pipeline debugging, policy document updates, board slide outlines, FP&A model scaffolding. All of it flows through the same tool, the same API, the same infrastructure.
I've written before about running roughly 80% of my work as vibe working. That framing made it sound like a choice. This outage made it feel like a dependency. There's a difference.
A few years ago I would have had 12 browser tabs open, two Excel files running, and a Slack thread moving in parallel. That was the workflow. AI didn't replace it so much as it compressed it. Now I have fewer open loops because one tool handles more of the execution layer. That's genuinely more efficient. It's also genuinely more fragile.
When the tool came back online that afternoon, my first thought was relief. My second thought was: I should have a backup plan.
The Single Point of Failure Problem
Every finance professional understands concentration risk. We model it constantly. Customer concentration, vendor concentration, counterparty exposure. We know what happens when too much depends on a single node in the system.
We just don't apply that thinking to our own workflows.
Running 80% of your execution through one AI provider is, by any reasonable risk framework, a concentration problem. Not a catastrophic one. The downtime was hours, not days. But the principle holds: any workflow where a single provider going offline stops your work has a structural vulnerability.
This is distinct from the normal tool risk most finance teams carry. If Excel goes down, you lose Excel. But Excel is local. Your files are local. The functionality doesn't disappear when Microsoft has a bad day. AI tools are different. The capability lives in someone else's data center. The API is infrastructure you don't control, can't inspect, and can't patch.
Cloud concentration risk is not new. Every CFO knows that AWS going down takes half the internet with it. But we've internalized that risk because it's infrastructure we can point to, monitor, and usually route around. AI provider risk feels softer because the tool feels personal, like a smart colleague rather than a server rack. That's exactly why we underestimate it.
What I Actually Did for the Rest of That Day
The honest answer is: I worked. More slowly. More deliberately. And with more friction than I expected.
The first thing I noticed was that some tasks stalled outright. I had three deliverables queued that depended on AI-assisted drafting, and I had no manual fallbacks documented. I knew roughly what each one needed, but I had to reconstruct the process from memory rather than from any playbook.
The second thing I noticed was that I'd let some manual muscle memory atrophy. Running a scenario analysis that used to take me ninety minutes in Excel now takes longer, because I've been offloading the model setup to AI and jumping straight to reviewing the output. When the AI wasn't there, I had to rebuild the logic from scratch. I could do it. But it was slower than it should have been.
The third thing I noticed was actually encouraging. For the judgment-heavy work, I was fine. Writing a board narrative from a blank page, reasoning through a client's pricing model, talking through a hiring decision with a colleague. None of that needed AI. And working through those problems manually, without the shortcut of asking Claude to draft a first cut, produced thinking that felt sharper in a few places. I'll get back to that.
By mid-afternoon I had a list. Not a to-do list. A gap list. The workflows where I had no fallback, the processes I'd never written down, the tools I'd stopped using because AI had made them redundant, and that I hadn't verified I could still use if I needed to. It was longer than I wanted it to be.
The Insight I Didn't Expect
When Claude came back online, I used it differently for the rest of that week.
This is the part that surprised me. The outage wasn't just a resilience problem. It was a diagnostic.
I'd been using AI for two categories of work without distinguishing between them. Execution tasks, where the work is structured, repeatable, and doesn't require judgment. And thinking tasks, where I'm wrestling with ambiguity, making trade-offs, or forming an opinion. AI is genuinely excellent at the first category. For the second category, I'd been outsourcing more than I realized.
When you ask AI to draft the narrative for a variance analysis, you skip the step where you sit with the numbers and form a view. The draft comes back and it's good enough, so you edit it and move on. But the editing is a different cognitive act than the forming. You're reacting, not originating. Over time that's a subtle but real erosion of the thinking habit.
The day without AI forced me to originate again. And a few of those moments produced cleaner thinking than I'd had in a while, because there was no scaffold to react to.
The lesson isn't that AI makes you worse at your job. It's that AI for execution is different from AI for judgment, and conflating the two quietly degrades the judgment side. Now I'm more deliberate about which mode I'm in. AI handles execution. I handle the judgment calls myself and use AI to stress-test them after, not before.
Why Finance Is Different
Every department has AI dependency risk. Marketing, legal, operations. But the stakes aren't equal.
Finance is the function where accuracy isn't a quality bar, it's a requirement. An AI-drafted marketing email that misses the mark gets revised. An AI-generated variance explanation that misidentifies the driver makes its way into a board package and gets acted on. The downstream consequences are fundamentally different.
Finance also carries compliance exposure. Audit trails, data handling requirements, regulatory reporting. If your AI tool goes down and you can't reconstruct your process manually, you have a documentation problem that outlasts the outage. Auditors don't accept "the model was unavailable" as a footnote.
There's also the institutional credibility angle. CFOs are paid for their judgment. The moment your team or your board senses that the judgment is being outsourced to a tool, the credibility calculus changes. That's true even when the AI output is good. The perception of dependency has its own risk profile separate from the actual quality of the work.
This is why finance leaders need to think about AI resilience differently than their peers in other functions. The tolerance for error is lower. The compliance requirements are real. And the professional credibility attached to the work is harder to rebuild if something goes wrong.
A Practical Resilience Framework
After the outage I spent a few hours building what I should have had before. Four components.
First, redundancy across providers. I now use at least two AI tools for core workflows. Claude remains my primary. I've set up parallel capability in one other system for the tasks I run most frequently. Not because one is better, but because if one goes down I have somewhere to route immediately without rebuilding from scratch. (There's a rough sketch of what that routing looks like below, after the fourth component.)
Second, manual fallback documentation. For every AI-assisted workflow that touches external deliverables or financial reporting, I now have a one-page document describing how to do it without AI. Not a full manual. Just enough that a capable person could execute it in a reasonable timeframe. Think of it as a business continuity plan at the workflow level rather than the business level.
Third, a quarterly fire drill. One day every quarter where I deliberately don't use AI for anything. Not as a punishment, and not because manual is better. As a maintenance exercise. The same reason pilots do sim hours. You need the manual skill to stay functional, or you'll discover its absence at the worst possible time.
Fourth, a clear distinction between AI-appropriate and AI-inappropriate tasks. I keep a short running list. Execution tasks (formatting, scenario generation, first drafts, data structuring) are AI-appropriate. Judgment tasks (audit conclusions, board narrative formation, hiring decisions, client strategy) are AI-inappropriate for first-pass work. I'll use AI to review and pressure-test the judgment, but not to produce it.
None of this took more than a few hours to build. The documentation doesn't need to be beautiful. It needs to exist.
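For the first component, here is roughly what that routing looks like for a simple drafting task. This is a minimal sketch, not my production setup: it assumes the anthropic and openai Python SDKs, API keys set in the usual environment variables, and placeholder model names you would swap for whatever you actually run.

```python
# Minimal sketch of provider fallback for a drafting task.
# Assumes the `anthropic` and `openai` Python SDKs are installed and that
# ANTHROPIC_API_KEY / OPENAI_API_KEY are set. Model names are placeholders.
import anthropic
import openai

PRIMARY_MODEL = "claude-sonnet-4-5"   # placeholder primary model name
FALLBACK_MODEL = "gpt-4o"             # placeholder backup model name

def draft(prompt: str) -> str:
    """Send a drafting prompt to the primary provider; reroute on failure."""
    try:
        client = anthropic.Anthropic()
        response = client.messages.create(
            model=PRIMARY_MODEL,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text
    except Exception:
        # Primary provider is timing out or erroring: route to the backup
        # so the workflow keeps moving instead of stopping outright.
        client = openai.OpenAI()
        response = client.chat.completions.create(
            model=FALLBACK_MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
```

The specific providers matter less than the shape: the reroute is wired up before the outage, not improvised during it.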
The Right Relationship With Dependency
I'm not suggesting you reduce your AI usage. The productivity case is too strong and the competitive pressure is too real. Finance leaders who aren't building AI into their workflows right now are falling behind at a rate that won't be obvious until it is. I've written about that gap before and I still believe it.
What I'm suggesting is that dependency without resilience is a risk management failure. And finance people, of all people, should know how to manage that.
You buy insurance not because you expect the house to burn down, but because so much value concentrated in a single asset, in a single location, with no fallback, is a structural problem. The same logic applies here. Build the AI workflows. Run them hard. And also document the manual backup, maintain the skill, and keep a second option available.
The question isn't whether your AI will go down. It will. All infrastructure goes down. The question is whether you've designed your operations to stay functional when it does.
Embrace the dependency. Just build the safety net first.
If you want to start this week: identify your three most AI-dependent workflows, write a one-page manual fallback for each, set up a second AI tool for at least one of them, and put a quarterly "no AI day" on your calendar before you close this tab.
P.S. After that day without Claude, I went back through six months of my board commentary and variance reports. The ones I'd written before heavy AI adoption read differently from the ones I'd written after. More opinionated. More specific. A little rougher. I'm not romanticizing the old way. But I did adjust how I use the new one.
Want to talk about your finance function?
I spend 30 minutes with CFOs and finance leaders every week discussing how AI fits into their operations. No pitch, just a conversation.
Book a 30-Minute Conversation or email us at hello@strategiq.so