AI for Finance: What I Tried, What Worked, What I'd Skip

Saul Mateos

89% of Finance Execs Plan to Upskill for AI. Here's What Happens When You Actually Do It.

That stat comes from EY's latest survey. Planning. Not doing. I've been doing it for two years now, and the gap between "plan to upskill" and "actually changed how we work" is enormous.

Here's an honest scorecard. Six areas I tested, what stuck, and what I quietly stopped doing.

1. Parallel Working

This was the single biggest unlock, and it has nothing to do with finance specifically.

I run multiple AI sessions simultaneously across different workstreams: finance reporting, operational analysis, internal tools. One finishes a task, I review it. While I'm reviewing, the others are building. When I send feedback on the first, I rotate to the next.

A set of tasks that would take 2-3 hours single-threaded takes 15-20 minutes when parallelized. The bottleneck was never the AI. It was me working sequentially.

Fair warning: context switching has a cost. You need enough familiarity with each project to review output quickly, or you'll introduce errors trying to move too fast. Start with two parallel sessions, not eight. Keep a log of what each session is doing, because when you come back to a finished task, you need to remember what you asked for.

2. Financial Modeling

This landed in a good spot, but not where I expected.

I initially thought AI would build complete financial models from scratch. It can't. Not well, anyway. Financial models are too nuanced, too dependent on institutional knowledge and specific business logic.

Where AI excels: scaffolding. Give it your driver assumptions and it'll build the structure of a three-statement model in minutes. Then you refine. AI gets you to 80% fast. The last 20% is where your judgment matters: which assumptions to stress, where the logic needs business-specific adjustments, how the sensitivities actually play out.

The workflow that stuck: AI for the initial build and formula architecture, then manual refinement for the parts that require context only you have. For updates and scenario runs, AI is genuinely fast. Changing five assumptions and seeing the downstream impact in seconds rather than minutes adds up over a month of iterations.
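To make the scenario-run point concrete, here's a minimal sketch of what a driver-based model looks like once the scaffold exists. Every driver name and number below is invented for illustration; a real model would have far more line items and business-specific logic.

```python
# Minimal driver-based projection sketch. All drivers and values are
# illustrative placeholders, not real figures.

def project(drivers, months=12):
    """Roll driver assumptions forward into a simple monthly P&L."""
    rows = []
    revenue = drivers["starting_revenue"]
    for m in range(1, months + 1):
        revenue *= 1 + drivers["monthly_growth"]
        cogs = revenue * drivers["cogs_pct"]
        opex = drivers["fixed_opex"]
        rows.append({
            "month": m,
            "revenue": round(revenue, 2),
            "gross_profit": round(revenue - cogs, 2),
            "ebitda": round(revenue - cogs - opex, 2),
        })
    return rows

base = {
    "starting_revenue": 100_000,
    "monthly_growth": 0.03,
    "cogs_pct": 0.40,
    "fixed_opex": 35_000,
}

# A scenario run is just the same model with changed assumptions:
downside = {**base, "monthly_growth": 0.01, "cogs_pct": 0.45}

base_run = project(base)
downside_run = project(downside)
print(base_run[-1]["ebitda"], downside_run[-1]["ebitda"])
```

The point isn't the model itself; it's that once assumptions are separated from structure, rerunning five changed assumptions is one function call instead of an afternoon of cell edits.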

One gotcha I learned the hard way: version control. AI-generated updates will overwrite manual changes if you're not careful. You have to be explicit about preserving edits you've made outside the AI workflow.

3. Reporting and Dashboards

This is where AI delivers the most visible ROI, and honestly where I think most finance teams should start.

I've built dashboards that replaced manual processes taking someone half a day every week. The first one took me a couple of hours to set up. Now I can spin up a new reporting view in under 30 minutes because I've built up a library of patterns.

The results speak for themselves: automated data pulls, real-time visualizations, drill-down capability. All things that previously would've required a BI team or expensive platform licenses.

Two things I learned the hard way. First, your data and calculations need to live in a proper backend, not embedded in the dashboard itself. AI usually handles this correctly, but you need to understand the concept or you'll end up with something fragile that breaks when the data changes shape. Second, security. When you're building dashboards with real financial data, authentication and access controls aren't optional. Don't skip this step, even for internal tools.
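The backend point is easier to see in code. Here's a stripped-down sketch of the split, using plain Python with no framework; the function names, field names, and figures are mine, and the in-memory list stands in for what would really be a warehouse query or API call.

```python
# Sketch of keeping calculations in a backend layer instead of embedding
# them in the dashboard. All names and numbers are illustrative.

RAW_ACTUALS = [  # stand-in for a database query result
    {"month": "Jan", "budget": 120_000, "actual": 114_000},
    {"month": "Feb", "budget": 125_000, "actual": 131_250},
]

def compute_variances(rows):
    """Backend layer: all logic lives here and can be tested on its own."""
    out = []
    for r in rows:
        var = r["actual"] - r["budget"]
        out.append({
            "month": r["month"],
            "variance": var,
            "variance_pct": round(var / r["budget"] * 100, 1),
        })
    return out

def render_dashboard(metrics):
    """Presentation layer: formatting only, no calculations."""
    lines = [f'{m["month"]}: {m["variance"]:+,} ({m["variance_pct"]:+}%)'
             for m in metrics]
    return "\n".join(lines)

print(render_dashboard(compute_variances(RAW_ACTUALS)))
```

When the data changes shape, you fix `compute_variances` once; a dashboard with formulas embedded in every widget breaks in a dozen places instead.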

4. Writing and Communication

This one's more nuanced than people think.

AI can draft board commentary, executive summaries, investor updates. First-draft quality is solid, and it saves time. But here's the problem: AI writing has a smell. Anyone who reads enough of it can spot the patterns: the filler phrases, the overly structured paragraphs, the relentless positivity.

If your board or investors start suspecting that your CFO commentary was AI-generated, you have a credibility problem. Not because using AI is wrong, but because careless AI writing signals careless thinking. People wonder: if you let AI write the narrative, did you also let it do the analysis?

The solution: use AI for the first draft, then rewrite it in your voice. Remove the filler. Add the specific details and opinions that make it sound like a human who actually knows the business wrote it. This takes discipline, because the AI draft looks "good enough" and it's tempting to just send it.

5. Team Adoption

Using AI yourself is one thing. Getting a team to adopt it is something else entirely.

I've trained multiple teams on AI tools. Here's the honest math: out of any group, about 30% will actually change how they work. The other 70% will attend the training, nod along, and go back to their spreadsheets.

That sounds like failure, but it's not. Those 30% who adopt? They produce 3x output. The math works even at partial adoption.

What doesn't work: company-wide AI initiatives, mandatory training sessions, sending everyone a list of "top 10 AI tools." Nobody changes behavior from a webinar.

What works: finding one person's specific pain point, building an automation for that exact problem, and letting them become the evangelist. Start with one person's real problem, not a corporate initiative. The success stories spread on their own.

6. What I'd Skip

AI-powered general-purpose chatbots for finance questions. These sound amazing in demos. "Ask your data anything!" In practice, they hallucinate numbers, misunderstand accounting concepts, and create more work verifying their answers than just doing the analysis yourself. Maybe in a few years. Not today.

Automated email drafting without heavy review. I mentioned the AI smell problem. For routine internal emails, fine. For anything client-facing or board-facing, the review process takes long enough that the time savings are marginal.

Trying to automate judgment calls. AI can tell you that revenue is trending 12% below forecast. It can't tell you whether that's because of a temporary market dip or a fundamental problem with your go-to-market strategy. Stop trying to make it do this. Use it for the analysis, keep the judgment human.

The Path I'd Recommend

If I were starting over, here's the order I'd follow:

Month 1: Pick one reporting workflow and automate it with AI. Something that takes hours and happens every month. Variance analysis is the obvious choice.

Month 2: Build your first dashboard. Replace one manual report with a real-time view. Let the team see what's possible.

Month 3: Start using AI for financial model scaffolding. Not replacing your models, just accelerating the build process.

Month 4+: Identify your 2-3 team members who are genuinely curious about AI. Invest in them. Build their workflows. Let them pull the rest of the team along.
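The Month 1 workflow, automating the mechanical part of variance analysis while leaving the judgment to a human, might look something like this sketch. The threshold, account names, and figures are placeholders.

```python
# Month 1 sketch: flag variances mechanically, leave the "why" to a human.
# Threshold, accounts, and numbers are illustrative placeholders.

THRESHOLD_PCT = 5.0  # flag anything more than 5% off budget

def flag_variances(lines):
    """Return only the lines that breach the threshold, ready for review."""
    flagged = []
    for line in lines:
        pct = (line["actual"] - line["budget"]) / line["budget"] * 100
        if abs(pct) >= THRESHOLD_PCT:
            flagged.append({
                "account": line["account"],
                "variance_pct": round(pct, 1),
                "commentary": "",  # deliberately blank: a human fills this in
            })
    return flagged

month_end = [
    {"account": "Revenue", "budget": 500_000, "actual": 440_000},
    {"account": "Payroll", "budget": 200_000, "actual": 203_000},
    {"account": "Marketing", "budget": 50_000, "actual": 61_000},
]

for item in flag_variances(month_end):
    print(f'{item["account"]}: {item["variance_pct"]:+}% needs commentary')
```

Note the blank commentary field. The script decides what's worth looking at; it never decides why the number moved. That division of labor is the whole point of the later sections.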

The Honest Summary

AI saves real time on specific, repeatable tasks. It's not good at anything requiring judgment, relationships, or the context your team carries in their heads.

None of my wins came from buying a tool or attending a conference. Every single one followed the same path: I did the task manually first, documented the repeatable steps, and then automated the mechanical parts.

Stop collecting AI tool lists. Start with one workflow that's eating your team's time and automate that. One. Not five. Not a roadmap. One workflow, this week.

Want to talk about your finance function?

I spend 30 minutes with CFOs and finance leaders every week discussing how AI fits into their operations. No pitch, just a conversation.

Book a 30-Minute Conversation

or email us at hello@strategiq.so