Every FP&A team is somewhere on the AI adoption curve right now. Some are deep into it. Most are experimenting. A few are still watching from the sidelines and wondering where to start.
The question I get most often isn't "should we use AI?" That's settled. It's "how do we think about this in a way that actually produces results?"
Here's how I'd frame it.
Start with leadership
AI adoption in FP&A doesn't happen organically. Someone has to own it, set a direction, and create the conditions for the team to experiment without fear of looking incompetent. That's a leadership job before it's anything else.
This means being explicit about what you're trying to accomplish. Faster close cycles, better variance narratives, more time on analysis and less on production — whatever the goal is, name it. Vague "use AI more" mandates produce a lot of ChatGPT experiments that never make it into actual workflows.
Once leadership is aligned on the goal, the practical work breaks into four areas: people, process, data, and tools.
People
The skills that matter most for AI adoption in finance aren't the ones you'd expect. Prompt engineering matters, but it's learnable in a week. What's harder to teach is the ability to critically evaluate AI output — to read a model, a narrative, or an analysis and know whether it's right.
This is a different skill from building things. A lot of finance professionals built their careers on production speed. AI shifts the bottleneck from producing work to reviewing it. The team members who adapt fastest are usually the ones with strong fundamentals, not the ones who are merely technically curious.
Training investment should go toward review capability as much as generation capability. Both matter, and most teams underinvest in the former.
Process
The highest-value AI applications in FP&A are almost always workflow improvements, not one-off tasks. That means identifying the processes that are currently painful, time-consuming, or inconsistent, and redesigning them with AI as part of the workflow from the start.
Variance commentary is the most common entry point. The inputs are structured, the output is reviewable, and the time savings are immediate. Build a repeatable prompt, run it for a quarter, and measure whether the output quality holds up. That one process, done well, teaches you more than any amount of general experimentation.
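To make "repeatable prompt" concrete, here is one way to sketch it: a template function that turns structured variance inputs into the same prompt shape every month. The field names and wording are illustrative assumptions, not a standard; the point is that the structure stays fixed so output quality can be compared quarter over quarter.

```python
def build_variance_prompt(line_item, actual, budget, context=""):
    """Build a repeatable variance-commentary prompt from structured inputs.

    Template and field names are illustrative; adapt them to your own
    close package. Keeping the structure fixed is what makes the output
    comparable from month to month.
    """
    variance = actual - budget
    pct = variance / budget * 100 if budget else float("nan")
    return (
        "You are drafting variance commentary for an FP&A review.\n"
        f"Line item: {line_item}\n"
        f"Actual: {actual:,.0f}  Budget: {budget:,.0f}  "
        f"Variance: {variance:+,.0f} ({pct:+.1f}%)\n"
        f"Known context: {context or 'none provided'}\n"
        "Write 2-3 sentences explaining the variance. "
        "Flag anything you cannot explain from the data as 'requires follow-up'."
    )

print(build_variance_prompt(
    "Cloud hosting", 412_000, 380_000,
    context="Usage spike from new customer onboarding",
))
```

Because the prompt is generated from the same fields every time, the review step can focus on whether the narrative is right rather than on how the question was asked.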
The governance piece lives here too. Be explicit about which outputs require a human review step before they go anywhere. Board packages, investor reporting, external disclosures — those need a senior set of eyes regardless of how the draft was produced. Build the habit before you need the policy.
Data
AI is only as good as the inputs you give it. This is where a lot of teams hit a wall. The model or the prompt is fine, but the underlying data is inconsistent, incomplete, or structured in a way that makes it hard to work with.
Before investing heavily in AI tooling, it's worth asking whether your data is clean and accessible enough to support it. Chart of accounts consistency, headcount data that matches across systems, actuals that reconcile cleanly — these aren't AI problems. They're data problems that AI will make more visible.
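Checks like these are simple to automate before any AI tooling enters the picture. A minimal sketch, assuming headcount by department exported from two systems (the "HRIS" and "GL" names here are hypothetical labels, not specific products):

```python
def reconciliation_gaps(hris_headcount, gl_headcount):
    """Compare headcount by department across two systems.

    Inputs are plain dicts of department -> count, e.g. from an HRIS
    export and a GL allocation (illustrative sources). Returns only the
    departments where the two systems disagree, with both values, so
    discrepancies surface before they feed an AI workflow.
    """
    gaps = {}
    for dept in set(hris_headcount) | set(gl_headcount):
        a, b = hris_headcount.get(dept, 0), gl_headcount.get(dept, 0)
        if a != b:
            gaps[dept] = {"hris": a, "gl": b}
    return gaps

hris = {"Sales": 42, "Eng": 118, "G&A": 15}
gl = {"Sales": 42, "Eng": 121, "Finance": 6}
print(reconciliation_gaps(hris, gl))
# Eng disagrees; "G&A" and "Finance" each exist in only one system --
# exactly the kind of mismatch that AI output will quietly inherit.
```

The same pattern extends to chart-of-accounts consistency or actuals reconciliation: diff the two sources, surface the gaps, and fix them at the source rather than in the prompt.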
Tools
The tool selection question has two layers. The first is general-purpose AI — Claude, ChatGPT, and similar — which can handle a wide range of FP&A tasks from narrative drafting to model logic to code. The barrier to entry is low and the use cases are broad. Most teams should start here.
The second layer is purpose-built FP&A platforms with AI capabilities: Aleph, DataRails, Pigment, and others. These work with the financial data structures they were designed for, which tends to make the output more reliable. The tradeoff is implementation overhead and cost. They make more sense once you know what you actually need, and that's hard to know before you've built some fluency with general-purpose tools.