The honest conversation about AI in finance is harder to find than the promotional one. Most of what gets written is either vendor enthusiasm or reflexive skepticism from people who haven't actually used the tools. Here's a more grounded take on where the real limits are.
More output, and more review required
The most underappreciated limitation of AI in FP&A is that it changes the skills required to do well in the job. Output volume goes up significantly when you introduce AI into the workflow. That means the ability to critically evaluate what's in front of you — to read a model and know whether it's right, to read commentary and know whether it actually explains anything — becomes more important, not less.
This is a different skill from production. A lot of finance professionals built their careers on being fast and accurate at building things. AI is now faster at building. The question is whether the same people are equally good at reviewing, and the honest answer is that many aren't.
Cognitive surrender is a real risk
Researchers have a name for what happens when people stop critically reviewing AI output: cognitive surrender, the tendency to accept whatever is generated without applying the skepticism you'd apply to your own work. It shows up in finance when someone publishes a variance commentary that restates numbers without explaining them, or when a model assumption goes unexamined because the AI filled it in.
The risk is highest for less experienced practitioners who don't have a strong enough baseline to know when something is wrong. AI can produce output that looks right without being right, and catching the difference requires knowing what right looks like.
You can get over your skis faster now
This connects directly to the experience question. A junior analyst using AI tools can now produce output that looks like senior work. The problem is that looking like senior work and being senior work are not the same thing. The judgment layer, which means knowing which drivers matter, when an assumption is reasonable, and what the board actually needs to understand, doesn't come from the tool.
The risk isn't that AI replaces experienced finance people. It's that it creates an illusion of capability that obscures where the gaps are.
The human skills haven't changed
Every capability that mattered before AI matters just as much now. Understanding the business, communicating clearly to non-finance audiences, knowing which questions to ask when the numbers don't make sense. AI handles more of the production work. It handles none of the thinking.
The finance professionals who will do best with these tools are the ones who treat AI as a production accelerator and keep the judgment work for themselves. The ones who will struggle are the ones who try to outsource the judgment.
What this means practically
If you're introducing AI tools into your team, the most important investment you can make is in review capability, not generation capability. Training people to evaluate AI output critically is more valuable than training them to prompt better.
And if you're a practitioner building your own skills, the fundamentals matter more now, not less. The floor for what AI can produce is rising. The ceiling is set by experienced judgment, and that judgment is what separates good finance work from output that merely looks like it.