AI won’t replace accounting platforms. It will make them more important
By Industry Contributor 24 March 2026 | Categories: news
By Aaron Harris, CTO, Sage
The market is spooked by AI. I understand the anxiety. But I think the conclusion many people are drawing is backwards: AI advances don’t make systems of record less relevant. They make them more important.
I’ve seen this movie before.
AI-native startups are raising billions with promises to rebuild everything from scratch. Some investors are now asking whether enterprise platforms face an existential threat.
When we started Intacct 25 years ago, we were convinced cloud accounting would flip the market in a year or two. Multi-tenant cloud accounting really was a step-change.
We were right about the direction. We were naive about the timeline.
What I underestimated wasn’t the technology. It was trust.
How hard it is to earn customer confidence. How long it takes for businesses to trust you with their financials. And how much patient, hands-on work is required to build an ecosystem of partners, developers, and accounting firms willing to commit their reputations and livelihoods to a platform.
Why trust is the hard part.
A short story.
A few months ago, a colleague in our strategy team (not a software developer) lit up a group chat after using an AI-powered tool to build a working general ledger prototype. The reaction was basically: “If it’s this easy, why won’t businesses just toss their accounting software and build their own?” Or: “Why use accounting software at all? Let AI just write everything to a database.”
I decided to repeat the experiment.
And I’ll admit it: I was impressed. In a couple of minutes, I had a working web app where I could enter a basic journal entry.
Then I looked under the hood.
There was no enforcement of the most fundamental rule in accounting: debits must equal credits.
That isn’t a “most of the time” rule. It isn’t a “close enough” rule. It’s an absolute. An accounting system that breaks it is like a calculator that occasionally gives the wrong answer. Once that happens, there’s no partial trust. There’s just no trust.
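The invariant my prototype lacked takes only a few lines to state. A minimal sketch of what enforcing it at the point of posting looks like (the names and structure here are illustrative, not any real platform’s API):

```python
from decimal import Decimal
from dataclasses import dataclass

@dataclass(frozen=True)
class JournalLine:
    account: str
    debit: Decimal = Decimal("0")
    credit: Decimal = Decimal("0")

def post_entry(lines: list[JournalLine]) -> list[JournalLine]:
    """Refuse to post any journal entry whose debits and credits don't balance."""
    total_debits = sum(line.debit for line in lines)
    total_credits = sum(line.credit for line in lines)
    if total_debits != total_credits:
        raise ValueError(
            f"Unbalanced entry: debits {total_debits} != credits {total_credits}"
        )
    return lines

# A balanced entry posts; an unbalanced one is rejected outright,
# never "mostly" accepted.
post_entry([
    JournalLine("Cash", debit=Decimal("100.00")),
    JournalLine("Revenue", credit=Decimal("100.00")),
])
```

Note the use of `Decimal` rather than floating point: in finance, even the arithmetic has to be exact.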
Accounting professionals will immediately think of other invariants: the accounting equation, the trial balance, and cash movement reconciliations. Even when the rules are simple to state, making them consistently true across the messy reality of business is not simple at all.
Take something that sounds straightforward: net change in cash should equal ending cash minus beginning cash. Now layer in the reality: different reporting contexts, different classifications, different ledgers, different entities. Credits and debits don’t even behave “the same way” across statements. Stating the rule is easy; reporting it correctly, every time, under pressure, is hard.
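The rule itself fits in one line of code. What’s hard is making it hold across every ledger, entity, and reporting context at once. A toy check, with illustrative parameter names of my own choosing:

```python
from decimal import Decimal

def check_cash_rollforward(beginning: Decimal, ending: Decimal,
                           operating: Decimal, investing: Decimal,
                           financing: Decimal) -> bool:
    """The cash flow statement's three sections must explain the
    balance sheet's change in cash exactly, not approximately."""
    net_change = operating + investing + financing
    return net_change == ending - beginning

# One entity, one currency, one period: trivial.
check_cash_rollforward(
    beginning=Decimal("500"), ending=Decimal("650"),
    operating=Decimal("200"), investing=Decimal("-30"),
    financing=Decimal("-20"),
)
```

Multiply this single assertion across consolidations, currencies, and classifications, and you have a sense of why prototypes that skip it fall apart.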
And that’s before you get to what breaks most prototypes: multi-currency transactions, intercompany eliminations, consolidations across legal entities, jurisdiction-specific tax treatments, payroll rules, and the thousand edge cases that show up in real finance.
Now consider the cost of a mistake.
Finance leaders use their accounting data to apply for loans, report to investors, and comply with government regulations. Errors don’t just create rework. They can damage careers. In extreme cases, they can lead to legal consequences.
That’s why accounting platforms put as much emphasis on traceability and auditability as they do on functionality. These platforms are as much a system of evidence as they are a system of record. Everything is tracked. Changes are controlled. Approvals are explicit. Audit trails are non-negotiable.
And accuracy isn’t the only duty. There’s also privacy and security, with real-world consequences. Violations of GDPR in Europe or CCPA in California can result in serious penalties.
So when people ask, “Why is it so hard to build trust in finance software?” the answer is simple:
Because the consequences of breaking that trust are unacceptable.
Businesses have always needed to know: if something goes wrong with our financial data, who’s accountable? That question didn’t disappear with better technology. It intensified.
AI changes the work. It doesn’t change responsibility.
That’s why I’m not worried when I hear claims that AI will suddenly make accounting platforms obsolete.
The question isn’t whether AI will change how finance work gets done. It will.
The question is: who takes responsibility when AI makes a mistake?
And that’s exactly where accounting platforms become more valuable, not less.
This isn’t scepticism about AI. It’s a reality about finance. Because while agents are good at action, they’re bad at responsibility.
In accounting, payroll, and compliance, generating an output is the easy part. Owning the outcome is the hard part.
Finance isn’t a demo. It’s a chain of responsibility: who approved this, why it was done that way, whether it can be reproduced, and whether it stands up months later in a close, an audit, or a board meeting.
An agent can suggest.
A platform makes it official.
What “finance-grade AI” actually requires
I do believe AI delivers real value to accounting professionals. I’ve spent the better part of the last decade building it.
So for me, the real question isn’t “Will AI be used?”
It’s: How do you implement AI, including agentic AI, inside mission-critical workflows without weakening trust?
To do that, you need two things:
Domain-specific intelligence designed for accuracy and predictability
Not just “smart,” but reliable. Consistent. Repeatable. Explainable. Built to handle finance invariants and edge cases.
Deployment inside trusted systems that enforce accountability
Secure by design. Auditable by default. Governed by permissions and approvals. With clear boundaries on what AI can draft, what it can recommend, when it must ask for approval, and when it must escalate because confidence is low.
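One way to make those boundaries concrete is a routing policy that maps model confidence and transaction risk to a permitted level of autonomy. The thresholds and names below are illustrative assumptions for the sake of the sketch, not a description of Sage’s implementation:

```python
from enum import Enum

class Action(Enum):
    AUTO_POST = "auto_post"              # high confidence, low risk
    DRAFT_FOR_APPROVAL = "draft"         # a human must approve before it posts
    ESCALATE = "escalate"                # too uncertain for the AI to act at all

def route(confidence: float, high_risk: bool) -> Action:
    """Decide how much autonomy an AI suggestion gets.
    Thresholds are illustrative, not prescriptive."""
    if confidence < 0.70:
        return Action.ESCALATE
    if high_risk or confidence < 0.95:
        return Action.DRAFT_FOR_APPROVAL
    return Action.AUTO_POST
```

The point of a policy like this is that the failure mode is designed in advance: when the model is unsure, the system asks, rather than acting and hoping.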
You design for safe failure. You assume the model can be wrong. You build verification and audit trails from the start. And you shift your objective from raw productivity to user confidence:
How confident is the user that transactions are captured and accounted for correctly?
How confident are they in the decisions they’re making based on that data?
And most importantly: how confident are they that they’re still in control?
In finance, “mostly right” is still wrong when it touches payroll, tax, or financial reporting.
At Sage, we’re not starting from scratch. We’ve built decades of finance and compliance intelligence into the platform. And we’ve deployed tens of thousands of domain-specific AI models across accounting, payroll, tax, payments, and compliance, designed not just to automate workflows, but to increase confidence in the accuracy and reliability of outcomes.
That’s the difference between a clever model and finance-grade intelligence.
Finance-grade AI has to be accurate and repeatable under pressure, with clear explanations when it matters, and auditability when it counts. No hand-waving. No “trust me.”
The future isn’t magic. It’s certainty.
AI is moving fast, and market narratives will move with it.
But when it comes to finance, businesses don’t want magic. They want certainty.
AI will absolutely change how accounting work gets done. I believe that deeply.
But after more than 25 years of building systems people trust with their finances, one thing is clear: businesses don’t just need AI that works. They need AI they can defend, with accountability when things go wrong.
You can’t sue an algorithm. You can’t hold an LLM liable.
But you can build AI inside platforms that were designed from day one to enforce controls, preserve evidence, and assign responsibility.
That’s not a limitation of AI.
It’s a design requirement.
And it’s exactly why accounting platforms won’t be replaced by AI. They’ll become the place where AI becomes safe enough, accountable enough, and trustworthy enough to run a business.