
"Before using beanies for automation, reconciling our accounts often took my entire 30-minute morning call. Now it takes just three minutes."
Coral Tolley-Fletcher
JVCA

The numbers are hard to ignore. AI adoption among accounting firms quadrupled last year, and half your clients are already making plans to reduce how much they rely on human input. Most firms know something is shifting; they're just not sure what to do about it yet. That's exactly what this session is for.
Key Takeaways:
1️⃣ - The profession hasn't adopted AI; it's experimented with it. Adoption is the wrong word.
2️⃣ - Most firms setting AI targets are setting the wrong ones. An AI toolset and an AI strategy are not the same thing. (RPA in 2015 made this mistake.)
3️⃣ - The profession isn't ready to give AI agency yet, and in some areas may never be, with good reason.
4️⃣ - Clients are moving on AI faster than their accountants are. The uncomfortable question is landing on more desks every month.
5️⃣ - You don't need AI to automate most things in your firm. A lot of "AI transformation" is just good automation wearing a fashionable label.
Yes, absolutely, and the sandbox isn't optional; it's the foundation. The methodology I'd recommend has three phases that map closely to how I've deployed enterprise automation in regulated environments for over a decade.
The first is foundation: ideally, get your processes documented and stable before you automate anything. Henry's comment in the chat hit this exactly: if your SOPs are shaky, your AI implementation will be shaky. AI doesn't fix bad process; it accelerates it, in both directions. In the real world, however, solidifying processes and automating them often happens in parallel as part of a single exercise. This can be faster and more effective, so don't think you have to spend months getting all your processes right first; AI can be an incredible enabler for this step too.
The second is sandbox and supervised prototype: build in an isolated environment, then run it in parallel with the manual process for long enough to see the long tail of edge cases, not just the happy path that's easy to handle. Karl's example in the chat is exactly why this matters. A 90%-working agent isn't 90% useful; it's potentially worse than no agent, because the failures are silent and the consequences are real. This is where your agent's specialisation, personalisation, and context come in.
The third is staged production rollout with measurable guardrails: clear success criteria, human-in-the-loop on judgement calls, full audit logging, and an explicit rollback plan for when something goes wrong. Not if, when.
The mistake firms make is collapsing these three into one: going from demo to production overnight, then reverse-engineering the governance after the first incident. That costs more, both financially and reputationally, than doing it properly the first time. It can also be harder to untrain an agent's wrong behaviour than to teach the right behaviour from the start.
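For the technically inclined, the human-in-the-loop gate and audit trail described above can be sketched in a few lines. This is a minimal illustration under assumed names, not a real product: the `agent_suggest` stub, the confidence threshold, and the audit-log format are all hypothetical.

```python
import json
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.95  # hypothetical cut-off: below this, a human decides


def agent_suggest(transaction):
    """Stand-in for the AI agent: returns (category, confidence).
    A real agent would call a model; this stub just illustrates the shape."""
    return ("office-supplies", 0.80)


def audit_log(entry):
    """Append-only audit trail: every decision is recorded, agent or human."""
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(entry)  # a real system would persist this, not return it


def categorise(transaction, human_review):
    """Human-in-the-loop gate: the agent proposes, but low-confidence
    calls are routed to a named human before anything is posted."""
    category, confidence = agent_suggest(transaction)
    if confidence >= CONFIDENCE_THRESHOLD:
        decision, decided_by = category, "agent"
    else:
        decision, decided_by = human_review(transaction, category), "human"
    log = audit_log({"txn": transaction, "suggested": category,
                     "confidence": confidence, "decision": decision,
                     "decided_by": decided_by})
    return decision, log


# Usage: the reviewer callback is where the accountant's judgement lives.
decision, log = categorise("Amazon 12.99", human_review=lambda txn, cat: cat)
```

The point of the sketch is the routing, not the model: because the stubbed confidence (0.80) falls below the threshold, the decision is attributed to the human reviewer, and both the suggestion and the final call land in the audit record, which is what makes a staged rollback possible.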
Three reasons, and they compound.
The first is the demo problem. AI demos are extraordinary; they're built to look effortless, and they show the tool at its best on a curated example. People watch a demo and assume that's what Tuesday morning will look like. It isn't. Tuesday morning is the long tail of edge cases, and that's where the time investment goes.
The second is the language we use. We talk about "deploying" AI like it's installing a printer, push of a button, as Karl's client put it. The reality is closer to onboarding a new team member, which is exactly the framing Fozia used in the chat. You wouldn't expect a new junior to be productive on day one. You'd expect to invest three to six months in their development before they're contributing at full capacity. AI deserves the same patience, and for the same reasons: it needs context, it needs feedback, and it needs trust to be earned through observed performance.
The third is that the cost of getting it wrong has historically been hidden. With manual processes, errors are absorbed into rework and nobody calls them errors. With AI, the errors are visible, which feels riskier even when it isn't, and the temptation is to abandon the tool rather than to give it the time and feedback it needs to improve. The firms that succeed are the ones who treat the first six months as investment, not evaluation.
Yes. This is one of the most genuinely optimistic angles on AI in the profession, and I don't think it gets enough airtime.
Used well, AI is a thinking partner. It surfaces options you hadn't considered, challenges assumptions, and lets you explore scenarios faster than you could on your own. That changes the nature of professional work in a useful way: less time assembling the answer, more time interrogating it. Less time on what the numbers say, more time on how to think about what they mean.
The grey-area point is the one I'd push hardest on, because that's where professional judgement actually lives. Compliance has a right answer; advisory work doesn't. Most of the value an accountant brings to a business owner sits in the ambiguity, the should we, the what if, the what would happen if we changed this. AI is genuinely good at exploring those spaces, presenting tradeoffs, and stress-testing decisions. It doesn't replace the judgement, but it expands the surface area you can apply judgement to.
The caveat: this only works if the accountant is genuinely engaged with the AI's output, not deferring to it. The risk isn't that AI starts thinking for us, it's that we stop thinking with it and start outsourcing the thinking entirely. Used as a partner, it sharpens you. Used as an oracle, it dulls you. The discipline is choosing the first mode every time.
Honestly, partly yes, partly no, and the distinction matters a lot.
Yes, in the sense that no firm should have to become an AI research department to do their job well. The vendors you trust with your core stack should be doing the heavy lifting on integration, security, compliance, and keeping pace with the underlying models. That's exactly what you're paying them for.
But, and this is the bit I'd push back on, not in the sense of waiting for the incumbent ledger and practice management vendors to bring AI to you. Their commercial incentive isn't to disrupt their own products with genuine AI capability; it's to add AI-flavoured features that protect the existing model. The firms waiting for their software vendor to "catch up" are usually waiting in the wrong queue. AI models deployed by your vendor are also designed to be effective mostly at the overall enterprise level, rather than at your firm's level, where personalisation and specialisation are driven by your staff and your clients.
The healthier middle position is this: rely on a small number of trusted vendors who are AI-native, not AI-retrofitted, and let them do the integration and currency work for you. You shouldn't need to track every model release; you should only need to track which two or three vendors are genuinely on top of it, and trust them to keep you current. That's a smaller, more manageable problem. I've seen some genuinely astounding variation when comparing the outcomes from different models on the same scenarios. Those results would probably have most accountants running for the hills, as far from AI as possible, but the comparison is an essential step for any vendor trying to develop the best solutions for the profession.
Where firms get this wrong is when they try to keep up with everything themselves, every product launch, every announcement, every new tool. That's exhausting, expensive, and unnecessary. Pick the vendors whose business model is built on staying current and setting clear expectations for users. Let them carry that load.
My honest answer is: in some areas yes, in others no, and the distinction is healthy.
Karl's framing in the chat is the right one: for a lot of accounting work, the goal isn't full agency, it's augmentation. A human-in-the-loop model where AI does the heavy lifting and the accountant retains judgement is not a stepping stone to full autonomy; it's the destination, for any work where professional responsibility, client trust, or regulatory accountability sits with a named human.
Where full agency makes sense is in the high-volume, low-judgement work (bookkeeping reconciliations, transaction categorisation, basic compliance flows) where the cost of an error is contained and the audit trail is robust. Where it doesn't make sense, and may never, is in client-facing advisory, in judgement calls on materiality, in any conversation where the human relationship is the value being delivered. Deborah's point about the human connection is exactly right, and it's not nostalgic, it's commercially correct. Clients pay for trust, and trust is hard to delegate.
So the right question isn't "will we go fully agentic?" but "where in the practice does agency belong, and where doesn't it?" The firms that answer that question deliberately and curate their AI strategy will end up with the right balance; the firms that try to go all-or-nothing in either direction will get it wrong. Don't feel that it's all or nothing with AI. Your clients will expect you to use AI, and they'll start to perceive the value of your services differently too. So show them that you're using AI with a clear, well-defined, well-intended purpose, and that you're incorporating the firm's collective years of experience and expertise to provide a hyper-personalised, specialised, and irreplaceable AI-enabled service.


"A team member was off for a week and the bot picked up all their work, no problems at all. They pretty much walked straight back into work without any lag in catch-up time!"
David Adderson
Co-Founder at YouTopia

"beanies completely transformed our firm. We're able to rapidly scale and increase our offerings too!"
Chris McKenna
TC Group
Remember:
The future of accounting isn't software; it's outcomes!
Design and build the smart AI and automation infrastructure that takes your accounting firm to the next level.
