But before we roll out the red carpet for our silicon-based overlords, perhaps it’s time to take a step back, breathe, and ask: Is AI really the answer to all of our public sector challenges, or just the latest glittery gadget in a long line of digital distractions?
Because while AI can crunch numbers faster than an overly caffeinated finance officer at year-end, it’s not quite ready to write local housing policy, or to explain said policy to a grumpy councillor who missed the meeting and thinks that machine learning is a new type of parking meter.
There’s no denying AI can be a powerful tool; after all, chatbots can answer FAQs faster than you can say “planning application,” while predictive analytics can help identify service demand spikes, and machine learning algorithms can sniff out fraud faster than even the most cynical audit officer.
In fact, it’s not hard to see why so many government workers flagged AI and machine learning as top priorities in our research. Understaffed, overworked, and under constant pressure to do more with less, the idea of a 24/7 assistant that doesn’t need coffee breaks or a pension plan — a digital colleague that comes with no emotional baggage — is understandably appealing.
But — and this is a big but — while AI can absolutely augment what we do, it doesn’t (and can’t) replace the need for human judgement and empathy, or the ability to mediate between two residents arguing over the last plot in their local allotment.
These minutiae of public life — messy, emotional, and non-algorithmic as they might be — matter. Which is precisely why we can’t hand over the keys to the civic kingdom just yet.
Accuracy: The Achilles heel of AI
Let’s talk about one of the least sexy (but most vital) parts of government work: accuracy. Whether it’s processing benefits claims, handling electoral rolls, or communicating bin collection schedules (arguably the most scrutinised council service of all time), one small error can have real-life, reputationally disastrous consequences.
And here’s where things get awkward. While AI tools like large language models (hello Copilot) are brilliant at generating plausible-sounding content, they’re not always factually correct. In fact, much like a cat who’s sniffed too much catnip — but less cute and far more alarming — AI can even “hallucinate” information.
For example, imagine relying on a chatbot to draft your climate strategy only to find that it’s cited a study from the University of Atlantis. Or worse, you use it to confidently recommend a parking policy that was trialled only on the mean streets of “SimCity 4.”
Civil servants live and breathe trust, transparency, and accountability, and AI — for all its slick outputs — doesn’t yet pass the credibility test needed for life in local and central government.
Context is king
One of AI’s biggest challenges is that it doesn’t really understand context. Sure, it can take in vast amounts of information and spit out seemingly coherent answers. But does it understand that your council has a history of public consultation fatigue thanks to a controversial library closure a decade ago? No. Will it pick up on the delicate political dynamics between the elected member for parks and the councillor obsessed with turning the entire town centre into a mobility hub? Doubtful.
AI doesn’t live in your community and — more to the point — doesn’t have first-hand knowledge of your residents and their concerns. After all, it hasn’t sat in a freezing church hall trying to explain the local plan to 12 people and a dog. Likewise, AI doesn’t know that your residents are suspicious of any initiative that uses more than three acronyms, and it certainly hasn’t been asked to “just do a quick explainer video” on adult social care strategy with Comic Sans and absolutely no budget.
As a human being, your understanding of the people and place you serve comes from a very specific context, one that is rooted in a deep sense of emotional intelligence and an intimate knowledge of local politics. No AI model — however well-trained — will have your depth of knowledge or feeling for the community you serve.
Transparency: The algorithmic black box
Decision-making in local government can be a lengthy process, one that involves a multitude of factors. For the sake of this piece, let’s say that AI — in the context of local government — generates an outcome to allocate resources based on “smart” data inputs. Consider this: Who’s responsible if that decision is wrong? And more importantly, if queried, how can that decision be explained?
Public sector workers are held to a high standard of transparency. Every policy must have a paper trail, every consultation must be minuted, and every mistake must be owned. AI doesn’t do “clear rationale.” Many of the algorithms used in machine learning are opaque, even to their creators. This means that when the system generates an outcome, it’s often impossible to unpack exactly how or why that outcome was reached.
Imagine this conversation between a resident and a council officer concerning house allocation:
Resident: “Why did I get moved down the housing list?”
Council Officer: “Well, we have an AI algorithm that does the positioning.”
Resident: “Can you explain that?”
Council Officer: “Er… no. But it seemed very confident.”
That doesn’t exactly scream “robust democratic accountability,” does it?
The not-so-human touch of AI
The other problem with AI is its lack of empathy. As a human, you know it when you feel it: it’s what guides you to craft a sensitively worded council tax letter to a recently bereaved resident, what drives you to pick up the phone to call a vulnerable member of your community rather than email them, and what tells you that your local community centre — which serves as a lifeline for isolated older people — deserves so much more than the cold logic of an algorithm.
Technology can optimise and accelerate, but it cannot connect emotionally. It cannot feel the undercurrent of a public meeting or adjust its tone depending on whether someone’s arrived at your town hall furious or just confused. Human-centred design is more than a buzzword — it’s a lifeline for public trust.
Digital transformation in government should never be about cold efficiency at the cost of public service. It should be about using tools like AI to enhance our human capacity to serve — not eliminate it.
Strategy vs delivery: Bridging the AI chasm
One of the most revealing findings in the Granicus whitepaper was that while AI and data are being prioritised at the top, the real challenge lies in bridging the gap between strategy and delivery. This is where things often fall apart — not because the tech isn’t clever enough, but because change is messy, political, and deeply human.
Implementing AI in public services isn’t just about buying the latest tool — it’s about aligning teams, adapting processes, training staff, and bringing people on the journey. It’s about communicating clearly, often, and empathetically. No tool can do that without human guidance.
Without a strong communications function and visible leadership, even the best AI projects risk becoming just another expensive pilot programme gathering digital dust. The smartest councils aren’t just deploying AI — they’re using it to empower people, not replace them.
So … should we just ditch AI?
Absolutely not. AI has incredible potential. From flagging potholes using image recognition to speeding up benefit processing and even translating public service announcements into multiple languages on the fly — it can be a game changer.
But it’s a tool, not a strategy. And certainly not a miracle cure.
The best outcomes in local and central government will come from a hybrid approach, one where humans lead and AI supports. In this scenario, data is used to inform, not dictate, and services are designed around people, not processes.
And let’s be honest — if AI ever does become good enough to run the council, it’ll probably take one look at the inbox, realise half the emails are marked “urgent,” and quit by lunchtime.
Final thoughts: Keep it real, keep it human
So, the next time someone suggests that AI can fix all of your department’s woes, smile politely and ask whether the chatbot can also negotiate with unions, wrangle councillor egos, and soothe a planning officer who’s just been shouted at in a Tesco car park.
Technology is here to help, not to take over. The real transformation isn’t in replacing people with machines — it’s in freeing up your teams to do more of what humans do best: building trust, showing empathy, and designing services that truly work for real people.
Until AI can master that, we say: nice try, robot. But the tea’s on us.