In late September, the BBC reported that a UK MP became the first to “create himself as an AI bot.” The project was presented as an experiment in accessibility — an automated avatar able to field constituents’ questions and explain policy positions around the clock.
While the technical novelty drew attention, the symbolism was unmistakable: an elected official offering a digital proxy to interact with voters. The move reflects growing curiosity among politicians about how AI can extend their presence, streamline communication, and demonstrate modernity.
Yet it also raises a question that no algorithm can yet answer — whether the authenticity of representation can survive automation. Constituents may welcome instant replies, but representation depends on accountability and empathy — qualities difficult to encode. What happens if a voter feels misled by an AI MP’s response? Who is responsible: the code, the developer, or the human behind it?
If the virtual MP suggested a future of convenience, events in Northern Ireland revealed the opposite — suspicion and political theatre.
In mid-September, BBC News NI reported that Education Minister Paul Givan was accused by opposition leader Matthew O’Toole of using AI to draft a speech on special educational needs (SEN) funding. Givan dismissed the allegation as a “cheap shot” and “utterly shameful,” insisting his remarks were his own. The Department of Education later confirmed the speech “wasn’t written by AI.”
The exchange exposed how politically charged AI’s use has already become. The accusation implied laziness or deception — that a minister might outsource not only drafting but judgement itself. For O’Toole, invoking AI was a rhetorical device; for Givan, a reputational threat.
The episode underscored a new accountability gap. In earlier eras, plagiarism or spin drew rebuke; now, even the possibility of algorithmic authorship can provoke outrage. The legitimacy of speechmaking — a core ritual of democratic responsibility — suddenly hinges on proof of human origin.
A few days earlier, another BBC report described Albania’s creation of “Diella”, an AI-powered minister for public procurement. Unlike the Northern Irish controversy, this initiative was embraced by Albania’s Prime Minister Edi Rama as a bold anti-corruption measure.
Diella — whose name means "sun" in Albanian — is not human, nor constitutionally recognised, but serves as a virtual overseer of public tenders. Rama's goal: make procurement "100% free of corruption," arguing that AI can process bids impartially and eliminate political interference.
Critics called the move “ridiculous” and “unconstitutional,” while others saw a genuine test of whether automation could enhance transparency. Dr Andi Hoxhaj of King’s College London noted that properly designed algorithms could make contracting faster and more auditable — provided they remain open to scrutiny.
Rama admits there’s an element of spectacle but insists symbolism matters: Diella, he says, “puts pressure on other ministers to think differently.” In other words, Albania’s first “AI minister” is both performance and prototype.
Across these stories runs a shared tension: politicians are experimenting with AI to enhance efficiency or image, yet each case reveals public discomfort when algorithms cross into duties associated with judgement, empathy, or accountability.
The British MP’s digital double reframes representation as perpetual availability. Givan’s denial shows how the mere suspicion of AI authorship can delegitimise human agency. And Albania’s Diella illustrates both the appeal and the risk of automating moral responsibility under the banner of reform.
AI’s attraction for politics is clear. It promises consistency, speed, and incorruptibility — qualities often absent from human institutions. But governance does not rely on efficiency alone; it depends on trust and the perceived authenticity of leadership. A chatbot can answer a question, but it cannot stand for anyone in the civic sense of the term.
What emerges is a paradox: AI can improve transparency, yet its adoption in political roles can erode legitimacy if citizens feel replaced rather than served.
When Rama’s administration deploys Diella to track tenders, the algorithm is framed as a tool — a mechanism to enforce fair process. When a politician deploys an AI clone to communicate, the line blurs: the tool begins to speak as the elected person. That subtle difference transforms a technical instrument into a political actor.
As AI systems become more conversational and credible, the temptation to delegate public-facing work — social media posts, constituent replies, even debate preparation — will grow. Future parliaments may confront not whether AI can help, but how far it can substitute. The answer will depend less on software than on the constitutional and ethical boundaries societies are willing to enforce.
The three BBC stories capture politics at a hinge moment: experimenting with automation while fearing its implications. The Albanian government treats AI as a transparency mechanism; the Northern Ireland Assembly debates its legitimacy; and the UK MP frames it as outreach. Together, they hint at a near-future democracy mediated by code.
The question is no longer whether AI belongs in politics — it already does — but whether politicians can use it without surrendering the human accountability that legitimises them. For now, AI may help manage information or detect corruption. But when it begins to speak for politicians, democracy risks gaining efficiency at the cost of authenticity.
As these early experiments show, the greatest challenge for AI in politics is not technical but constitutional: deciding where representation ends and replication begins.