The first indication of results on British election nights has earned an unusually high reputation. Those behind it say that’s because of a big decision 20 years ago.
A vote count in 2019. British general elections are conducted entirely with paper ballots, counted by hand in the hours after polls close at 10 p.m. Credit: Mary Turner for The New York Times
On election night, when can you start to know who’s won?
In Britain for the past two decades, there’s been a startlingly good answer just after the polls close, at 10 p.m.
That’s when three major broadcasters reveal the results of the national exit poll. The work of a team of statisticians and political scientists who swing into gear a few hours earlier, it has in recent years produced an increasingly accurate picture of the election results before the votes have been counted.
In the past five British general elections, the exit poll has predicted how many of the roughly 650 parliamentary seats would be claimed by the winning party to within an average of four seats. Last time, in 2019, it missed the winning party’s total by just three seats.
Here’s a guide to what to expect, and how it works.
Let’s start at the beginning. What’s an exit poll?
It’s a survey of voters soon after they’ve voted. The British one looks for voters literally as they exit a polling place: Fieldworkers ask over 20,000 people at about 130 voting sites across the country to fill in replica ballot papers. Since 2005, there’s been a single exit poll at each British general election, paid for by three major broadcasters, the BBC, ITV and Sky.
How have the British ones been so accurate?
They weren’t always. In the 1992 general election, the BBC’s exit poll predicted that no party would win an overall majority of parliamentary seats, before early results quickly showed that the Conservatives were on course to retain control. Exit polls in some earlier elections were even further off.
[Chart: Exit polls have become increasingly accurate in Britain. Shown: number of seats won by the largest party.]
The key change, those involved say, came in the 2000s, when the broadcasters pooled their resources behind a statistical approach pioneered by the academics David Firth and John Curtice. Its success has helped turn Professor Curtice into a star of election broadcasts.
Earlier exit polls sought to assemble a representative sample of voting places at each election, using the vote totals in the sample to predict shares for each party elsewhere.
The new-style poll still looks for a representative sample, but it also returns, as far as possible, to the same polling places each time. Now, instead of focusing on the totals, the researchers can make direct comparisons and examine how the vote has changed.
Using statistical models, they then project how the changes they find will play out in districts across the country, based on further analysis of the demographics and the previous election results in each area.
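To make the idea concrete, here is a deliberately simplified sketch in Python. It is not the actual model used by the exit-poll team, which conditions the estimated changes on demographics and each seat’s previous results; it only shows the basic logic of measuring the change at the sampled sites and then applying it to last time’s results, with made-up party names and numbers.

```python
# Illustrative sketch only (not the exit-poll team's model): estimate each
# party's change in vote share at the repeated polling places, apply that
# change to every constituency's previous result, and count projected winners.

from collections import Counter

# Hypothetical exit-poll data: vote shares at the same polling places,
# this election versus the previous one (numbers invented for illustration).
previous_sample = {"PartyA": 0.42, "PartyB": 0.33, "PartyC": 0.12}
current_sample = {"PartyA": 0.36, "PartyB": 0.40, "PartyC": 0.11}

# Estimated change ("swing") for each party.
change = {p: current_sample[p] - previous_sample[p] for p in previous_sample}

# Hypothetical previous results in a handful of constituencies.
previous_results = {
    "Seat 1": {"PartyA": 0.48, "PartyB": 0.30, "PartyC": 0.15},
    "Seat 2": {"PartyA": 0.38, "PartyB": 0.41, "PartyC": 0.14},
    "Seat 3": {"PartyA": 0.35, "PartyB": 0.33, "PartyC": 0.25},
}

# Apply the estimated change to each constituency and pick the local winner.
projected_winners = Counter()
for seat, shares in previous_results.items():
    projected = {p: shares[p] + change.get(p, 0.0) for p in shares}
    projected_winners[max(projected, key=projected.get)] += 1

print(dict(projected_winners))  # projected seat totals by party
```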
The focus on the same locations is the critical thing, according to Jouni Kuha, a professor of social statistics at the London School of Economics who has worked on the exit-polling team since 2010.
“There’s less noise in the data when you look at the changes than if you were trying to estimate the shares themselves,” he said in a telephone interview.
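A tiny simulation can illustrate that point. In the sketch below, again with entirely invented numbers, each polling place has a persistent local quirk: estimating the national share directly inherits that quirk, while differencing this election against the previous one at the same places largely cancels it out.

```python
# Purely illustrative simulation of why tracking change at the same sites is
# less noisy than estimating vote shares outright: persistent local quirks
# cancel when each site is differenced against its own previous result.

import random

random.seed(1)

NATIONAL_PREV, NATIONAL_NOW = 0.40, 0.34   # assumed true national shares, then and now
N_SITES, N_TRIALS = 130, 2000

errors_level, errors_change = [], []
for _ in range(N_TRIALS):
    quirks = [random.gauss(0, 0.05) for _ in range(N_SITES)]  # persistent local bias per site
    prev = [NATIONAL_PREV + q + random.gauss(0, 0.01) for q in quirks]
    now = [NATIONAL_NOW + q + random.gauss(0, 0.01) for q in quirks]

    level_estimate = sum(now) / N_SITES                                 # estimate the share directly
    change_estimate = sum(n - p for n, p in zip(now, prev)) / N_SITES   # estimate the change

    errors_level.append(abs(level_estimate - NATIONAL_NOW))
    errors_change.append(abs(change_estimate - (NATIONAL_NOW - NATIONAL_PREV)))

print("mean error, estimating the share: ", sum(errors_level) / N_TRIALS)
print("mean error, estimating the change:", sum(errors_change) / N_TRIALS)
```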
According to Professor Firth, not much has changed since the rethink of the early 2000s. “Even the software that I wrote back in 2001-2005 is still being used,” he said in an email.
So what could go wrong?
As with all statistical estimates, the British exit poll comes with a margin of error: about 20 seats.
In a tight race, 20 seats can be a lot. In 2015, after five years in a coalition government with the centrist Liberal Democrats, the center-right Conservatives unexpectedly won a small parliamentary majority. The poll that year underestimated their performance by 15 seats — within the expected margin of error, but enough to wrongly suggest that they might still need another party’s help to govern.
Opinion polls suggest this year’s race will not be close. Still, an element of luck remains: there is always the possibility that the selected polling stations turn out to be unrepresentative.
“People think there is some magic,” Professor Curtice told The New York Times recently. “But we are only as good as the data.”
In 2019, writing just before the exit poll coped with a major shift in Britain’s electoral map, Professor Firth noted: “There is nothing in the new methods that guarantees such freakish accuracy!”
The greatest challenge is time pressure. In Britain, most people vote in person on Election Day, and it’s a working day, so there’s a surge of votes in the early evening. That leaves a small window before 10 p.m. for the data to be collated and analyzed.
Redistricting across much of the country since 2019 could also prove to be a difficulty this time around.
Why doesn’t everywhere have an exit poll like this?
U.S. experts approach exit polls with caution, and there are good reasons for that.
America’s main exit poll, conducted by a consortium of news organizations — mainly broadcasters — and Edison Research, seeks to meet a broader set of aims under a significantly tougher set of conditions.
Instead of a single question on a mock ballot paper, the voters surveyed typically get 20 questions that gather demographic and issue data. On election night, the results are used to help project winners, but they also feed a wider analysis of why people voted the way they did.
And there’s a major barrier to replicating the British approach: Absentee and early voting is far more common in the United States. About 41 percent of votes were cast that way in 2016 and 70 percent in 2020, compared with 21 percent at Britain’s last election. The U.S. exit poll reflects this by using a phone, email and text survey as well as talking to voters in person.
“As much as our work in 2004, 2016, and 2020 has taken hits for specific errors in specific states and races, the overall average error in surveys is less than it was decades ago,” Joe Lenski, co-founder of Edison Research, noted in a 2021 interview with the American Enterprise Institute. “The real issue is educating about the kind of precision you can and can’t demand from these data.”
Complaints about exit polls are even louder elsewhere. India’s general election this year saw major stock market volatility and claims of electoral malpractice from the opposition after exit polls wrongly predicted a large majority for the incumbent B.J.P. Instead, the party was forced into a coalition government.
What difference does it make having a really good exit poll?
Election night itself is usually less turbulent than it might otherwise be, and a slice of viewers in Britain switch off the TV coverage as soon as the exit poll has been announced.
But it can still yield entertaining moments. During the 2015 BBC election broadcast, the former Liberal Democrat leader Paddy Ashdown poured scorn on the projection that his party would be left with only 10 seats, down from 57. “If this exit poll is right,” he said, “I’ll publicly eat my hat on your program.”
In the end, the Liberal Democrats won eight seats, and the BBC presented Mr. Ashdown with a hat-shaped cake.
For Professor Kuha and team, the key moments have already taken place in the minutes before the 10 p.m. deadline. “It’s a very strange experience for an academic who is used to very different time scales,” he said. “So it’s sort of stressful but thrilling.”