Why, then, do we not have greater faith in this technology?
One reason, argues Cambridge University professor Gina Neff, is that we have a collectively very strong sense of “fairness.”
“Right now, in many areas where AI is influencing our lives, we believe that humans are much more adept at understanding the context than the machine is,” she said.
“The machine makes decisions based on the set of rules it has been programmed to apply. What’s the right call might not feel like the fair call, but people are really good at weighing multiple values and outside considerations as well.”
In Prof Neff’s opinion, it isn’t fair to frame the debate as a question of whether humans or machines are “better.”
She said, “We have to get the intersection right between people and systems.”
To make the best decisions, we must use the best of both.
Human oversight is the foundation of what is referred to as “responsible” AI: using the tech as fairly and safely as possible, in other words.
Someone, somewhere, must be monitoring what the machines are doing.
Not that this always works in football, where the video assistant referee, VAR, has been drawing controversy for years.
In 2024, for instance, a crucial goal in a match between Tottenham and Liverpool was ruled offside when it wasn’t, unleashing a barrage of fury; the decision was officially declared a “significant human error.”
The Premier League said VAR was 96.4% accurate in “key match incidents” last season, although chief football officer Tony Scholes acknowledged that “one mistake can cost clubs.” Norway is reportedly on the verge of abandoning it.
A perceived lack of human control contributes to our reluctance to rely on technology more generally, according to entrepreneur Azeem Azhar, who writes the tech newsletter The Exponential View.
“We don’t feel we have agency over its shape, nature, and direction,” he said in an interview with the World Economic Forum.
“It forces us to change our own beliefs very quickly, because systems we previously used don’t work as well in the new world of this new technology.”
Our apprehension about technology extends beyond sport. The first time I watched a demo of an early AI tool trained to spot early signs of cancer in scans (years before today’s NHS trials), it was incredibly accurate, far superior to human radiologists.
The problem, according to its developers, was that patients diagnosed with cancer did not want to hear that a machine had found it. They wanted the verdict confirmed by human doctors, ideally several of them, before they would accept it.
It is a similar story with autonomous vehicles, which have clocked up millions of miles on roads in countries such as China and the US, with data indicating they have fewer accidents than human drivers. Yet a YouGov survey last year suggested that 37% of Brits would feel “very unsafe” inside one.
I didn’t feel unsafe when I rode in one, although I did feel a little bored once the novelty had faded. And perhaps that is at the heart of the debate over refereeing technology in sport.
The question, according to Golf Monthly’s editor Bill Elliott, is what [sports organisers] are trying to achieve, and what they are achieving, by using technology.