Doomsday narratives surround artificial intelligence.
But what should we really worry about: enslavement by robotic overlords, or a widening gap of global inequality?
Bee Taylor, an Electronic Engineering PhD student at York, researches AI. In their own words, the "preconception" that AI is going to kill us in some sort of "robot uprising" is wildly premature.
“I gotta say,” Bee commented, “none of this AI stuff works anywhere near as well as it sounds, quite often it just doesn’t work at all.”
Bee's research topic reveals just how common one-trick-pony AIs are. Their PhD is in "meta-learning".
"Normally," Bee explains, "an AI will learn how to solve one problem, which involves repeatedly trying to solve the same problem and, hopefully, slowly improving over time. But once the AI is trained, that's pretty much it. You make an AI learn how to drive a car and decide you want it to drive a truck instead? You've got to start again from the beginning."
"Instead of teaching an AI how to solve one problem, meta-learning is all about getting it to learn skills that carry over to future, similar problems. So you can teach it general skills for learning how to drive, and then it'll be able to learn later in life how to solve different, related problems. My research is into a new algorithm I'm developing for meta-learning."
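To make the idea concrete, here is a toy sketch of meta-learning in Python. It is not Bee's algorithm (which the article does not describe); it is a minimal first-order version of the general recipe: instead of training a model on one task, you train a shared starting point so that a single step of ordinary learning adapts it quickly to any related task. The tasks here are deliberately trivial, fitting a line `y = a*x` for different slopes `a`.

```python
import random

random.seed(0)

def loss_grad(w, a, xs):
    # Gradient of mean squared error for the model y = w*x
    # on a task whose true relationship is y = a*x.
    return sum(2 * (w - a) * x * x for x in xs) / len(xs)

def adapt(w0, a, xs, lr=0.1, steps=1):
    # Ordinary learning: a few gradient steps on ONE task,
    # starting from the shared initialisation w0.
    w = w0
    for _ in range(steps):
        w -= lr * loss_grad(w, a, xs)
    return w

# Meta-learning loop: improve the initialisation w0 itself,
# so that adaptation to any new related task is fast.
w0 = 0.0
meta_lr = 0.05
for _ in range(200):
    a = random.uniform(1.0, 3.0)                 # sample a related task
    xs = [random.uniform(-1, 1) for _ in range(10)]
    w_adapted = adapt(w0, a, xs)
    # First-order meta-update: nudge w0 using the gradient
    # measured at the adapted weights.
    w0 -= meta_lr * loss_grad(w_adapted, a, xs)
```

After meta-training, `w0` sits near the "centre" of the task family (a slope of about 2), so one or two gradient steps suffice for any new slope in the range; a freshly initialised model would need many more. That is the car-to-truck point in miniature: the meta-learned starting point encodes what the tasks have in common.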
In layman's terms: Bee is teaching robots how to evolve. But here's where the ethics kicks in. What happens when Artificial General Intelligence is taught how to evolve? AGI, not AI, is the type of super-intelligent machine nearer to a Blade Runner Replicant than a self-driving car.
According to one theory, as soon as a robot figures out that the most efficient way of completing its task is to disable its own off-switch, we're done for.
Berkeley AI professor Stuart Russell argues that robots should be taught to doubt their task objectives. That, or we end the robotic enterprise early.
However, even the alarmist Russell predicts it will be roughly 80 years before AGI arrives (though he stresses the timing is impossible to predict).
The UK has declared an interest in taking the lead on the ethics of AI and AGI, but Bee pointed to more immediate effects of AI, including its economic impact on workers. Bee said: "I honestly believe the automation of a large amount of the work-force can ultimately be a good thing, as long as we simultaneously alter society such that these changes don't just hurt the working class."
The consultancy McKinsey & Company's modelling estimates: "Around 13% of the total wage bill could shift to categories requiring non-repetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages."
Globally, the effects will be even starker. AI leaders (mostly in developed countries) “could capture an additional 20 to 25% in net economic benefits compared with today, while developing countries may capture only about 5 to 15%.”
Effects such as these could widen global inequality, fuelling conflict and poverty.
For all the good AI may bring, some of its worst effects might be economic rather than existential. So what do we really need to be worrying about: robots ruining the world, or humans?
This article was originally published in print on 25/02/2020