27 September 2023

Intelligence test: Will Artificial Intelligence ever be the real thing?


James Vincent* says a new book reveals just how divided the experts are on when (or if) artificial general intelligence will be achieved.

At the heart of the discipline of artificial intelligence (AI) is the idea that one day we’ll be able to build a machine that’s as smart as a human.

Such a system is often referred to as an artificial general intelligence, or AGI, a name that distinguishes the concept from the broader field of study.

It also makes it clear that true AI possesses intelligence that is both broad and adaptable.

To date, we’ve built countless systems that are superhuman at specific tasks, but none that can match a rat when it comes to general brain power.

But, despite the centrality of this idea to the field of AI, there’s little agreement among researchers as to when this feat might be achievable.

In a new book titled Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent people working in AI today, including DeepMind CEO Demis Hassabis, Google AI chief Jeff Dean, and Stanford University AI director Fei-Fei Li.

Ford asked each interviewee to name the year by which there would be at least a 50 per cent chance of AGI being built.

Of the 23 people Ford interviewed, only 18 answered, and of those, only two went on the record.

Interestingly, those two provided the most extreme answers: Ray Kurzweil, a futurist and Director of Engineering at Google, suggested the date of 2029, and Rodney Brooks, roboticist and co-founder of iRobot, went for 2200.

The rest of the guesses were scattered between these two extremes, with the average estimate being 2099, roughly 80 years after the interviews took place.
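The averaging Ford describes is simple arithmetic. A minimal Python sketch, using the two on-record guesses plus hypothetical stand-ins for the 16 off-record answers (the actual figures were not published individually):

```python
# Illustrative only: Kurzweil (2029) and Brooks (2200) are the two
# on-record guesses; the other 16 values are hypothetical stand-ins,
# chosen so the group averages to 2099 as reported.
on_record = [2029, 2200]
hypothetical_others = [2040, 2050, 2060, 2068, 2075, 2080, 2085,
                       2090, 2095, 2100, 2110, 2120, 2130, 2140,
                       2150, 2160]

guesses = on_record + hypothetical_others
mean_year = sum(guesses) / len(guesses)  # arithmetic mean of the 18 answers
print(round(mean_year))  # -> 2099
```

Note how heavily a single outlier like Brooks's 2200 pulls the mean upward, which is one reason surveys of this kind sometimes report a median instead.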

In other words: AGI is a comfortable distance away, though you might live to see it happen.

This is far from the first survey of AI researchers on this topic, but it offers a rare snapshot of elite opinion in a field that is currently reshaping the world.

Ford says it’s particularly interesting that the estimates he gathered skew toward longer time frames than those of earlier surveys, which tend to cluster around the 30-year mark.

Ford says his interviews also revealed an interesting divide in expert opinion: not over when AGI might be built, but whether it is even possible using current methods.

Some of the researchers Ford spoke to said we have most of the basic tools we need; others said we’re still missing a great number of the fundamental breakthroughs needed.

Notably, says Ford, researchers whose work was grounded in deep learning (the subfield of AI that’s fuelled this recent boom) tended to think that future progress would be made using neural networks, the workhorse of contemporary AI.

Those with a background in other parts of AI felt that additional approaches, like symbolic logic, would be needed to build AGI.

“Some people in the deep learning camp are very disparaging of trying to directly engineer something like common sense in an AI,” says Ford.

“One of them said it was like trying to stick bits of information directly into a brain.”

All of Ford’s interviewees noted the limitations of current AI systems and mentioned key skills they’ve yet to master.

These include transfer learning, where knowledge in one domain is applied to another, and unsupervised learning, where systems learn without human direction.

Interviewees also stressed the sheer impossibility of making predictions in a field like AI, where research has come in fits and starts and where key technologies have reached their full potential only decades after they were first discovered.

Stuart Russell, a professor at the University of California, Berkeley, who wrote one of the foundational textbooks on AI, said the sort of breakthroughs needed to create AGI have “nothing to do with bigger datasets or faster machines,” so they can’t be easily mapped out.

“I always tell the story of what happened in nuclear physics,” Russell said in his interview.

“The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms.”

“But … the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons!”

“In a similar way, it feels quite futile for me to make a quantitative prediction about when these breakthroughs in AGI will arrive.”

Ford says this basic unknowability is probably one of the reasons the people he talked to were so reluctant to put their names to their guesses.

“Those who did choose shorter time frames are probably concerned about being held to them,” he says.

Opinions were also mixed on the dangers posed by AGI.

Oxford University professor Nick Bostrom had strong words about the potential danger, saying AI is a greater threat to the existence of the human race than climate change.

He and others said that one of the biggest problems in this domain was value alignment — teaching an AGI system to have the same values as humans.

“The concern is not that [AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel,” said Bostrom, “but rather that it would be very competently pursuing an objective that differs from what we really want.”

Most interviewees said the question of existential threat was extremely distant compared to problems like economic disruption and the use of advanced automation in war.

Barbara Grosz, a Harvard AI professor, said issues of AGI ethics were mostly “a distraction.”

“The real point is we have any number of ethical issues right now, with the AI systems we have,” said Grosz.

“It’s unfortunate to distract attention from those because of scary futuristic scenarios.”

This sort of back-and-forth, says Ford, is perhaps the most important takeaway from his book: there really are no easy answers in a field as complex as AI.

Even the most elite scientists disagree about the fundamental questions and challenges.

“The whole field is so unpredictable,” says Ford.

So what hard truths can we cling to?

Only one, says Ford.

Whatever happens next with AI, “it’s going to be very disruptive.”

* James Vincent is a reporter for The Verge. He tweets at @jjvincent.

This article first appeared at www.theverge.com.
