27 September 2023

Brain strain: Why there’s still a way to go for artificial general intelligence


Ben Goertzel* says AI that can generalise to unanticipated domains and confront the world as an autonomous agent is still part of the road ahead.


Photo: Franck V.

In the 15 years since I first introduced the term “artificial general intelligence” (AGI), the AI field has advanced tremendously.

We now have self-driving cars, automated facial recognition, machine translation and so much more.

However, these achievements remain essentially in the domain of “narrow AI” — AI that carries out tasks based on specifically supplied data or rules.

AI that can generalise to unanticipated domains and confront the world as an autonomous agent is still part of the road ahead.

The question remains: what do we need to do to get from today’s narrow AI tools, which have become mainstream in business and society, to the AGI envisioned by futurists and science fiction authors?

The diverse proto-AGI landscape

There is nothing resembling an agreement among experts on the path to AGI.

For example, Google DeepMind co-founder Demis Hassabis has long been a fan of relatively closely brain-inspired approaches to AGI.

The OpenCog AGI-oriented project that I co-founded in 2008 is grounded in a less brain-oriented approach: it combines neural networks and evolutionary program learning with a symbolic logic engine, rather than attempting to emulate the brain.

The bottom line is, just as we have many different workable approaches to manned flight, there may be many viable paths to AGI.

Today’s AGI pioneers are proceeding largely via experiment and intuition, in part because we don’t yet know enough useful theoretical laws of general intelligence to proceed with AGI engineering in a mainly theory-guided way.

Four (not actually so) simple steps from here to AGI

In a recent talk, I outlined “Four Simple Steps to Human-Level AGI.”

The title was intended as dry humour, as none of the steps is simple.

But I do believe they are achievable within our lifetime.

I don’t believe we need radically better hardware, nor radically different algorithms, nor new kinds of sensors or actuators.

We just need to use our computers and algorithms in a slightly more judicious way by doing the following.

1) Make cognitive synergy practical

We have a lot of powerful AI algorithms today, but we don’t use them together in sufficiently sophisticated ways, so we lose much of the synergetic intelligence that could come from using them together.

By contrast, the different components in the human brain are tuned to work together with exquisite feedback and interplay.

We need to make systems that enable richer and more thorough coordination of different AI agents at various levels into one complex, adaptive AI network.
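To make the idea concrete, here is a deliberately toy sketch of one such coordination pattern: a blackboard-style loop in which independent components build on each other's partial results through a shared workspace. The component names and the workspace protocol are invented for illustration; they are not OpenCog's actual APIs.

```python
# A minimal sketch of "cognitive synergy" as a blackboard architecture:
# several independent AI components read from and write to a shared
# workspace, so each one's partial results guide the others.
# All component names here are hypothetical stubs, not real AGI modules.

from typing import Callable, Dict, List

Workspace = Dict[str, object]
Agent = Callable[[Workspace], None]

def perceiver(ws: Workspace) -> None:
    # Stub "perception": turns raw input into symbols other agents can use.
    if "raw_input" in ws and "percepts" not in ws:
        ws["percepts"] = str(ws["raw_input"]).lower().split()

def reasoner(ws: Workspace) -> None:
    # Stub "reasoning": draws a conclusion from the perceiver's output.
    if "percepts" in ws and "danger" not in ws:
        ws["danger"] = "fire" in ws["percepts"]

def planner(ws: Workspace) -> None:
    # Stub "planning": acts on the reasoner's conclusion.
    if "danger" in ws and "plan" not in ws:
        ws["plan"] = "evacuate" if ws["danger"] else "continue"

def run_synergy(agents: List[Agent], ws: Workspace, rounds: int = 5) -> Workspace:
    # Each round, every agent gets a chance to build on the others' work.
    for _ in range(rounds):
        for agent in agents:
            agent(ws)
    return ws

if __name__ == "__main__":
    ws = run_synergy([planner, reasoner, perceiver],
                     {"raw_input": "Smoke and FIRE ahead"})
    print(ws["plan"])  # -> "evacuate", even with agents listed in the "wrong" order
```

The point of the sketch is the interplay: the planner can only act once the reasoner has built on the perceiver's output, and the repeated loop lets that happen regardless of the order in which the agents run.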

2) Bridge symbolic and subsymbolic AI

I believe AGI will most effectively be achieved by bridging the algorithms used for low-level intelligence, such as perception and movement (e.g., deep neural networks), with those used for high-level abstract reasoning (such as logic engines).

Deep neural networks have had amazing successes lately in processing multiple sorts of data, including images, video, audio, and to a lesser extent, text.

However, it is becoming increasingly clear that these particular neural net architectures are not quite right for handling abstract knowledge.

My own intuition is that the shortest path to AGI will be to use deep neural nets for what they’re best at and to hybridise them with more abstract AI methods like logic systems, to handle more advanced aspects of human-like cognition.
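As a rough illustration of what such a hybrid might look like, here is a minimal sketch in which a stand-in for a neural perception layer emits uncertain predicates, and a tiny forward-chaining rule engine reasons over them. The stub classifier, the rules, and the weakest-link confidence scheme are all assumptions made for the example, not a description of any particular system.

```python
# A toy sketch of the neuro-symbolic hybrid idea: a (stubbed) perception
# layer emits predicate confidences, and a small rule engine chains over them.
# A real system would put a trained deep network behind neural_perception().

from typing import Dict, List, Tuple

def neural_perception(image_id: str) -> Dict[str, float]:
    # Stand-in for a deep net: maps an input to predicate confidences.
    fake_outputs = {
        "img1": {"has_fur": 0.92, "barks": 0.88},
        "img2": {"has_fur": 0.95, "meows": 0.90},
    }
    return fake_outputs.get(image_id, {})

# Rules as (premises, conclusion); these are invented for the example.
RULES: List[Tuple[List[str], str]] = [
    (["has_fur", "barks"], "is_dog"),
    (["has_fur", "meows"], "is_cat"),
    (["is_dog"], "is_mammal"),
    (["is_cat"], "is_mammal"),
]

def forward_chain(facts: Dict[str, float], threshold: float = 0.5) -> Dict[str, float]:
    # Apply rules to a fixpoint, propagating the weakest-link confidence.
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if all(derived.get(p, 0.0) > threshold for p in premises):
                conf = min(derived[p] for p in premises)
                if conf > derived.get(conclusion, 0.0):
                    derived[conclusion] = conf
                    changed = True
    return derived

print(forward_chain(neural_perception("img1")))
# {'has_fur': 0.92, 'barks': 0.88, 'is_dog': 0.88, 'is_mammal': 0.88}
```

Here the symbolic layer does what the neural layer cannot: it reaches the abstract conclusion "is_mammal" by chaining rules, while still respecting the uncertainty of the perceptual evidence beneath it.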

3) Whole-organism architecture

Humans are bodies as much as minds, and so achieving human-like AGI will require embedding AI systems in physical systems capable of interacting with the everyday human world in nuanced ways.

General intelligence does not require a human-like body, nor any specific body.

However, if we want to create an AGI that manifests human-like cognition in particular and that can understand and relate to humans, this AGI needs to have a sense of the peculiar mix of cognition, emotion, socialisation, perception, and movement that characterises human reality.

By far the best way for an AGI to get such a sense is for it to have the ability to occupy a body that at least vaguely resembles the human body.

The need for whole-organism architecture ties in with the importance of experiential learning for AGI.

In the mind of a human baby, all sorts of data are mixed up in a complex way, and the objectives need to be figured out along with the categories, structures, and dynamics in the world.

Ultimately, an AGI will need to do this sort of foundational learning for itself as well.
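As a crude picture of what whole-organism coupling means in code, the sketch below ties perception, an affect-like internal state, and action selection into one loop, so that behaviour depends on body state rather than on perception alone. The environment, the signals, and the update rules are all invented for illustration.

```python
# A minimal sketch of a whole-organism control loop: perception, an
# emotion-like internal state, and action are updated together rather
# than in isolation. Everything here is a hypothetical stand-in for a
# real robot body and environment.

import random

class EmbodiedAgent:
    def __init__(self):
        self.comfort = 0.5  # crude internal "affect" coupled to cognition

    def sense(self, world):
        return world["temperature"]

    def act(self, temperature):
        # Behaviour depends on body state, not on perception alone.
        if temperature > 30 and self.comfort < 0.6:
            return "seek_shade"
        return "explore"

    def update_internal_state(self, temperature):
        # Experiential-learning stub: internal state drifts with experience.
        self.comfort += 0.1 if 15 <= temperature <= 25 else -0.1
        self.comfort = max(0.0, min(1.0, self.comfort))

agent = EmbodiedAgent()
for step in range(5):
    world = {"temperature": random.uniform(10, 40)}
    t = agent.sense(world)
    action = agent.act(t)
    agent.update_internal_state(t)
    print(f"step {step}: temp={t:.1f}, comfort={agent.comfort:.1f}, action={action}")
```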

4) Scalable meta-learning

AGI needs not just learning but also learning how to learn.

An AGI will need to apply its reasoning and learning algorithms recursively to itself so as to automatically improve its functionality.

Ultimately, the ability to apply learning to improve learning should allow AGIs to progress far beyond human capability.

At the moment, meta-learning remains a difficult but critical research pursuit.
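At its simplest, meta-learning can be pictured as an outer loop that adjusts parameters of the learning process itself. The toy sketch below tunes an inner learner's learning rate by evaluating how well learning goes with each candidate; real meta-learning systems are far more sophisticated, and the task and numbers here are purely illustrative.

```python
# A toy sketch of "learning to learn": an outer loop adjusts a learning
# rate (a parameter of the learning process itself) based on how well the
# inner learner performs with it.

def inner_learn(lr: float, steps: int = 20) -> float:
    # Inner learner: gradient descent on f(w) = (w - 3)^2, starting at w = 0.
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
    return (w - 3) ** 2  # final loss

def meta_learn(candidate_lrs):
    # Outer "meta" loop: evaluate each way of learning, keep the best one.
    return min(candidate_lrs, key=inner_learn)

best = meta_learn([0.001, 0.01, 0.1, 0.5, 1.1])
print(f"best learning rate: {best}, final loss: {inner_learn(best):.6f}")
```

Even in this trivial setting, the outer loop discovers something the inner learner cannot: which way of learning works best. Scaling that recursive self-improvement up to rich learning algorithms is the hard, open part.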

Toward beneficial general intelligence

If my perspective on AGI is correct, then once each of these four aspects is advanced beyond the current state, we’re going to be there — AGI at the human level and beyond.

I find this prospect tremendously exciting, and just a little scary.

Some observers, including big names like Stephen Hawking and Elon Musk, have expressed more fear than excitement.

I think nearly everyone who is serious about AGI development has put a lot of thought into the mitigation of the relevant risks.

One conclusion I have come to via my work on AI and robotics is: if we want our AGIs to absorb and understand human culture and values, the best approach will be to embed these AGIs in shared social and emotional contexts with people.

We must also work to ensure AI develops in a way that is egalitarian and participatory across the world economy, rather than in a manner driven by the bottom lines of large corporations or the military.

If an AGI emerges from a participatory “economy of minds” of this nature, it is more likely to have an ethical and inclusive mindset coming out of the gate.

We are venturing into unknown territory here.

Let us do our best to carry out this next stage of our collective voyage in a manner that is wise and cooperative as well as clever and fascinating.

* Dr Ben Goertzel is the CEO of SingularityNET, a blockchain-based AI platform company, and Chief Scientist at Hanson Robotics. He tweets at @bengoertzel. His website is goertzel.org.

This article first appeared at singularityhub.com.
