
Artificial General Intelligence – Scary or exciting?

Words by Alex Matheson

The concept of artificial intelligence (AI) is a hot-button topic these days. From sci-fi writers to world-renowned scientists, it seems like everyone has strong opinions on the subject, and not all of them are positive. Let’s take a look at what constitutes AI and what sets its more divisive offshoot, Artificial General Intelligence (AGI), apart.

What is the difference between AI and AGI?

Put simply, AI is designed to solve specific, pre-learned problems. AGI is designed to mimic human cognitive abilities and solve any problem a human can to an equal degree or better.

The term AI gets thrown around a lot, and often incorrectly. These days it’s used as a catch-all for anything within the sphere of machine learning. But in essence, most of what we call AI mimics intelligence rather than being intelligent.

To paraphrase Richard Feynman in a 1985 lecture on AI, we won’t really have achieved AGI until a machine can think as well as, or better than, the best of people.

What are examples of AGI?

Right now, there are no extant examples of AGI. It exists purely in theory and fiction. While we have artificial intelligences that can do certain things very well, sometimes better than their human counterparts, we don’t have any systems that can think independently in a way that matches human thought.

Instead of looking at examples that exist, we have to look at the potential types of AGI that might exist if we can reach the level of technological advancement required.

Limited memory machines

These actually do exist. In fact, almost anything that’s accurately described as having artificial intelligence these days falls into this category. Rather than just reacting to information provided, limited memory machines store information they’ve been given and can recall it.

This mimics learning to a certain extent, but is still limited in its functionality. While these machines can learn, in a way, they can only learn based on reference models written for them by humans. They have no ability to think outside of the box we’ve made for them.
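To make that concrete, here’s a minimal, purely illustrative sketch of a “limited memory machine”: a toy classifier that remembers the labelled examples it has been shown and recalls the closest one. The data, feature names, and labels are invented; the point is only that it can never step outside the feature box a human defined for it.

```python
# A toy "limited memory machine": it stores labelled examples it has seen
# and recalls the most similar one, but only within the fixed feature
# schema (temperature, humidity) that we, the humans, chose for it.
# Purely illustrative; the data and labels are made up.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class LimitedMemoryClassifier:
    def __init__(self):
        self.memory = []  # stored (features, label) pairs -- its "experience"

    def learn(self, features, label):
        self.memory.append((features, label))  # remember what it has been shown

    def predict(self, features):
        # Recall the closest stored example; it cannot reason outside the
        # feature box we defined, only compare within it.
        closest = min(self.memory, key=lambda m: distance(m[0], features))
        return closest[1]

model = LimitedMemoryClassifier()
model.learn((30, 0.2), "sunny")   # (temperature in °C, humidity)
model.learn((12, 0.9), "rainy")
print(model.predict((28, 0.3)))   # -> "sunny"
```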

Theory of mind machines

A theory of mind AI would theoretically be able to understand the entities it’s interacting with. This doesn’t necessarily mean it would have human level intelligence, more that it would be able to recognise that level of intelligence and interact with it in a way that seems human.

This is where the majority of AI research is focused right now. Think about the things you might have in your home, an Alexa or a Google Home for instance. These give the appearance of being theory of mind machines, but are really just input-output devices, similar to a voice-activated search engine.

If your Alexa could understand not only your question, but the reason you in particular were asking it, what the implications of giving you the answer might be, and how you might feel about the answer, that would be a theory of mind machine.
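To show the gap, here’s a deliberately crude, hypothetical sketch of the kind of input-output mapping today’s assistants effectively perform. The keywords and canned answers are invented; note there is no model of who is asking, why, or how they might feel about the answer.

```python
# A hypothetical, heavily simplified "voice assistant": it matches keywords
# in the request to canned answers. No understanding, no theory of mind --
# just lookup. All phrases and answers are invented for illustration.

CANNED_ANSWERS = {
    "weather": "It will be cloudy with a high of 14 degrees.",
    "time": "It is 7:45 pm.",
    "lights": "Okay, turning off the living room lights.",
}

def respond(utterance: str) -> str:
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in utterance.lower():
            return answer
    return "Sorry, I don't know that one."

print(respond("What's the weather like today?"))
```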

Self-aware machines

This would be the realisation of artificial general intelligence. A machine that can not only recognise a consciousness, but is also capable of recognising itself as a consciousness.

Beyond this level, the only advances would be in the power of the machine, from our perspective at least. A self-aware machine capable of understanding the things that you or I can understand would be one level. A machine capable of understanding the things that Einstein or Newton could would be another.

Is AGI possible?

The simple, and infuriating, answer is maybe. Nothing exists right now that comes close to realising the dream of AGI. There are plenty of things that mimic it, and even more that do weak AI very well.

Actual, self-aware artificial intelligence is at least a few technological leaps away from us. We have the materials for it. In fact, some of the physical components we work with today are far more efficient than grey matter when it comes to computing speed.

The complexity involved in human thought is so vast that we haven’t even scratched the surface yet. For the Atlas Podcast, we spoke with Steve Furber, the brain behind the SpiNNaker Project, a landmark project in neuromorphic computing. But even they, with one million processor cores at their disposal, can only match a mouse’s brain. Although we think that’s still quite impressive.
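To give a flavour of what “neuromorphic” means in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simple spiking-neuron models that platforms like SpiNNaker run in vast numbers in parallel. This is not SpiNNaker’s actual code, and the constants are arbitrary, chosen only to make the example fire.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks back
# toward a resting level, accumulates incoming current, and emits a "spike"
# when it crosses a threshold. Constants are arbitrary and illustrative.

REST, THRESHOLD, LEAK, DT = 0.0, 1.0, 0.1, 1.0

def simulate(input_current, steps=50):
    v = REST
    spikes = []
    for t in range(steps):
        v += DT * (-LEAK * (v - REST) + input_current)  # leak + input
        if v >= THRESHOLD:        # threshold crossed: spike, then reset
            spikes.append(t)
            v = REST
    return spikes

print(simulate(input_current=0.15))  # time steps at which the neuron fired
```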

Should we be afraid of AGI?

It’s a good question, and one that will elicit responses everywhere on the spectrum from ‘absolutely not’ to ‘yes, definitely’. Sci-fi has spent a lot of time telling us we should be terrified of realising true AI, from the scorched world of The Matrix to the killing machines of the Terminator franchise.

But there have been more benevolent fictional creations too. The “Minds” in Iain M. Banks’ Culture novels might be capricious, but they ultimately have humanity’s best interests at heart, for the most part.

In reality, AGI might be a reflection of the bright minds that helped build it, or it may take on the most malicious aspects of human intelligence. Or it may be something else entirely, indifferent to us, and in that indifference do things we might consider evil. Who knows?

One potential use for artificial general intelligence would be to create an intelligence within a simulated world. It’s a plausible application, and one that would certainly find uses across many scientific disciplines.

With a strong enough computer, you might be able to simulate many intelligences within a simulated world. With a Matryoshka brain powered by a star, or a suitably advanced quantum computer, you could probably simulate an entire planet’s worth of intelligent beings. A whole planet such as, perhaps, ours?

On that unsettling note, we’ll end our dive into the world of AGI. We hope you’ve enjoyed this little journey and please join us again for more interesting discussions that may, or may not, make you question your own existence!