‘Time for a reality check’: How close can artificial intelligence (AI) come to thinking like humans?

Earlier this month, DeepMind, a subsidiary of tech giant Alphabet, caused a stir in Silicon Valley when it announced Gato, perhaps the most versatile AI model in existence. Described as a “general agent,” Gato can perform more than 600 different tasks. It can drive a robot, caption images, identify objects in pictures, and more. It is perhaps the most advanced AI system on the planet that isn’t dedicated to a single job. And, for some computing experts, it is evidence that the industry is about to reach a long-awaited and intriguing milestone: artificial general intelligence (AGI).

Unlike regular AI, AGI won’t require massive data sets to learn a task. Whereas ordinary AI must be pre-trained or programmed to solve a specific set of problems, a general intelligence could learn through intuition and experience.

In theory, an AGI could learn anything a human can, given the same access to information. Basically, if you put an AGI on a chip and then put that chip into a robot, the robot could learn to play tennis the same way you or I do: by swinging a racquet and getting a feel for the game. That doesn’t necessarily mean the robot would be sentient or capable of cognition. It wouldn’t have thoughts or emotions; it would just be really good at learning to do new tasks without human aid.

This would be huge for humanity. Think of all you could accomplish if you had a machine with the intellectual capacity of a human and the loyalty of a reliable canine companion, one that could be physically adapted to suit any purpose. That’s the promise of artificial general intelligence. It’s C-3PO minus the emotions, Lt. Commander Data minus the curiosity, and Rosey the Robot minus the personality. In the hands of the right developers, it could epitomize the idea of human-centered AI.

But how close is the dream of artificial general intelligence? And is Gato really a step toward it?

For a certain group of scientists and developers (I’ll call them the “Scaling-Uber-Alles” crowd, adopting a term coined by world-renowned AI expert Gary Marcus), Gato and similar systems based on deep-learning transformer models have already given us the blueprint for building AGI. Essentially, these transformers use enormous datasets and billions or trillions of adjustable parameters to predict what will happen next in a sequence.
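To make that idea concrete, here is a minimal, purely illustrative sketch of next-token prediction. It uses a bigram counting model with a made-up corpus rather than a transformer; real systems like Gato replace the count table with attention layers and billions of learned weights, but the core job, predicting what comes next based on previously seen data, is the same.

```python
# Toy illustration of next-token prediction (not Gato's architecture):
# a bigram model "learns" its parameters by counting which token
# follows which in training data, then predicts the most likely
# continuation. The corpus here is invented for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# "Training": estimate P(next | current) from observed pairs.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token: str) -> str:
    # Inference: return the continuation seen most often in training.
    candidates = follows.get(token)
    if not candidates:
        return "<unknown>"  # the model cannot go beyond its data
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the")
print(predict_next("dog"))  # -> "<unknown>" (never seen in training)
```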

The Scaling-Uber-Alles crowd, which includes such high-profile names as OpenAI’s Ilya Sutskever and the University of Texas at Austin’s Alex Dimakis, believes that transformers will inevitably lead to AGI; all that remains is to make them bigger and faster. As Nando de Freitas, a member of the team that created Gato, recently tweeted: “It’s all about scale now! Game over! It’s about making these models bigger, safer, more computationally efficient, faster at sampling, with smarter memory…” De Freitas and company understand that they’ll have to create new algorithms and architectures to support this growth, but they also seem to think that AGI will emerge on its own if we keep making models like Gato bigger.

Call me old-fashioned, but when a developer tells me their plan is to wait for AGI to magically emerge from the miasma of big data like a mudfish from primordial soup, I tend to think they’re skipping a few steps. Apparently, I’m not alone. A number of pundits and scientists, Marcus included, have argued that something fundamental is missing from the grandiose plans to build Gato-like AI into generally intelligent machines.

I recently explained my thinking in a trio of articles for The Next Web’s Neural vertical, where I’m an editor. In short, a key premise of AGI is that it should be able to obtain its own data. But deep learning models, such as transformer AIs, are little more than machines designed to make inferences relative to the datasets they’ve already been given. They’re librarians, and as such they are only as good as their training libraries.
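Here is a minimal sketch of that librarian point. It uses a toy nearest-neighbour lookup standing in for any trained model, over a hypothetical two-example “library”; the data and labels are invented. The takeaway is that the model can only ever answer by relating a query back to what it was already given.

```python
# A trained model as a "librarian": it answers every query by
# referring back to its stored library. The library below is a
# hypothetical stand-in for a training set.
library = {
    (0.1, 0.2): "cat",
    (0.9, 0.8): "dog",
}

def answer(query):
    # Find the stored example closest to the query and return its
    # label. Nothing outside `library` can ever be produced.
    def dist(point):
        return sum((q - p) ** 2 for q, p in zip(query, point))
    nearest = min(library, key=dist)
    return library[nearest]

print(answer((0.15, 0.25)))    # "cat": interpolating near known data
print(answer((100.0, -50.0)))  # still "cat" or "dog": it cannot say
                               # "I don't know" or go gather new data
```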

A general intelligence could theoretically figure things out even with a tiny dataset. It would intuit a method for accomplishing its mission based on nothing more than its ability to sort external data into the important and the unimportant, the way a human decides where to direct their attention.
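As a hedged illustration of that weighting idea, here is the softmax scoring that today’s attention mechanisms use to decide which inputs matter most. The observations and relevance scores are invented, riffing on the tennis-playing robot from earlier; a real system would learn such scores rather than receive them.

```python
# Softmax-weighted relevance: score each observation, then convert
# scores into weights that sum to 1. High weight = "pay attention
# here". Observations and scores are hypothetical.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

observations = ["opponent's stance", "crowd noise", "ball position"]
scores = [2.0, -1.0, 3.0]

for obs, weight in zip(observations, softmax(scores)):
    print(f"{obs}: {weight:.2f}")  # ball position gets ~0.72
```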

Gato

Gato is impressive, and there is nothing quite like it. But, at bottom, it is an arguably clever package that delivers the illusion of artificial general intelligence through the expert use of big data. Its giant training corpus, for example, likely contains datasets built on the entire contents of websites such as Reddit and Wikipedia. It’s amazing that humans have managed to do so much with simple algorithms just by forcing them to analyze more data.

In fact, Gato is such an impressive way to fake general intelligence that it makes me wonder whether we’re barking up the wrong tree. Many of the tasks Gato can perform today were once thought to be things only an AGI could do. It seems the more we accomplish with ordinary AI, the harder the challenge of building a general agent appears to be.

For these reasons, I’m skeptical that deep learning alone is the path to artificial general intelligence. I believe we’ll need more than bigger datasets and additional parameters to tweak. We’ll need an entirely new conceptual approach to machine learning.

I believe that humanity will eventually succeed in the quest to build artificial general intelligence. My best guess is that we’ll be knocking on AGI’s door sometime in the early to mid-2100s, and that, when we do, it will look very different from what the scientists at DeepMind imagine.

But the nice thing about science is that you have to show your work, and right now, DeepMind is doing just that. It has every opportunity to prove me and the other naysayers wrong.

I really, really hope it succeeds.

Tristan Greene is a futurist who believes in the power of human-centered technology. He is currently the editor of The Next Web’s Neural vertical. Follow Tristan on Twitter.

A version of this article originally appeared on Undark and is reposted here with permission. Check out Undark on Twitter.
