
The Future of Artificial Intelligence: Artificial General Intelligence

To gain a true understanding of AI, researchers should turn their attention to developing a fundamental, potentially AGI technique that replicates the human understanding of the environment.

Industry giants like Google, Microsoft, and Facebook, research labs like Elon Musk’s OpenAI, and even platforms like SingularityNET are betting on artificial general intelligence (AGI), the ability of intelligent agents to understand or learn any intellectual task that humans can, as the future of artificial intelligence technology.

However, it is somewhat surprising that none of these companies has focused on developing a basic, low-level AGI technique that replicates human contextual understanding. This may explain why the research these companies are doing relies entirely on models of varying degrees of specificity, built on today’s AI algorithms.

Unfortunately, this reliance means that, at best, AI can only appear intelligent. No matter how impressive their abilities are, these systems still follow predetermined scripts containing many variables. Therefore, even large, highly complex programs such as GPT-3 or Watson can only simulate comprehension. In fact, they do not understand that words and images represent physical things that exist and interact in the physical universe. The notion of time, or the idea of a cause having an effect, is completely foreign to them.

This is not to discount the capabilities of today’s AI. For example, Google is able to scour vast amounts of information incredibly fast to provide the results users want (at least most of the time). Personal assistants like Siri can make restaurant reservations, find and read emails, and give directions in real time. This list is constantly expanding and improving.

But no matter how sophisticated these programs are, they are still matching inputs to specific output responses that depend entirely on their core dataset. If you are not convinced, ask a customer service bot an “unscheduled” question; it will likely generate a meaningless response, or no response at all.
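This failure mode can be sketched with a toy example. The script below is entirely hypothetical (real bots use far more elaborate matching), but the structure is the same: known inputs map to canned outputs, and anything outside the script falls through to a meaningless default, because nothing in the program represents what the words actually mean.

```python
# Minimal sketch of a scripted customer-service bot (illustrative only).
# Known phrases map to canned replies; an "unscheduled" question falls
# through to a generic fallback, since the bot has no understanding
# behind the words -- only lookup.

SCRIPT = {
    "what are your hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Click 'Forgot password' on the login page.",
    "where is my order": "Please provide your order number.",
}

def reply(user_input: str) -> str:
    # Normalize, then look up: this is pattern matching, not comprehension.
    key = user_input.lower().strip("?!. ")
    return SCRIPT.get(key, "Sorry, I didn't understand that.")

print(reply("Where is my order?"))       # a scripted question gets a scripted answer
print(reply("My cat chewed the cable"))  # an unscheduled one gets the fallback
```

However large the `SCRIPT` table grows, the bot's "competence" never extends one question beyond it.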

In conclusion, Google, Siri, and every other current AI example lack true, common-sense understanding, which will ultimately prevent them from progressing toward artificial general intelligence. The reason goes back to the main assumption behind most AI development over the past 50 years: that if the hard problems of intelligence could be solved, the easy problems would be solved along the way. This hypothesis runs into Moravec’s Paradox, which observes that it is relatively easy to get computers to perform at an adult level on intelligence tests, but very difficult to give them the perception and motor skills of a one-year-old.

AI researchers are also wrong in their assumption that, if enough narrow AI applications are built, they will eventually grow together into general intelligence. Unlike children, who can effortlessly integrate vision, language, and their other senses, narrow AI applications cannot store information in a common form that allows it to be shared and subsequently used by other AI applications.

Finally, researchers mistakenly believe that if a large enough machine learning system is built with enough computing power, it will spontaneously exhibit general intelligence. This, too, has proven wrong. Just as expert systems that tried to capture domain-specific knowledge could never accumulate enough cases and examples to overcome their underlying lack of understanding, AI systems cannot handle “unplanned” requests, no matter how large they grow.

Artificial General Intelligence Fundamentals
To gain true AI understanding, researchers should turn their attention to developing a fundamental, potentially AGI technique that replicates the human understanding of context. For example, consider the situational awareness and situational understanding of a 3-year-old child playing with blocks. A 3-year-old understands that blocks exist in a three-dimensional world, have physical properties such as weight, shape, and color, and will fall if stacked too high. Children also understand cause and effect and the passage of time: blocks cannot be knocked down until they are first stacked.

A 3-year-old also becomes a 4-year-old, then a 5-year-old, and eventually a 10-year-old, and so on. In short, 3-year-olds are born with abilities that include the capacity to grow into fully functional, generally intelligent adults. Such growth is impossible for today’s AI. No matter how sophisticated it is, today’s AI remains completely unaware of its existence in its environment. It does not know that actions taken now will affect actions in the future.

While it is unrealistic to think that an AI system that has never experienced anything other than its own training data can understand real-world concepts, adding a mobile sensory pod to an AI could allow an artificial entity to learn from real-world environments and develop a fundamental understanding of physical objects, causality, and the passage of time. Like the 3-year-old, an artificial entity equipped with a sensory pod would be able to directly learn how to stack blocks, move objects, perform sequences of actions over time, and learn from the consequences of those actions.

Through sight, hearing, touch, manipulators, and so on, an artificial entity could learn to understand in ways that are simply not possible for pure-text or pure-image systems. As mentioned earlier, such systems simply cannot understand and learn, no matter how large and varied their datasets are. Once the entity has acquired this ability to understand and learn, it may even be possible to remove the sensory pods.
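As a loose illustration of the idea (entirely hypothetical; no such AGI technique exists today), the embodied learning loop the sensory-pod proposal implies can be sketched as an agent acting in a tiny simulated block world, sensing the consequences, and recording cause-and-effect pairs rather than static input-output mappings. The "physics" here is deliberately trivial; the point is the act-sense-remember loop.

```python
import random

# Toy block-world sketch (hypothetical): an embodied agent acts, observes
# the consequence, and records cause -> effect. Unlike a static dataset,
# its knowledge comes from experienced consequences of its own actions.

MAX_STABLE_HEIGHT = 3  # assumption: a stack taller than this topples

def step(stack_height: int, action: str) -> int:
    """Apply an action to the world and return the resulting stack height."""
    if action == "stack":
        new_height = stack_height + 1
        # The tower topples back to zero if stacked too high.
        return 0 if new_height > MAX_STABLE_HEIGHT else new_height
    if action == "knock_down":
        return 0
    return stack_height

def learn_causality(episodes: int = 200) -> dict:
    """Learn (height, action) -> outcome purely from lived experience."""
    experience = {}
    height = 0
    for _ in range(episodes):
        action = random.choice(["stack", "knock_down"])
        new_height = step(height, action)
        experience[(height, action)] = new_height  # cause -> observed effect
        height = new_height
    return experience

model = learn_causality()
# After enough episodes, the agent has seen for itself that blocks must be
# stacked before they can be knocked down, and that tall stacks topple.
print(model.get((3, "stack")))
```

The causal, temporal structure (you cannot knock down a tower that was never stacked) is discovered by acting in the world, which is precisely what a dataset-bound system never does.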

While at this point we cannot quantify how much data is needed to represent true understanding, we can speculate that a reasonable proportion of the brain is devoted to it. After all, humans interpret everything in the context of everything they have already experienced and learned. As adults, we interpret everything in light of what we learn in the first few years of life. With this in mind, it seems likely that true artificial general intelligence will only emerge if the AI community recognizes this fact and takes the necessary steps to build a fundamental foundation of understanding.

What do you think?
