In the world of film and television, humanity keeps imagining the possibilities of robots, taking technological romanticism to the extreme: from “WALL-E” to “Big Hero 6”, from “Westworld” to “Finch”, stories are told of robots that look human, possess superhuman reasoning, or try to develop emotions of their own.
In the real world, robots are still far from the high intelligence shown on screen, but related industries and enterprises have long been exploring ways to make robots “smarter”. Surprisingly, game technology is also playing a role in this quest.
In recent years, the “skill tree” of robots has steadily flourished: from completing a single simple action in the early days, to gaining multiple senses such as force, touch, and hearing, to performing several complex tasks at once, and even learning to “think”.
Zhang Zhengyou, Chief Scientist of Tencent and Director of Tencent AI Lab and Tencent Robotics X Lab, summarizes the core technologies of intelligent robots as A2G: A is artificial intelligence (AI), B is the robot body, C is control, D is developmental learning, E is emotional intelligence, and F is dexterity. Through learning ability, emotional ability, manipulation ability, and the interaction of these elements, the robot becomes G: the guardian angel of mankind.
Learn, execute, plan.
It is easy to say, but quite a few problems still stand in the way of true intelligent advancement.
Just as humans gradually build up their thinking ability through learning, practice, and trial and error while growing up, researchers hope to set a goal for a robot and, by designing a reasonable reward mechanism, let it learn to perceive and adapt to changes in a dynamic environment.
However, doing this kind of training in a real scene is very “expensive”: once a physical robot bumps into something, the hardware is easily damaged, to say nothing of the large amounts of data and training time required.
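The goal-plus-reward training loop described above is the core idea of reinforcement learning, which is far cheaper to run in simulation than on hardware. As a minimal sketch (not Tencent's actual method), here is tabular Q-learning on a hypothetical toy world: an agent on a five-cell line earns a reward only at the goal cell, plus a small step cost, and learns to walk toward it. All names and numbers are illustrative.

```python
import random

# Toy 1-D world: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]

def step(state, action):
    """Environment dynamics: move, clip to bounds, reward mainly at the goal."""
    nxt = max(0, min(N_STATES - 1, state + ACTIONS[action]))
    reward = 1.0 if nxt == GOAL else -0.01   # small step cost shapes behaviour
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-table: q[state][action]
    for _ in range(episodes):
        s = 0
        for _ in range(50):                      # cap on episode length
            # epsilon-greedy: mostly exploit the best known action, sometimes explore
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda i: q[s][i])
            s2, r, done = step(s, a)
            # Q-learning update: nudge Q(s,a) toward reward + discounted future value
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q

q = train()
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES)]
print(policy)  # greedy policy per state; non-terminal states should choose "right"
```

The same loop, with a physics simulator standing in for `step`, is what makes simulated training so much cheaper than crashing real hardware.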
Coincidentally, with the iterative upgrading of games, NPCs (non-player characters) seem to be getting smarter and smarter.
In today’s games, the “state machine” is the most common scheme for simulating intelligence, from the ghosts of “Pac-Man”, to the riders of “Motor Bike”, to the NPCs of “Red Dead Redemption” who can interact with players everywhere. With the evolution and iteration of games, state-machine intelligence has become hard to tell apart from the real thing.
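A game-AI state machine is just a table of (current state, event) pairs mapped to next states. Here is a minimal sketch of a Pac-Man-style ghost; the state names and events are hypothetical, chosen only to illustrate the pattern.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()   # wander the maze
    CHASE = auto()    # pursue the player
    FLEE = auto()     # run away while the player is powered up

# Transition table: (current state, perceived event) -> next state.
TRANSITIONS = {
    (State.PATROL, "player_seen"):     State.CHASE,
    (State.CHASE,  "player_lost"):     State.PATROL,
    (State.CHASE,  "power_pellet"):    State.FLEE,
    (State.FLEE,   "pellet_worn_off"): State.CHASE,
}

class NpcStateMachine:
    def __init__(self):
        self.state = State.PATROL

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

npc = NpcStateMachine()
for e in ["player_seen", "power_pellet", "pellet_worn_off", "player_lost"]:
    print(e, "->", npc.handle(e))
```

The apparent intelligence comes entirely from how many states and transitions the designers hand-author, which is exactly the limitation the article goes on to describe.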
But even “Red Dead Redemption 2”, whose NPCs are realistic enough, only achieves “motion matching”. Its technical director recalled in an interview that the team designed hundreds of different motion animations for the horses; even their panting had hundreds of different sounds. None of this is real intelligence, but rather the result of stacking huge behavior trees and animation resources.
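At its core, motion matching continuously searches a large database of captured animation for the frame whose features (pose, velocities, desired trajectory) best match the character's current situation. The following is a toy sketch of that nearest-neighbor lookup; the clip names and three-number feature vectors are invented for illustration, and real systems use much richer features and accelerated search structures.

```python
import math

# Hypothetical animation database: (clip/frame id, feature vector).
# Features might encode hip velocity, lateral turn rate, trajectory curvature, etc.
database = [
    ("walk_loop_f12",  [1.0, 0.0, 0.1]),
    ("run_loop_f03",   [3.0, 0.0, 0.2]),
    ("turn_left_f07",  [1.0, 0.8, 0.0]),
    ("idle_f00",       [0.0, 0.0, 0.0]),
]

def match(query):
    """Return the clip/frame whose feature vector is nearest to the query."""
    return min(database, key=lambda entry: math.dist(query, entry[1]))[0]

print(match([0.9, 0.1, 0.1]))  # should pick the walking entry, its features are nearest
```

Because the output is always a lookup into hand-captured data, quality scales with the size of the database rather than with any learned understanding, which is why the article treats it as resource stacking rather than intelligence.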
So what leads to true intelligence? How can higher-quality virtual characters be achieved, so that they not only move more naturally and realistically, but also cost significantly less to develop?
Facing this core topic shared by game and robot development, Tencent’s game technology team and robotics team have jointly developed agent action-generation technology, applying and training it on NPCs in the virtual environment of games, so that through continuous self-learning NPCs can acquire more realistic movements, reactions, and expressions.
During the research, the joint team realized that the vast technical experience accumulated in games and the training conditions offered by virtual simulation can aid the intelligent development of robots, while helping to solve the two major R&D pain points of high cost and difficult optimization.
In the collaboration, the robotics side, led by Tencent Robotics X Lab, is responsible for designing the core algorithms, including defining the task environment and goals, building and training the AI algorithms, constructing the overall framework of the robot’s intelligent control system, and deploying it on real machines.
The game and AI side, led by TiMi J3 Studio, the TiMi Technology Center, the START team of Tencent Interactive Entertainment, and Tencent AI Lab, contributes agent action-generation technology based on game NPC motion imitation, which helps robots make autonomous decisions and adapt to different scenes; at the same time, it provides efficient, realistic virtual simulation capabilities, such as environment and scene construction and physics-engine acceleration, to improve the efficiency and speed of robot training.
The project’s technical team regards agent action-generation technology and real-time physical simulation as important technical directions for future intelligent-robot R&D.
As a comprehensive technology platform, games provide ideal research environments and application scenarios for these technologies, playing an important role in advancing AI and robotics research in perception, decision-making, control, and computing.
The launch of a number of game-technology projects, such as the Digital Great Wall, the digital central axis, the fully connected digital factory, and the jointly developed visual system for a full-motion flight simulator, shows that the scope of Tencent’s cross-domain exploration of game technology is expanding.
In this wave, the technical attributes of games, such as interactivity, high-fidelity simulation, strong immersion, and real-time rendering, have been further magnified, spilling over into more valuable real-world scenarios in scientific research, entertainment, education, and medical care, producing innovative solutions to specific social problems and contributing to technological advances in many other fields.