Robots are teaching themselves parkour

In our minds, robots are robust, coordinated and powerful machines that can help us with many jobs in the real world.

A running robot. Source: pixabay

How do robots learn to perform such complex actions? Have you ever considered letting robots train themselves in a virtual environment and then apply the knowledge they have learned?

Watch this entertaining video of trained robots from Google DeepMind:

We all know that gymnasts can perform complex routines in competition and monkeys can leap through the trees at will. But how could we teach robots to carry out movements like these? If we had to specify how to operate every motor in a robot, it would be an enormous job. Worse, the robot would be lost when faced with a new task, with no idea how to adjust its movements.

So, is there another way to do that? Can robots learn how to move by themselves?

The robots in the video were developed by Google DeepMind, which is trying to solve this problem with artificial intelligence.

In a virtual space, a simulated robot is built to mimic the robot in the real world. Scientists can then transfer what the simulated robot has learned to the real one, saving a great deal of money and time.

The robots are trained in a simplified environment. Their instruction is to complete complex actions by moving their joints to interact with the environment, for example jumping and crouching over different kinds of terrain. They are never told how to perform these actions.

The only way they can learn is to try again and again. Each time they attempt an action, the system gives them feedback telling them how good the action was. For example, if they fall down, they get a low score; if they complete a complex action, they get a high score. This kind of training is like the way we train dogs at home. In computer science, this method is called reinforcement learning, which means machines learn from feedback.
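The try-and-score loop can be sketched in a few lines of code. This is a toy illustration only: it uses tabular Q-learning on a five-cell walking track, whereas DeepMind's actual work trains neural-network policies in a full physics simulator. All the names and numbers below are made up for the example.

```python
import random

# Toy reinforcement learning: an agent on a 5-cell track learns to walk
# right to the goal purely from reward feedback, never from instructions.
N_STATES = 5          # positions 0..4; reaching position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA = 0.5, 0.9

# value estimates for every (position, action) pair, all starting at zero
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # "high score" at the goal
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(500):            # try again and again
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)    # explore at random (Q-learning is off-policy)
        nxt, r, done = step(s, a)
        # improve the value estimate using only the feedback signal
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# after training, the greedy policy walks right from every position 0..3
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # [1, 1, 1, 1]
```

The agent is never shown the answer; the preference for stepping right emerges entirely from the scores it receives, which is the essence of the feedback-driven training described above.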

To speed up the learning process, in a more developed version the robots first studied some basic actions, such as walking, running and turning. They have now produced behaviours that look quite human. In the experiments, they could adjust these actions to complete new tasks, such as climbing stairs and navigating around walls.
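The idea of reusing basic actions for new tasks can be sketched as a two-level controller: the low-level skills are learned first and then frozen, and only a small high-level chooser is adapted to each new task. This is a purely illustrative sketch with made-up primitives; the real system uses neural-network policies, not hand-written rules.

```python
# Pre-learned low-level skills (frozen after their own training phase).
def walk(pos):
    return pos + 1          # move forward one cell

def jump(pos):
    return pos + 2          # clear a one-cell gap

def high_level(pos, gaps):
    # The part adapted to the new task: pick a skill based on the terrain ahead.
    return jump if pos + 1 in gaps else walk

def traverse(goal, gaps):
    """Cross a track with gaps by composing the pre-learned skills."""
    pos, chosen = 0, []
    while pos < goal:
        skill = high_level(pos, gaps)
        chosen.append(skill.__name__)
        pos = skill(pos)
    return chosen

print(traverse(5, gaps={2, 4}))   # ['walk', 'jump', 'jump']
```

Because the skills themselves never change, only the lightweight chooser has to be retrained for stairs, walls or other new terrain, which is why pre-studying basic actions speeds up learning.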

After that, the scientists updated the model so that the robots could imitate actions they observed. They can now encode the movement of each joint and move like a human, although many details still need improving.

The aim of this research is to teach robots to complete complex actions and adapt to new tasks the way humans do. The knowledge behind it is key to the motor-control tasks at the heart of robot development.

To get more details please read:

  1. Emergence of locomotion behaviors in rich environments
  2. Learning human behaviors from motion capture by adversarial imitation
  3. Robust Imitation of Diverse Behaviors

4 Responses to “Robots are teaching themselves parkour”

  1. kangk1 says:

That’s really a good idea. Maybe one day in the future, we can build a new robot just by scanning it, like 3D printing.

  2. Isabelle says:

    Wow, that Google video is crazy. It’s amazing what that technology can do with a few incentives and some serious RAM.

    I wonder if we could model movement of extinct species through this, just by scanning the structure of bones/joints and letting the computer run trial and error…

  3. kangk1 says:

Thanks for reading, Richard. I think we will see some great changes in the robot industry, just as we did with the smartphone.

  4. Richard Proudlove says:

    Engaging article. AI seems to be developing in “leaps and bounds” (excuse the pun). I realised whilst reading your blog that “reinforcement learning” is exponential, so I suspect that we are going to see some very big “leaps” in AI and robotics in our lifetime.