Watch an AI robot program itself to, er, pick things up and push them around

Why can’t robots just learn to do things without being told?

Engineers at Vicarious AI, a robotics startup based in California, have built what they call a “visual cognitive computer” (VCC): a software platform connected to a camera system and a robot gripper.

Given a set of visual cues, the VCC writes a short program of instructions for the robot to follow, so it knows how to move its gripper to perform simple tasks.

The VCC works out which commands the robot needs to perform in order to rearrange the objects in front of it, based on a pair of ‘before’ and ‘after’ images.
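To make that concrete, here is a minimal Python sketch of what such an induced program could look like: a short sequence of symbolic instructions that an interpreter dispatches to gripper primitives. The instruction names (fixate, grab, move_to, release) and the RobotSim class are illustrative assumptions, not Vicarious's actual instruction set or API.

```python
# Hypothetical sketch: a "cognitive program" as a list of symbolic
# instructions executed by a simple interpreter. All names here are
# invented for illustration.

class RobotSim:
    """Toy stand-in for the camera + gripper hardware."""
    def __init__(self, objects):
        self.objects = dict(objects)   # name -> (x, y) position
        self.target = None
        self.held = None

    def fixate(self, name):            # direct attention at an object
        self.target = name

    def grab(self):                    # close gripper on the fixated object
        self.held = self.target

    def move_to(self, pos):            # move gripper (and any held object)
        if self.held:
            self.objects[self.held] = pos

    def release(self):                 # open gripper
        self.held = None

def run(program, robot):
    """Dispatch each (instruction, args) pair to the robot."""
    for op, args in program:
        getattr(robot, op)(*args)

# A short program the controller might emit to move the red block to (5, 5).
program = [
    ("fixate",  ("red_block",)),
    ("grab",    ()),
    ("move_to", ((5, 5),)),
    ("release", ()),
]

robot = RobotSim({"red_block": (0, 0), "green_block": (2, 3)})
run(program, robot)
print(robot.objects)   # red_block now at (5, 5)
```

The point of the representation is that the planner reasons over a handful of symbolic steps rather than raw motor trajectories.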

The left-hand side of the diagram is a schematic of the different parts that control the robot.

The robot doesn’t need many training examples to work, because the VCC controller has a vocabulary of just 24 commands, listed on the right-hand side of the diagram.
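That sample efficiency is partly just combinatorics: with 24 commands there are only 24^L candidate programs of length L, so a short program consistent with a handful of before/after pairs can be found by search. The sketch below illustrates the idea with a toy four-op vocabulary; the ops and state model are invented for illustration and are not the VCC's real commands.

```python
# Hypothetical sketch of program induction by enumeration: find the
# shortest instruction sequence mapping each "before" state to its
# "after" state. A tiny 4-op vocabulary stands in for the VCC's 24.
from itertools import product

OPS = {
    "push_left":  lambda s: tuple(x - 1 for x in s),
    "push_right": lambda s: tuple(x + 1 for x in s),
    "double":     lambda s: tuple(2 * x for x in s),
    "noop":       lambda s: s,
}

def run(program, state):
    for op in program:
        state = OPS[op](state)
    return state

def induce(examples, max_len=4):
    """Return the shortest program consistent with all before/after pairs."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, before) == after for before, after in examples):
                return program
    return None

# Two demonstrations suffice to pin down the intended transformation.
examples = [((1, 2), (4, 6)), ((0, 3), (2, 8))]
print(induce(examples))   # ('push_right', 'double')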

The physical objects laid out in front of the robot do not need to be exactly the same as the abstract ones represented in the input and output images.
