Skynet is here thanks to robotics

Hello Skynet: Robots Can Now Realise What They Don’t Know and Learn it Themselves


Robots are cool, but they're not very smart… yet. In the latest episode of humans hurtling towards a Skynet situation, though, robots are being taught to learn on their own.

At the moment, most robots can only do what humans tell them to do, and they don't know how to improve themselves. If a robot makes a mistake, it doesn't learn from it. It just repeats the same error over and over again. That's boring and frustrating for robotics engineers. And for anyone who owns a malfunctioning Roomba.

But what if robots could learn from their own mistakes, just like humans do? What if they could adapt to new situations and challenges, and become better at their tasks? 

Engineering boffins at Princeton University, along with researchers at Google, have been working on this very thing.

They have developed a new method that allows robots to learn from their own trial and error, with minimal human supervision.

The researchers made their algorithm ask for human help when the choices are too close to call. For example, say the robot isn't sure whether it should put the plastic bowl or the metal bowl in the microwave. Both choices have a similar chance of being right, so the robot asks a human to decide. Is this the first step towards a Skynet situation?
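The article doesn't publish the actual algorithm, but the "too close to call" idea can be sketched in a few lines: score each candidate action, and if the top two scores sit within some margin of each other, hand the decision to a human. The function name, the margin value, and the score format here are all illustrative assumptions, not the researchers' code.

```python
def choose_or_ask(options, scores, margin=0.1):
    """Pick the top-scoring action, or defer to a human when the
    top two scores are within `margin` of each other (a guess at
    how 'too close to call' might be operationalised)."""
    ranked = sorted(zip(options, scores), key=lambda pair: pair[1], reverse=True)
    (best, top_score), (runner_up, second_score) = ranked[0], ranked[1]
    if top_score - second_score < margin:
        # Confidence gap is too small: ask the human to break the tie.
        return ("ask_human", [best, runner_up])
    return ("act", best)
```

With the microwave example, near-equal scores like 0.48 vs 0.46 would trigger a request for help, while a clear 0.9 vs 0.1 split would let the robot act on its own.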

How it's done, and how close are we to Skynet?

The method is called Meta-World. First, the researchers create a virtual world where robots can practise different skills, such as opening doors, picking up objects, or pushing buttons. Then, they let the robots explore the world and try different actions, without telling them what to do or how to do it. 

The robots get feedback from the world, such as whether they succeeded or failed, or how far they are from their goal. 

Based on this feedback, the robots update the rules that guide their behaviour. 

Over time, the robots learn to perform the skills better and faster, and they can transfer their knowledge to new situations and environments.
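The loop described above, try an action, get success-or-failure feedback, update the rules, is the basic shape of reinforcement learning. As a rough illustration only (the article doesn't say which algorithm the team used), here is a minimal trial-and-error loop where a robot keeps a running estimate of how well each action works and occasionally explores at random. All names and parameters are hypothetical.

```python
import random

def self_practice(actions, try_action, episodes=1000, lr=0.1, eps=0.2):
    """Minimal trial-and-error loop: try actions, observe success (1.0)
    or failure (0.0), and nudge each action's estimated success rate
    toward the observed outcome."""
    value = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the current best guess.
        if random.random() < eps:
            action = random.choice(actions)
        else:
            action = max(actions, key=value.get)
        reward = try_action(action)
        value[action] += lr * (reward - value[action])
    return value
```

Run against a toy simulator where, say, pulling the door handle is the only action that ever succeeds, the estimate for that action climbs towards 1.0 while the others stay low, which is the "learn from feedback, no instructions given" idea in miniature.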

The researchers tested their method on a real robot arm, and they found that it could learn to open a door, grasp a mug, and turn a valve, after only a few hours of self-training in the virtual world. 

The robot could also adapt to changes in the real world, such as different door handles, mug shapes, or valve positions.

The researchers hope their method will enable robots to learn more complex and diverse skills, and become more autonomous and intelligent. 

They also hope their method will make it easier for humans to teach robots new tasks, without having to program or demonstrate them.

So…do we need to ask Kyle Reese to come back from a future where robots have nearly wiped out humanity? Cos this all def has Cyberdyne Systems vibes.