‘Empowerment’ to Help Program Better Ethical Behavior in Robots

Robots are increasingly common in our workplaces and homes, and the trend looks set to continue in the coming years. Many robots will need to interact with people in unpredictable situations. For instance, self-driving cars must protect their occupants while also avoiding damage to the vehicle, and robots that care for the elderly will need to adapt to complex situations and respond appropriately to the risks their owners face.

In 1942, the science fiction writer Isaac Asimov proposed his three fundamental laws of robotics to govern how robots interact with people. The laws state that a robot may not injure a human being or, through inaction, allow a human to come to harm; that a robot must obey the orders given to it by humans, unless doing so would harm a human; and that a robot must protect its own existence as long as that does not endanger humans.

One primary problem with these laws is the concept of ‘harm’. Harm is a complex, context-specific notion that is difficult to specify for a robot, and a robot that cannot recognize harm can hardly avoid causing it. It therefore becomes important to define ‘good’ robot behavior from a different perspective while still broadly following the spirit of Asimov’s laws.

Researchers at the University of Hertfordshire in the U.K. have developed a concept called ‘Empowerment’ that helps robots serve and protect the humans around them while maintaining their own safety. The concept relies on a robot always acting to keep its options open. Empowerment is formalized mathematically so that robots can adopt it, and the developers extended it so that a robot also considers the world from the perspective of the human it interacts with. The aim is for empowerment-driven robots to behave in the spirit of Asimov’s laws, making the concept a vital part of programming the overall ethical behavior of robots.
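
In the research literature, empowerment is defined information-theoretically: it is the channel capacity between a robot's actions and the sensor states those actions can produce, so a robot that can reliably bring about many distinguishable outcomes has high empowerment, while one whose options have been cut off has low empowerment. As a minimal sketch of that idea (not the Hertfordshire team's actual implementation), the Python code below estimates empowerment for a discrete transition model using the standard Blahut-Arimoto algorithm for channel capacity; the `empowerment` function and the toy transition tables are illustrative assumptions.

```python
import numpy as np

def empowerment(channel, n_iters=200, tol=1e-10):
    """Empowerment as channel capacity, E = max_{p(a)} I(A; S').

    channel: (n_actions, n_states) array whose rows are p(s' | a),
             the probability of reaching state s' after action a.
    Returns the capacity in bits, computed with Blahut-Arimoto.
    """
    n_actions, _ = channel.shape
    p_a = np.full(n_actions, 1.0 / n_actions)  # start from a uniform action distribution
    for _ in range(n_iters):
        p_s = p_a @ channel  # marginal p(s') under the current p(a)
        with np.errstate(divide="ignore", invalid="ignore"):
            q = (p_a[:, None] * channel) / p_s[None, :]  # posterior q(a | s')
        # Multiplicative update: p(a) proportional to exp(sum_{s'} p(s'|a) log q(a|s'))
        log_q = np.where(channel > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        p_new = np.exp(np.sum(channel * log_q, axis=1))
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p_a)) < tol:
            p_a = p_new
            break
        p_a = p_new
    # Mutual information I(A; S') at the optimizing action distribution, in bits
    p_s = p_a @ channel
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where((channel > 0) & (p_s[None, :] > 0),
                         channel / p_s[None, :], 1.0)
    return float(np.sum(p_a[:, None] * channel * np.log2(ratio)))

# A robot whose every action leads to the same state has no options left:
trapped = np.array([[1.0, 0.0],
                    [1.0, 0.0],
                    [1.0, 0.0]])
print(empowerment(trapped))  # ~0.0 bits

# Three actions that reliably reach three distinct states keep all options open:
free = np.eye(3)
print(empowerment(free))     # ~1.585 bits, i.e. log2(3)
```

The two toy channels illustrate the intuition behind ‘keeping options open’: a robot whose every action leads to the same state has zero empowerment, while one whose n actions reach n distinct states attains the maximum of log2(n) bits.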
