Saturday 20 April 2013

EPSRC / AHRC principles of robotics

In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of the United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed. Both were based on a September 2010 research workshop:
  1. Robots should not be designed solely or primarily to kill or harm humans.

  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.

  3. Robots should be designed in ways that assure their safety and security.

  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.

  5. It should always be possible to find out who is legally responsible for a robot.

The messages intended to be conveyed were:
  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.

  2. Bad practice hurts us all.

  3. Addressing obvious public concerns will help us all make progress.

  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.

  5. To understand the context and consequences of our research, we should work with experts from other disciplines, including: social sciences, law, philosophy and the arts.

  6. We should consider the ethics of transparency: are there limits to what should be openly available?

  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.

Laws of Robotics

Isaac Asimov, the science fiction writer widely credited with coining the term "robotics", proposed three "Laws of Robotics" in 1942 and later added a zeroth Law:

Law 0: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Law 1: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law.

Law 2: A robot must obey orders given to it by human beings, except where such orders would conflict with a higher-order law.

Law 3: A robot must protect its own existence as long as such protection does not conflict with a higher-order law.

© 2013 Robotics World. Powered by Blogger.
