Wednesday, January 19, 2011

Ethical robots coming your way

via Ubergizmo



Ethics in robots is a touchy subject, not least because you can't easily reduce human ethics to logical rules. There is, of course, the small matter of robots running on nothing but logical rules. Researchers at the University of Connecticut are trying to navigate this tricky territory by combining machine learning with traditional ethical philosophy so that robots can behave ethically – at least in theory. The backbone of the approach is a technique pioneered by the philosopher W. D. Ross, who argued that people make ethical decisions by carefully balancing different duties against each other, such as 'do good,' 'don't do harm,' 'keep your promises,' 'don't be annoying,' and so on. Robots are good with variables, so shouldn't this be a snap to implement? The tricky part is programming the right variables in a way the robot can actually quantify.
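
To get a feel for the idea, here's a rough sketch in Python of what balancing weighted duties might look like. Everything below – the duty names, the weights, the candidate actions, the scores – is made up for illustration; in the actual research, the trade-offs would presumably be learned via machine learning from ethicists' judgments rather than hard-coded like this.

    # Toy sketch of weighing prima facie duties against each other.
    # All duties, weights, actions, and scores are invented for this example.

    # Relative importance of each duty (hypothetical values).
    DUTY_WEIGHTS = {
        "do_good": 1.0,
        "do_no_harm": 2.0,      # avoiding harm weighted most heavily
        "keep_promises": 1.5,
    }

    # Each candidate action is scored from -2 (strong violation)
    # to +2 (strong satisfaction) on every duty.
    actions = {
        "remind_patient_to_take_medication": {
            "do_good": 2, "do_no_harm": 1, "keep_promises": 2,
        },
        "stay_silent_and_do_nothing": {
            "do_good": -1, "do_no_harm": -1, "keep_promises": -2,
        },
    }

    def balance(scores):
        # Weighted sum of duty satisfactions for one action.
        return sum(DUTY_WEIGHTS[d] * s for d, s in scores.items())

    best = max(actions, key=lambda a: balance(actions[a]))
    print(best)  # -> remind_patient_to_take_medication

The hard part, as the researchers note, isn't the arithmetic – it's deciding which duties to encode and how a robot could quantify something like 'don't be annoying' from its sensors in the first place.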
