If we are going to rely on robots in the future, we have to make sure they behave, and scientists are now grappling with the philosophical questions of how to ensure that.
When the legendary science fiction writer Isaac Asimov penned his “Three Laws of Robotics,” he forever changed the way humans think about artificial intelligence and inspired generations of engineers to start creating robots. Today’s engineers, however, have a more realistic attitude about the machines they are building, and some of them have rewritten the robot “laws” accordingly.
Engineer David Woods says, “When you think about it, our cultural view of robots has always been anti-people, pro-robot. The philosophy has been, ‘Sure, people make mistakes, but robots will be better, a perfect version of ourselves.’ We wanted to write new laws to get people thinking about the human-robot relationship in more realistic, grounded ways.”
In reality, engineers are still struggling to give robots basic vision and language skills. These efforts are hindered in part by our limited understanding of how the human brain manages those skills. We are far from a time when humans will be able to teach robots a moral code and a sense of responsibility.
Woods says, “Robots exist in an open world where you can’t predict everything that will happen.”