Multiple-choice question
In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams 'Save her! Save her!' the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. Robots cannot harm humans or allow humans to come to harm; 2. Robots must obey humans, except where the order would conflict with law 1; and 3. Robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots—they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.

The robot who rescues Spooner in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as 'harm' is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that—at least, not without some undesirable results.

A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called 'H-bots' from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both 'die.' The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
Multiple-choice question
What question does the example in the movie raise?
Multiple-choice question
What does the author think of Asimov's three laws of robotics?
【Correct Answer】
D
【Answer Analysis】Locating the answer: The question asks about the author's view of the three laws. Using the cue words 'Asimov' and 'three laws of robotics', the answer can be located in the second paragraph of the passage. That paragraph's opening sentence states the view directly: Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics. The wording 'did not take moral issues into consideration' in option D is a paraphrase of the passage's 'evaded the whole notion of morality', so D is correct. Options A and C are not mentioned in the passage, so they can be eliminated. Option B draws its distraction from the fourth paragraph: its first sentence says that whether it is possible to program a robot with safeguards such as Asimov's laws is debatable, and its second sentence points out that abstract concepts present coding problems, but the passage never says that these robots fail to follow a robotic coding system, so B is wrong.
Multiple-choice question
What does the author say about Asimov's robots?
Multiple-choice question
What has the roboticist at the Bristol Robotics Laboratory found in his experiment?
【Correct Answer】
C
【Answer Analysis】Locating the answer: Using the cue words 'the roboticist at the Bristol Robotics Laboratory' in the question stem, the answer can be located in the last paragraph, which describes how the robot performed in both simple and complex situations. When the situation grew complex, the robot choked 42 percent of the time and failed to carry out the rescue, which shows that the robot could not make a decision in a complex situation, so C is correct. The last sentence says the experiment highlights the importance of morality, but it does not say that morality can now be programmed into robots, so B is wrong. Options A and D are not mentioned in the passage, so they can be eliminated.