At the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water: Del Spooner or a child named Sarah. Even though Spooner screams 'Save her! Save her!' the robot rescues him because it calculates that he has a 45 percent chance of survival, compared with Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?
    Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that (1) robots cannot harm humans or allow humans to come to harm; (2) robots must obey humans, except where the order would conflict with law 1; and (3) robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots: they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.
    The robot that saves Spooner in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm. This expansion of the first law allows robots to determine what is in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.
    Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as 'harm' is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.
    Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that, at least not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called 'H-bots' from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both 'die.' The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
Multiple-Choice Question     What question does the example in the movie raise?
 
【Correct Answer】 A
【Answer Analysis】 Locating the answer: the key words in the question stem, 'question' and 'movie', point to the first paragraph. That paragraph recounts the plot of the movie: based on its calculations, the robot saves Spooner, who has the higher chance of survival, and abandons the girl, whose chance is lower, even though Spooner keeps shouting for it to save the child throughout the rescue. The last two sentences of the paragraph reflect on the robot's decision and its calculated approach: what choice would humans make, and what choice would we want robots to make? Option A summarizes the content of this paragraph and is the correct answer. The passage never says the robot judged wrongly, so C cannot be chosen. The first sentence of the third paragraph states explicitly that the robot in the movie follows Asimov's zeroth law, so B is wrong. D is not mentioned in the passage and is eliminated.
Multiple-Choice Question     What does the author think of Asimov's three laws of robotics?
 
【Correct Answer】 D
【Answer Analysis】 Locating the answer: the question asks for the author's opinion of the three laws; the key words 'Asimov' and 'three laws of robotics' point to the second paragraph. Its first sentence states the opinion directly: Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics. The wording 'did not take moral issues into consideration' is a paraphrase of the original 'evaded the whole notion of morality', so D is correct. A and C are not mentioned in the passage and are eliminated. The distractor B draws on the fourth paragraph: its first sentence says it is debatable whether a robot can be programmed with safeguards such as Asimov's laws, and its second sentence says that abstract concepts present coding problems, but nothing says these robots fail to follow a robotic coding system, so B is wrong.
Multiple-Choice Question     What does the author say about Asimov's robots?
 
【Correct Answer】 B
【Answer Analysis】 Locating the answer: the key phrase 'Asimov's robots' appears in the second, third, and fourth paragraphs, so each paragraph must be read in turn. The first sentence of the second paragraph sets out the three laws, the first of which says that robots cannot harm humans or allow humans to come to harm. The last three sentences of that paragraph say: 'These laws are programmed into Asimov's robots... They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.' Option B summarizes this and is the correct answer. This also shows that A is wrong, since the robots are designed so that they do not need to know what benefits or harms humans; and the three laws say nothing about robots performing their duties in their owners' best interests, so C cannot be chosen either. The distractor D draws on the last paragraph, which describes how the robot built by a roboticist at the Bristol Robotics Laboratory, not one of Asimov's robots, failed to make a decision 42 percent of the time when a moral question was involved, so D is eliminated.
Multiple-Choice Question     What does the author want to say by mentioning the word 'harm' in Asimov's laws?
 
【Correct Answer】 A
【Answer Analysis】 Locating the answer: the key words 'harm' and 'Asimov's laws' point to the fourth paragraph. That paragraph uses the word 'harm' as an example to show that abstract concepts present coding problems: an abstract concept lacks a clear definition to begin with, which makes it very hard to program. A is therefore the correct answer. D describes content covered in the paragraph, not the point the author intends to make, so it is eliminated. B and C are not mentioned in the paragraph and are irrelevant options, so both are eliminated.
Multiple-Choice Question     What has the roboticist at the Bristol Robotics Laboratory found in his experiment?
 
【Correct Answer】 C
【Answer Analysis】 Locating the answer: the key phrase 'the roboticist at the Bristol Robotics Laboratory' points to the last paragraph. That paragraph describes in detail how the robot performed in simple and complex situations during the experiment: when the situation became complex, the robot froze 42 percent of the time and the rescue failed, which shows that the robot could not make a decision in a complex situation, so C is correct. The last sentence says the experiment highlights the importance of morality, but it does not say that morality can now be programmed into robots, so B is wrong. A and D are not mentioned in the passage and are eliminated.