Reading Comprehension   As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges. But the more difficult challenges may have to do with ethics.
    Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.
    Within the next few decades, our stores, streets and sidewalks will likely be crammed with robotic devices fetching and delivering goods of every variety. How do we ensure that they respect the unstated conventions that people unconsciously follow when navigating in crowds? A debate may erupt over whether we should share our turf with machines or banish them to separate facilities. Will it be 'Integrate Our Androids!' or 'Ban the Bots!'?
    And far more serious issues are on the horizon. Should it be permissible for an autonomous military robot to select its own targets? The current consensus in the international community is that such weapons should be under 'meaningful human control' at all times, but even this seemingly sensible constraint is ethically muddled. After all, the expanded use of such robots may reduce military and civilian casualties and avoid collateral damage. So how many people's lives should be put at risk waiting for a human to review a robot's time-critical kill decision?
    Even if we can codify our principles and beliefs algorithmically, that won't solve the problem. Simply programming intelligent systems to obey rules isn't sufficient, because sometimes the right thing to do is to break those rules. Blindly obeying a posted speed limit of 55 miles an hour may be quite dangerous, for instance, if traffic is averaging 75, and you wouldn't want your self-driving car to strike a pedestrian rather than cross a double-yellow centerline.
    People naturally abide by social conventions that may be difficult for machines to perceive, much less follow. Finding the right balance between our personal interests and the needs of others—or society in general—is a finely calibrated human instinct, driven by a sense of fairness, reciprocity and common interest. Today's engineers, racing to bring these remarkable devices to market, are ill-prepared to design social intelligence into a machine. Their real challenge is to create civilized robots for a human world.
Multiple-choice question     Self-driving cars are an example of ______.
 
[Correct Answer] D
[Explanation] Factual detail question. "Self-driving cars" points to the first paragraph, which says that as you imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges, but the more difficult challenges may have to do with ethics. Therefore, D is the correct answer.
Multiple-choice question     Keeping a robot's selection of its own targets under human control is ethically muddled because ______.
 
[Correct Answer] C
[Explanation] Factual detail question. "Selection of its own targets" points to the fourth paragraph, which says that the expanded use of such robots may reduce military and civilian casualties and avoid collateral damage, and then asks how many people's lives should be put at risk waiting for a human to review a robot's time-critical kill decision. From this, D is incorrect and C is correct.
Multiple-choice question     It can be inferred from Paragraph 5 that ______.
 
[Correct Answer] D
[Explanation] Inference question. Paragraph 5 says that even if we can codify our principles and beliefs algorithmically, that won't solve the problem; this implies that codifying them is possible, so C is incorrect. The paragraph also says that blindly obeying a posted speed limit of 55 miles an hour may be quite dangerous if traffic is averaging 75, so A is incorrect. It further says that you wouldn't want your self-driving car to strike a pedestrian rather than cross a double-yellow centerline, so B is also incorrect. The second sentence of the paragraph states that simply programming intelligent systems to obey rules isn't sufficient, because sometimes the right thing to do is to break those rules. Therefore, D is the correct answer.
Multiple-choice question     Which is true according to the last paragraph? ______
 
[Correct Answer] C
[Explanation] Inference question. The last paragraph says that people naturally abide by social conventions, not that they are inclined to pursue intelligent machines, so A is incorrect. It also says that today's engineers, racing to bring these remarkable devices to market, are ill-prepared to design social intelligence into a machine, so B is incorrect. The social conventions that people take for granted are difficult for machines to perceive, much less follow, so D is incorrect. From "ill-prepared" and "real challenge" it follows that C is the correct answer.
Multiple-choice question     What is the best title of this text? ______
 
[Correct Answer] B
[Explanation] Main idea question. The first paragraph points out that for the self-driving car of the future the real difficulty lies not in technology but in ethics. At the end of the third paragraph the author asks whether it will be 'Integrate Our Androids!' or 'Ban the Bots!'. The fourth paragraph says far more serious issues are on the horizon. The fifth paragraph says that even codifying our principles and beliefs algorithmically won't solve the problem. The last paragraph says that social conventions people take for granted are hard for machines to perceive, much less follow. Throughout the passage the author voices clear concern about artificial intelligence rather than ambivalence, so the equivocal attitude in D should be ruled out first. The passage does not merely review artificial intelligence, so A is too narrow and is ruled out. Nor is the passage mainly about the challenges humans face; it conveys the author's critical stance, so C is incorrect. Therefore, B is the correct answer.