Reading Comprehension
This year marks exactly two centuries since the publication of Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow many ethical questions to be raised by technologies yet to come.

Today the rapid growth of artificial intelligence (AI) raises fundamental questions: 'What is intelligence, identity, or consciousness? What makes humans humans?'

What is being called artificial general intelligence, machines that would imitate the way humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as 'Westworld' and 'Humans'.

Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist. 'We are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.'

But that doesn't mean crucial ethical issues involving AI aren't at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. AI 'vision' today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.

Whenever decisions are based on masses of data, 'you quickly get into a lot of ethical questions,' notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI. Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring.

On June 7 Google pledged not to 'design or deploy AI' that would cause 'overall harm,' or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights.

While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair.

To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity's highest values? Only then will they be useful servants and not Frankenstein's out-of-control monster.
Multiple-Choice Question
Mary Shelley's novel Frankenstein is mentioned because it ______
【Correct Answer】
B
【Explanation】Detail question. The stem's reference to Mary Shelley's novel Frankenstein first points to the opening sentence of paragraph 1. Since the novel serves only as a lead-in, the passage's real theme follows it, in the second sentence of the same paragraph: the book's significance lies in having foreshadowed the ethical questions to be raised by technology, which matches option B, that the novel 'involves some concerns raised by AI today.' In that sentence, 'remarkable work' refers to the novel Frankenstein, 'ethical questions' corresponds to 'some concerns' in B, and 'raised by technologies' matches 'raised by AI' in B. Option A, that the novel 'fascinates AI scientists all over the world,' has no basis in the text; it is a distractor built on 'fascinated' in the second sentence of paragraph 3, far from the relevant location. Option C, that it 'has remained popular for over 200 years,' merely echoes 'two centuries' in the first sentence of paragraph 1 but is not the theme the novel is meant to introduce. Option D, that it 'has sparked serious ethical controversies,' distorts the original wording: the text says the novel would 'foreshadow' the ethical questions technology would raise, not 'spark' serious controversy. Therefore B is the answer. The passage is taken from an article in The Christian Science Monitor of July 2, 2018, originally titled 'AI Can Have Values If Not a Conscience.' It opens by citing Mary Shelley's Frankenstein to introduce its topic, artificial intelligence, and goes on to discuss the problems AI poses: the key ethical issues surrounding AI research have always been with us, and there is still a long way to go before we can ensure that the thinking of intelligent machines reflects humanity's highest values without crossing ethical lines.
Multiple-Choice Question
In David Eagleman's opinion, our current knowledge of consciousness ______
【Correct Answer】
A
【Explanation】Detail question. The stem's 'David Eagleman' points to the first sentence of paragraph 4: 'Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman...' This matches option A, that our current knowledge of consciousness 'is too limited for us to reproduce it,' so A is correct. Option B, that it 'would misguide the building of robots,' distorts the original: the second sentence of paragraph 4 says only that there are no good theories explaining how such a machine could ever be built, not that our current knowledge would mislead robot-making. Option C, that it 'helps explain artificial intelligence,' is not mentioned anywhere in the passage. Option D, that it 'inspires popular sci-fi TV series,' misattributes material, borrowing 'sci-fi TV series' from the second sentence of paragraph 3 as a distractor. Therefore A is the answer.
Multiple-Choice Question
The solution to the ethical issues brought by autonomous vehicles ______
【Correct Answer】
C
【Explanation】Detail question. The stem's 'ethical issues' and 'autonomous vehicles' point to the first two sentences of paragraph 5. Since the stem asks about the solution to the ethical problems raised by autonomous vehicles, read on to the fifth and sixth sentences: 'AI "vision" today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem.' It follows that the problem remains hard to solve for now, so C is correct. Option A, that it 'draws little public attention,' and option D, that it 'arouses great curiosity,' are both unsupported by the passage. Option B, that a solution 'can hardly ever be found,' is too absolute: the passage implies solutions exist but are difficult to achieve at present. Therefore C is the answer.
Multiple-Choice Question
The author's attitude toward Google's pledges is one of ______
【Correct Answer】
D
【Explanation】Attitude question. The stem's 'Google's pledges' points to paragraph 7. The author's attitude is not expressed there, so move to the first sentence of paragraph 8: 'While the statement is vague, it represents one starting point.' 'The statement' refers back to Google's pledges, and the sentence means that although the pledge is vague, it marks a starting point. The author therefore views Google's pledges approvingly, so D, 'affirmation,' is correct. Moreover, by the standard technique for attitude questions, C, 'contempt,' can be eliminated at once. A, 'respect,' is not reflected in the text, and B, 'skepticism,' runs counter to the passage's tone and can be ruled out. Therefore D is the answer.
Multiple-Choice Question
Which of the following would be the best title for the text? ______
【Correct Answer】
B
【Explanation】Main-idea question. The passage's central point can be located in paragraph 2: 'Today the rapid growth of artificial intelligence (AI) raises fundamental questions: "What is intelligence, identity, or consciousness? What makes humans humans?"' The final paragraph also returns to the ethical questions bound up with intelligence, identity, and consciousness, so B is correct. Option A, that 'tech giants are controlling the future of AI,' has no basis in the text. Option C, 'Frankenstein, a novel that foretold the age of AI,' reverses ends and means: the novel is mentioned at the start only to introduce the passage's theme and is not itself the focus. Option D, that 'AI will become a killer once out of control,' overgeneralizes: 'out of control' appears only in the passage's last sentence, 'Only then will they be useful servants and not Frankenstein's out-of-control monster,' so D is wrong. Therefore B is the answer.