Reading Comprehension
Passage One
Questions 46 to 50 are based on the following passage.
As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.
Russell argues that as robots take on more complicated tasks, it’s necessary to translate our morals into AI language.
For example, if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. "You would want that robot preloaded with a good set of values," said Russell.
Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn’t think that’s the kind of thing a properly brought-up person would do.
It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules.
Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.
The biggest concern with robots going against human values is that human beings fail to do sufficient testing and they've produced a system that will break some kind of taboo.
One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation.
If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps, and ask for directions from a human. If we humans aren't quite sure about a decision, we go and ask somebody else.
The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.
Multiple Choice Question
What does the author say about the threat of robots?
[Correct Answer]
C
[Answer Analysis] Using the key phrase in the question stem, "the threat of robots," the answer can be located in the first sentence of the first paragraph.
The first sentence of the first paragraph states that as artificial intelligence becomes increasingly sophisticated, more and more people worry that robots could become a threat. The second sentence then adds that, according to computer science professor Russell, this danger can be avoided if we figure out how to turn human values into programmable code. Turning values into computer code is what the second paragraph describes as translating our morals into AI language, so C is the correct answer. Option A is not mentioned in the passage and can be ruled out directly. Option B is wrong because the passage says that people grow increasingly worried that robots will become a threat as high-tech machines emerge, not that the threat itself will arrive along with high-tech machines, so it is ruled out. Option D states the opposite of what the passage expresses and can also be ruled out.
Multiple Choice Question
What would we think of a person who invades our personal space according to the author?
[Correct Answer]
D
[Answer Analysis] Using the key phrase in the question stem, "personal space," the answer can be located in the last sentence of the fourth paragraph.
The last sentence of the fourth paragraph states: obviously there are cultural differences, but if you were talking to another person and they came up close into your personal space, you would think that is not the kind of thing a well-brought-up person would do. Therefore D is the correct answer. Note that the original sentence, "you wouldn't think that's the kind of thing a properly brought-up person would do," can be hard to parse; it becomes clear once you move the "not" from the main clause into the following clause: the sentence effectively means "that's the kind of thing a properly brought-up person would not do." The phrase "a properly brought-up person would not (do)" corresponds to "ill-bred" in the answer option: "bred" is the past participle of "breed" (to raise), "ill" means "bad," and "ill-bred" means "badly raised; poorly mannered."
Multiple Choice Question
How do robots learn human values?
[Correct Answer]
C
[Answer Analysis] Using the key phrase in the question stem, "robots learn human values," the answer can be located in the first sentence of the sixth paragraph.
The first sentence of the sixth paragraph states that robots could also learn values by drawing patterns from large sets of data on human behavior. From this we know that robots can learn human values by learning patterns of human behavior, so C is the correct answer. Options A and B are not mentioned in the passage and can be ruled out. Option D is wrong because the passage does not say this learning is done through "imitation," so it is also ruled out.
Multiple Choice Question
What will a well-programmed robot do when facing an unusual situation?
[Correct Answer]
B
[Answer Analysis] Using the key phrase in the question stem, "facing an unusual situation," the answer can be located in the eighth paragraph.
The eighth paragraph states that one simple check would be to program a robot to confirm the correct course of action with a human when it encounters an unusual situation. The ninth paragraph explains this further with an example: if the robot is unsure whether an animal is suitable for the microwave, it should stop, send out beeps, and ask a human what to do. From this we know that when a robot faces an unusual situation, it should stop and ask for a human's opinion, so B is the correct answer.
Multiple Choice Question
What is most difficult to do when we turn human values into a programmable code?
[Correct Answer]
A
[Answer Analysis] Using the key phrases in the question stem, "most difficult to do" and "turn human values into a programmable code," the answer can be located in the first sentence of the last paragraph.
The first sentence of the last paragraph states that the most difficult step in programming values will be deciding exactly what we believe is moral and how to create a set of ethical rules. From this we know that the hardest part of turning values into code is determining what the moral rules actually are and how to establish such a set of rules. Comparing the options, A is correct. The other three options are not mentioned in the passage and can all be ruled out.