Reading Comprehension
Henry Kissinger published an article in the June 2018 Atlantic Monthly detailing his belief that artificial intelligence (AI) threatens to be a problem for humanity, probably an existential one.
He joins Elon Musk, Bill Gates, Stephen Hawking and others who have come out to declare the dangers of AI. The difference is that, unlike those scientists and technologists, the former secretary of State speaks with great authority to a wider audience that includes policy makers and political leaders, and so could have a much greater influence.
And that's not a good thing. There's a widespread lack of precision in how we describe AI that is giving rise to significant apprehension about its use in self-driving cars, automated farms, drone airplanes and many other areas where it could be extremely useful. In particular, Kissinger commits the same error many people do when talking about AI: the so-called conflation error. In this case the error comes about when the success of AI programs in defeating humans at games such as chess and go is conflated with similar successes that might be achieved with AI programs used in supply chain management or claims adjustment or other, more futuristic areas.
But the two situations are very different. The rules of games like chess and go are prescriptive, somewhat complicated and never change. They are, in the context of AI, "well bounded." A book teaching chess or go written 100 years ago is still relevant today. Training an AI to play one of these games takes advantage of this "boundedness" in a variety of interesting ways, including letting the AI decide how it will play.
Now, however, imagine the rules of chess could change randomly at any time in any location: chess on Tuesdays in Chicago has one set of rules, but in Moscow a different set applies on Thursdays. Chess players in Mexico use a completely different board, one for each month of the year. In Sweden the role of each piece can be decided by a player even after the game starts. In a situation like this it's obviously impossible to write down a single set of rules that everyone can follow at all times in all locations.
AI is today being applied to business systems like claims and supply chains that, by their very nature, are unbounded. It is impossible to write down all the rules an AI has to follow when adjudicating an insurance claim or managing the supply chain, even for something as simple as bubblegum. The only way to train an AI to manage one of these is to feed it massive amounts of data on all the myriad processes and companies that make up an insurance claim or a simple supply chain. We then hope the AI can do the job—not just efficiently, but also ethically.
Multiple Choice 36. Kissinger's words exert greater influence because of ______.
[Correct Answer] C
[Explanation] Factual detail question. The key words point to Paragraph 2, which states that, unlike those scientists and technologists, the former secretary of State speaks with great authority to a wider audience that includes policy makers and political leaders, and so could have a much greater influence. "Prestige" is a synonymous substitute for "authority" in the passage, so C is the correct choice.
Multiple Choice 37. People are worried about AI's use in some important fields because ______.
[Correct Answer] D
[Explanation] Factual detail question. The key words point to Paragraph 3. "Worried about" in the question stem is a synonymous substitute for "apprehension" in the passage, and "fields" for "areas". The passage states that a widespread lack of precision in how we describe AI is giving rise to significant apprehension about its use in self-driving cars, automated farms, drone airplanes and many other areas, so D is the correct choice.
Multiple Choice 38. Training an AI to play chess or go makes full use of ______.
[Correct Answer] B
[Explanation] Factual detail question. The key words point to Paragraph 4, which states that training an AI to play one of these games takes advantage of this "boundedness" in a variety of interesting ways, including letting the AI decide how it will play. The "boundedness" refers to the fixed rules of the games, so B is the correct choice.
Multiple Choice 39. What can we learn from Paragraph 5?
[Correct Answer] A
[Explanation] Inference question. The key words point to Paragraph 5, which argues the point of Paragraph 4 from the opposite direction: it is impossible to write down a single set of rules that everyone can follow at all times in all locations, i.e. "unboundedness". So A is the correct choice.
Multiple Choice 40. To make AI manage better, the business department should ______.
[Correct Answer] C
[Explanation] Factual detail question. The key words point to Paragraph 6. "Make AI manage better" in the question stem corresponds to "train an AI to manage one of these" in the passage, which states that the only way is to feed it massive amounts of data on all the myriad processes and companies that make up an insurance claim or a simple supply chain. So C is the correct choice.