Reading Comprehension
As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges. But the more difficult challenges may have to do with ethics. Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.

Within the next few decades, our stores, streets and sidewalks will likely be crammed with robotic devices fetching and delivering goods of every variety. How do we ensure that they respect the unstated conventions that people unconsciously follow when navigating in crowds? A debate may erupt over whether we should share our turf with machines or banish them to separate facilities. Will it be 'Integrate Our Androids!' or 'Ban the Bots!'?

And far more serious issues are on the horizon. Should it be permissible for an autonomous military robot to select its own targets? The current consensus in the international community is that such weapons should be under 'meaningful human control' at all times, but even this seemingly sensible constraint is ethically muddled. The expanded use of such robots may reduce military and civilian casualties and avoid collateral damage. So how many people's lives should be put at risk while waiting for a human to review a robot's time-critical kill decision?

Even if we can codify our principles and beliefs algorithmically, that won't solve the problem. Simply programming intelligent systems to obey rules isn't sufficient, because sometimes the right thing to do is to break those rules.
Blindly obeying a posted speed limit of 55 miles an hour may be quite dangerous, for instance, if traffic is averaging 75, and you wouldn't want your self-driving car to strike a pedestrian rather than cross a double-yellow centerline. People naturally abide by social conventions that may be difficult for machines to perceive, much less follow. Finding the right balance between our personal interests and the needs of others—or society in general—is a finely calibrated human instinct, driven by a sense of fairness, reciprocity and common interest. Today's engineers, racing to bring these remarkable devices to market, are ill-prepared to design social intelligence into a machine. Their real challenge is to create civilized robots for a human world.