Should autonomous vehicles be programmed to choose who they kill when they crash? And who gets access to the code that determines those decisions?
Google's prototype self-driving car at the Google campus in Mountain View, California. Photograph: Tony Avelar/AP
The Trolley Problem is an ethical brainteaser that's been entertaining philosophers since it was posed by Philippa Foot in 1967:
A runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unaware, is standing. Pull the lever and you save the five but kill the one: what is the ethical course of action?
The problem has spawned many variants over the years, including one in which you must choose between a trolley killing five innocents or personally shoving into its path a man fat enough to stop the train (but not to survive the impact); another in which the fat man is the villain who tied the innocents to the track in the first place; and so on.
Now it's found a fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?
I can't count the number of times I've heard this question posed as chin-stroking, far-seeing futurism, and it never fails to infuriate me. Bad enough that the formulation is a shallow problem masquerading as a deep one; worse still is the way it masks a deeper, more significant question.
Here's a different way of thinking about the problem: if you wanted to design a car that intentionally murdered its driver under certain circumstances, how would you stop the driver from altering its programming to guarantee that their own property would never intentionally murder them?
There's an obvious answer, which is the iPhone model. Design the car so that it only accepts software that's been signed by the Ministry of Transport (or the manufacturer), and make it a felony to teach people how to override the lock. This is the current statutory landscape for iPhones, games consoles and many other devices that are larded with digital locks, often known by the trade-name "DRM". Laws like the US Digital Millennium Copyright Act (1998) and directives like the EUCD (2001) prohibit removing digital locks that restrict access to
copyrighted works, and also punish people who disclose any information that might help in removing the locks, such as vulnerabilities in the device.
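The signed-software model described above can be sketched in a few lines. This is a toy illustration, not any manufacturer's actual scheme: a real system would use asymmetric code-signing (the private key held by the vendor, the public key baked into the car's boot ROM), and all of the names below are invented. HMAC stands in for the signature so the sketch runs with the standard library alone.

```python
import hashlib
import hmac

# Hypothetical signing key. In a real scheme this would be an asymmetric
# key pair: the private half held by the manufacturer (or the Ministry of
# Transport), the public half burned into the car's bootloader.
VENDOR_KEY = b"ministry-of-transport-secret"

def sign_firmware(image: bytes) -> bytes:
    """What the vendor's build server would do to an official release."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def accept_update(image: bytes, signature: bytes) -> bool:
    """What the car's bootloader would do: refuse to run any image whose
    signature doesn't verify against the baked-in key."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

official = b"brake_controller v2.1"
sig = sign_firmware(official)
print(accept_update(official, sig))                 # True: vendor-signed build
print(accept_update(b"owner-modified build", sig))  # False: the lock refuses it
```

The owner's modified build is rejected not because it is unsafe, but because it is unsigned; that is the whole of the "iPhone model", and the laws above make it a crime to explain how to get around it.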
There's a strong argument for this. The programming in autonomous vehicles will be in charge of a high-speed, moving object that inhabits public roads, amid soft and fragile humans. Tinker with your car's brains? Why not perform amateur brain surgery on yourself first?
But this obvious answer has an obvious problem: it doesn't work. Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of digital lock rules isn't to keep people from reconfiguring their devices -- it's to ensure that they have to do so without the help of a legitimate business or product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002: it wasn't hard to unlock your phone. You could download software from the net to do it, or ask someone who ran an illegal jailbreaking business. Now that it's clearly legal, you can have your phone unlocked at the newsagent's or even the dry-cleaner's.
If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.
But even if we could lock cars' configurations, we shouldn't. A digital lock creates a zone in a computer that even its owner can't enter. For the lock to work, its associated files must be invisible to the owner. When the owner asks the operating system for a list of files in the lock's directory, it must lie and omit those files (because otherwise the user could delete or replace them). When the owner asks the operating system to list all running programs, the lock program has to be omitted (because otherwise the user could terminate it).
All computers have flaws. Even software that has been used for years, whose source code has been viewed by thousands of programmers, will have subtle bugs lurking in it. Security is a process, not a product. Specifically, it is the process of identifying bugs and patching them before your adversary identifies them and exploits them. Since you can't be assured that this will happen, it's also the process of discovering when your adversary has found a vulnerability before you and exploited it, rooting the adversary out of your system and repairing the damage they did.
When Sony-BMG covertly infected hundreds of thousands of computers with a digital lock designed to prevent CD ripping, it had to hide its lock from anti-virus software, which correctly identified it as a program that had been installed without the owner's knowledge and that ran against the owner's wishes. It did this by changing its victims' operating systems to render them blind to any file that started with a special, secret string of letters: "$sys$." As soon as this was discovered, other malware writers took advantage of it: when their programs landed on computers that Sony had compromised, the program could hide under Sony's cloak, shielded from anti-virus programs.
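The cloaking trick is simple enough to model in a few lines. The real rootkit patched kernel-level file-enumeration calls on Windows; here a wrapper around `os.listdir` stands in for that hook, and the filenames are invented for illustration. The point is the one above: any file adopting the magic prefix vanishes from view, whoever wrote it.

```python
import os
import tempfile

# Toy model of the Sony-BMG cloak: the rootkit hooked the OS's
# file-enumeration calls and silently dropped any name starting
# with "$sys$". This wrapper stands in for the kernel patch.
CLOAK_PREFIX = "$sys$"

def cloaked_listdir(path: str) -> list[str]:
    """A file listing as the compromised OS would report it."""
    return [name for name in os.listdir(path)
            if not name.startswith(CLOAK_PREFIX)]

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "song.mp3"), "w").close()
    open(os.path.join(d, "$sys$drmlock.dll"), "w").close()  # the lock's own file
    open(os.path.join(d, "$sys$evil.exe"), "w").close()     # later malware under the same cloak

    print(sorted(os.listdir(d)))  # the truth: all three files exist
    print(cloaked_listdir(d))     # what the owner -- and anti-virus -- sees
```

Note that the cloak can't distinguish Sony's lock from the malware that came later: hiding is a property of the filename prefix, not of who created the file.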
A car is a high-speed, heavy object with the power to kill its users and the people around it. A compromise in the software that allowed an attacker to take over the brakes, accelerator and steering (such as last summer's exploit against Chrysler's Jeeps, which triggered a 1.4m-vehicle recall) is a nightmare scenario. The only thing worse would be such an exploit against a car designed to have no user override -- designed, in fact, to treat any attempt by the vehicle's user to redirect its programming as a selfish attempt to dodge the Trolley Problem's cold equations.
Whatever problems we will have with self-driving cars, they will be worsened by designing them to treat their passengers as adversaries.
That has profound implications beyond the hypothetical silliness of the Trolley Problem. The world of networked equipment is already governed by a patchwork of "lawful interception" rules requiring devices to have some sort of back door that allows the police to monitor them. These back doors have been the source of grave problems in computer security: the 2011 attack by the Chinese government on the Gmail accounts of suspected dissident activists was executed by exploiting lawful interception, as was the NSA's wiretapping of the Greek government during the 2004 Olympic bidding process.
Despite these problems, law enforcement wants more back doors. The new crypto wars, being fought in the UK through Theresa May's "Snooper's Charter", would force companies to weaken the security of their products to make it possible to surveil their users.
It's likely that we'll get calls for a lawful interception capability in self-driving cars: the power for the police to send a signal to your car to force it to pull over. This will have all the problems of the Trolley Problem and more: an in-built capability to drive a car in a way that its passengers object to is a gift to any crook, murderer or rapist who can successfully impersonate a law enforcement officer to the vehicle -- not to mention the use of such a facility by the police of governments we view as illegitimate -- say, Bashar al-Assad's secret police, or the self-appointed police officers in Isis-controlled territories.
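Why such a facility is a gift to impersonators can be made concrete. Suppose, purely hypothetically, the pull-over hook authenticated callers with a credential shared across every patrol car -- the usual shape of a mandated back door. The car has no way to tell a genuine officer from anyone who has learned the secret, and by design the passengers cannot override the result. All names below are invented for illustration.

```python
# Hypothetical "lawful interception" hook in a self-driving car.
# A single credential shared with all police vehicles is the classic
# back-door design flaw: once it leaks, it works against every car.
POLICE_CREDENTIAL = "badge-1234"

def handle_pullover(credential: str) -> str:
    """React to a remote pull-over command. The passengers, by design,
    have no override -- that is the whole point of the facility."""
    if credential == POLICE_CREDENTIAL:
        return "pulling over"
    return "ignoring"

print(handle_pullover("badge-1234"))   # a genuine officer...
print(handle_pullover("badge-1234"))   # ...or anyone who learned the secret
print(handle_pullover("wrong-badge"))  # only outsiders without it are refused
```

The cryptography could be made fancier, but the structural problem survives any implementation: a capability that overrides the passenger on a third party's say-so is only as trustworthy as everyone who can ever present that say-so.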
That's the thorny Trolley Problem, and it gets thornier: the major attraction of autonomous vehicles for city planners is the possibility that they'll reduce the number of cars on the road, by changing the norm from private ownership to a kind of driverless Uber. Uber can even be seen as a dry-run for autonomous, ever-circling, point-to-point fleet vehicles in which humans stand in for the robots to come -- just as globalism and competition paved the way for exploitative overseas labour arrangements that in turn led to greater automation and the elimination of workers from many industrial processes.
If Uber is a morally ambiguous proposition now that it's in the business of exploiting its workforce, that ambiguity will not vanish when the workers go. Your relationship to a car you ride in, but do not own, makes all the problems mentioned above even harder. You won't have the right to change (or even monitor, or certify) the software in an Autonom-uber. It will be designed to let third parties (the fleet's owner) override it. It may have a user override (Tube trains have passenger-operated emergency brakes), possibly mandated by the insurer, but you can just as easily see how an insurer would prohibit such a thing altogether.
Forget trolleys: the destiny of self-driving cars will turn on labour relationships, surveillance capabilities, and the distribution of capital wealth.