
Should autonomous vehicles be programmed to choose who they kill when they crash? And who gets access to the code that determines those decisions?
Google's prototype self-driving car at the Google campus in Mountain View, California. Photograph: Tony Avelar/AP

The Trolley Problem is an ethical brainteaser that's been entertaining philosophers since it was posed by Philippa Foot in 1967:

A runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unawares, is standing. Pull the lever, you save the five, but kill the one: what is the ethical course of action?

The problem has run many variants over time, including ones in which you have to choose between a trolley killing five innocents or personally shoving a man who is fat enough to stop the train (but not to survive the impact) into its path; a variant in which the fat man is the villain who tied the innocents to the track in the first place, and so on.

Now it's found a fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?

I can't count the number of times I've heard this question posed as chin-stroking, far-seeing futurism, and it never fails to infuriate me. Bad enough that this formulation is a shallow problem masquerading as deep, but worse still is the way in which this formulation masks a deeper, more significant one.

Here's a different way of thinking about this problem: if you wanted to design a car that intentionally murdered its driver under certain circumstances, how would you make sure that the driver never altered its programming so that they could be assured that their property would never intentionally murder them?

There's an obvious answer, which is the iPhone model: design the car so that it only accepts software that's been signed by the Ministry of Transport (or the manufacturer), and make it a felony to teach people how to override the lock. This is the current statutory landscape for iPhones, games consoles and many other devices that are larded with digital locks, often known by the trade name "DRM". Laws like the US Digital Millennium Copyright Act (1998) and directives like the EUCD (2001) prohibit removing digital locks that restrict access to copyrighted works, and also punish people who disclose any information that might help in removing the locks, such as vulnerabilities in the device.
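The locking scheme described above is ordinary code signing. A minimal sketch of the idea, using an HMAC as a stand-in for the asymmetric signature a real manufacturer would use (the key and the firmware strings here are invented for illustration, not any vendor's actual scheme):

```python
import hmac
import hashlib

# Stand-in for the manufacturer's signing key. A real scheme would use an
# asymmetric signature (e.g. Ed25519), so the device only holds a public key.
MANUFACTURER_KEY = b"demo-key-not-a-real-secret"

def sign_firmware(image: bytes) -> bytes:
    """What the manufacturer does before shipping an update."""
    return hmac.new(MANUFACTURER_KEY, image, hashlib.sha256).digest()

def accept_update(image: bytes, signature: bytes) -> bool:
    """What the locked device does: refuse any image that was not signed
    by the key holder -- including an image modified by the car's owner."""
    expected = hmac.new(MANUFACTURER_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

official = b"v2.1 braking profile"
modified = b"v2.1 braking profile, owner-tuned"
sig = sign_firmware(official)

assert accept_update(official, sig)      # the manufacturer's build installs
assert not accept_update(modified, sig)  # the owner's change is rejected
```

The owner's only way in is to break the verification itself, which is why the legal regime focuses on punishing people who explain how to do that.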

There's a strong argument for this. The programming in autonomous vehicles will be in charge of a high-speed, moving object that inhabits public roads, amid soft and fragile humans. Tinker with your car's brains? Why not perform amateur brain surgery on yourself first?

But this obvious answer has an obvious problem: it doesn't work. Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of digital-lock rules isn't to keep people from reconfiguring their devices -- it's just to ensure that they have to do so without the help of a legitimate business or product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002: it wasn't hard to unlock your phone. You could download software from the net to do it, or ask someone who operated an illegal jailbreaking business. But now that it's clearly legal, you can have your phone unlocked at the newsagent's or even the dry-cleaner's.

If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.

But even if we could lock cars' configurations, we shouldn't. A digital lock creates a zone in a computer's programming that even its owner can't enter. For it to work, the lock's associated files must be invisible to the owner. When they ask the operating system for a list of files in the lock's directory, it must lie and omit those files (because otherwise the user could delete or replace them). When they ask the operating system to list all the running programs, the lock program has to be omitted (because otherwise the user could terminate it).

All computers have flaws. Even software that has been used for years, whose source code has been viewed by thousands of programmers, will have subtle bugs lurking in it. Security is a process, not a product. Specifically, it is the process of identifying bugs and patching them before your adversary identifies them and exploits them. Since you can't be assured that this will happen, it's also the process of discovering when your adversary has found a vulnerability before you and exploited it, rooting the adversary out of your system and repairing the damage they did.

When Sony-BMG covertly infected hundreds of thousands of computers with a digital lock designed to prevent CD ripping, it had to hide its lock from anti-virus software, which correctly identified it as a program that had been installed without the owner's knowledge and that ran against the owner's wishes. It did this by changing its victims' operating systems to render them blind to any file that started with a special, secret string of letters: "$sys$." As soon as this was discovered, other malware writers took advantage of it: when their programs landed on computers that Sony had compromised, the program could hide under Sony's cloak, shielded from anti-virus programs.
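The cloaking trick is simple to simulate: the rootkit patched the directory-listing call so that any name beginning with the magic string never came back, which is exactly why unrelated malware could hide by adopting the same prefix. A toy model follows, with a plain dictionary standing in for the filesystem and a function standing in for the hooked Windows kernel call (the file names are invented):

```python
# A toy "filesystem": filename -> contents.
fake_fs = {
    "report.txt": "the owner's own document",
    "$sys$drm.dll": "Sony's digital lock",
    "$sys$payload.exe": "someone else's malware, hiding under the same cloak",
}

CLOAK_PREFIX = "$sys$"  # the magic string Sony's rootkit filtered out

def patched_listdir(fs: dict) -> list:
    """Stand-in for the hooked directory-listing call: any name starting
    with the magic prefix is silently dropped, so neither the owner nor an
    anti-virus scanner that relies on this call will ever see those files."""
    return sorted(name for name in fs if not name.startswith(CLOAK_PREFIX))

# Both the lock and the free-riding malware vanish from every listing.
assert patched_listdir(fake_fs) == ["report.txt"]
```

The filter doesn't know or care who created a `$sys$` file -- that indifference is the vulnerability the other malware writers exploited.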

A car is a high-speed, heavy object with the power to kill its users and the people around it. A compromise in the software that allowed an attacker to take over the brakes, accelerator and steering (such as last summer's exploit against Chrysler's Jeeps, which triggered a 1.4m vehicle recall) is a nightmare scenario. The only thing worse would be such an exploit against a car designed to have no user-override -- designed, in fact, to treat any attempt from the vehicle's user to redirect its programming as a selfish attempt to avoid the Trolley Problem's cold equations.

Whatever problems we will have with self-driving cars, they will be worsened by designing them to treat their passengers as adversaries.

That has profound implications beyond the hypothetical silliness of the Trolley Problem. The world of networked equipment is already governed by a patchwork of "lawful interception" rules requiring devices to have some sort of back door to allow the police to monitor them. These back doors have been the source of grave problems in computer security: the 2011 attack by the Chinese government on the Gmail accounts of suspected dissident activists was executed by exploiting lawful-interception facilities, as was the NSA's wiretapping of the Greek government during the 2004 Olympic bidding process.

Despite these problems, law enforcement wants more back doors. The new crypto wars, being fought in the UK through Theresa May's "Snooper's Charter", would force companies to weaken the security of their products to make it possible to surveil their users.

It's likely that we'll get calls for a lawful interception capability in self-driving cars: the power for the police to send a signal to your car to force it to pull over. This will have all the problems of the Trolley Problem and more: an in-built capability to drive a car in a way that its passengers object to is a gift to any crook, murderer or rapist who can successfully impersonate a law enforcement officer to the vehicle -- not to mention the use of such a facility by the police of governments we view as illegitimate -- say, Bashar al-Assad's secret police, or the self-appointed police officers in Isis-controlled territories.

That's the thorny Trolley Problem, and it gets thornier: the major attraction of autonomous vehicles for city planners is the possibility that they'll reduce the number of cars on the road, by changing the norm from private ownership to a kind of driverless Uber. Uber can even be seen as a dry-run for autonomous, ever-circling, point-to-point fleet vehicles in which humans stand in for the robots to come -- just as globalism and competition paved the way for exploitative overseas labour arrangements that in turn led to greater automation and the elimination of workers from many industrial processes.

If Uber is a morally ambiguous proposition now that it's in the business of exploiting its workforce, that ambiguity will not vanish when the workers go. Your relationship to the car you ride in, but do not own, makes all the problems mentioned even harder. You won't have the right to change (or even monitor, or certify) the software in an Autonom-uber. It will be designed to let third parties (the fleet's owner) override it. It may have a user override (Tube trains have passenger-operated emergency brakes), possibly mandated by the insurer, but you can just as easily see how an insurer would prohibit such a thing altogether.

Forget trolleys: the destiny of self-driving cars will turn on labour relationships, surveillance capabilities, and the distribution of capital wealth.
