Deepfake Democracy: Here’s How Modern Elections Could Be Decided by Fake News
The emerging threat of deepfakes could have an unprecedented impact on this election cycle, raising serious questions about the integrity of democratic elections, policy-making and our society at large.
A new ethical agenda for AI in political advertising and content on online platforms is required. Given the cross-border nature of the problem, the agenda must be backed by global consensus and action.
Communities and individuals can also take action directly by setting higher standards for how to create and interact with political content online.
In a few months the United States will elect its 46th President. While some worry about whether campaigning and casting votes can be done safely during the COVID-19 pandemic, another question is just as critical: how many votes will be swayed by the manipulative influence of artificial intelligence?
Specifically, the emerging threat of deepfakes could have an unprecedented impact on this election cycle, raising serious questions about the integrity of elections, policy-making and our democratic society at large.
Understanding deepfakes
AI-powered deepfakes have the potential to bring troubling consequences for the US 2020 elections.
The technology that began as little more than a giggle-inducing gimmick for making homebrew mash-up videos has recently been supercharged by advances in AI.
Today, open-source software like DeepFaceLab and Faceswap allows virtually anyone with time and access to cloud computing to deploy sophisticated machine learning processes and graphical rendering without any prior development experience.
More worryingly, the technology itself is improving at such a rapid pace that experts predict deepfakes may soon be indistinguishable from real videos. The staggering results that AI can create today can be attributed to herculean leaps in a technique called generative adversarial networks (GANs), which enables neural networks to make the jump from mere perception to creation.
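For readers curious about the mechanics, the sketch below illustrates the adversarial idea in PyTorch: a generator learns to produce samples that a discriminator can no longer tell apart from real data. It is a toy illustration only, fitting a one-dimensional distribution rather than faces or video; the network sizes, learning rates and training length are arbitrary choices for demonstration, not the recipe used by tools such as DeepFaceLab.

```python
# Minimal GAN sketch: two small networks contest each other until the
# generator's output approximates the "real" data distribution.
# Toy example only -- real deepfake pipelines apply the same principle
# to images and video at vastly larger scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Real" data: samples from a Gaussian with mean 4 and std 1.25
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

batch = 64
for step in range(5000):
    # Train the discriminator: label real samples 1, generated samples 0
    real = real_samples(batch)
    fake = generator(torch.randn(batch, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to fool the discriminator into outputting 1
    fake = generator(torch.randn(batch, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should mimic the "real" distribution.
with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
    print(f"generated mean={samples.mean().item():.2f}, "
          f"std={samples.std().item():.2f}")
```

The same contest between creator and critic, scaled up to millions of images, is what lets modern deepfakes move from perception to convincing creation.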
As one can expect with viral technology, the number of deepfake videos is growing exponentially as the continuing democratization of AI and cloud computing makes the underlying processes more and more accessible.
A new infodemic?
As we have seen during the COVID-19 pandemic, the contagious spread of misinformation rarely requires more than a semblance of authority accompanying the message, no matter how pernicious or objectively unsafe the content may be to the audience.
Given how easily deepfakes can combine fake narratives and information with fabricated sources of authority, they have an unprecedented potential to mislead, misinform and manipulate, giving ‘you won’t believe your eyes’ a wholly new meaning.
In fact, according to a recent report published by the Brookings Institution, deepfakes are well on their way not only to distorting the democratic discourse but also to eroding trust in public institutions at large.
How can deepfakes become electoral weapons?
How exactly could deepfakes be weaponized in an election? To begin with, malicious actors could forge evidence to fuel false accusations and fake narratives. For example, subtle changes to how a candidate delivers an otherwise authentic speech could be used to call their character, fitness and mental health into question without most viewers knowing any better.
Deepfakes could also be used to create entirely new fictitious content, including controversial or hateful statements with the intention of playing upon political divisions, or even inciting violence.
Perhaps not surprisingly, deepfakes have already been leveraged in other countries to destabilize governments and political processes.
In Gabon, the military launched an ultimately unsuccessful coup after the release of an apparently fake video of leader Ali Bongo that suggested the President was no longer healthy enough to hold office.
In Malaysia, a video purporting to show the Economic Affairs Minister having sex generated considerable debate over whether it had been faked, causing reputational damage to the Minister.
In Belgium, a political group released a deepfake of the Belgian Prime Minister giving a speech that linked the COVID-19 outbreak to environmental damage and called for drastic action on climate change.
The truth may win
As of today, we are woefully ill-equipped to deal with deepfakes.
According to the Pew Research Center, almost two-thirds of the US population say that fake content creates a great deal of confusion about political reality. What is worse, even our best efforts to correct and fact-check fake content could ultimately serve only to strengthen the spread of fake narratives.
For AI and democracy to coexist, we must urgently secure a common understanding of what is true and create a shared environment for facts from which our diverging opinions can safely emerge.
What is most desperately needed is a new ethical agenda for AI in political advertising and content on online platforms. Given the cross-border nature of the problem, the agenda must be backed by global consensus and action.
Initiatives like the World Economic Forum’s Responsible Use of Technology, which bring tech executives together to discuss the ethical use of their platforms, are a strong start.
At a more local level, legislatures have started to follow California's initiative to ban deepfakes during elections, and even Facebook has joined the fight with its own ban on certain forms of manipulated content and a challenge to create technologies to spot them.
The future: fact or fiction?
Still, more can be done.
We do not necessarily need a technology or regulatory paradigm change in order to disarm deepfakes. Instead, communities and individuals can also take action directly by setting higher standards for how we create and interact with political content online ourselves.
In fact, unless voters themselves stand up for facts and truth in online discourse, it will be all but impossible to drive meaningful change, simply because of the inherent subjectivity of online platforms that puts reality at a disadvantage.
Whether we want it or not, deepfakes are here to stay. But November 2020 could mark the moment we take a collective stand against the threats AI poses before it's too late.