
演讲MP3+双语文稿:如何把互联网变成一个值得信赖的地方?

所属教程:TED音频


2022年05月07日

https://online2.tingclass.net/lesson/shi0529/10000/10387/tedyp110.mp3


【演讲者及介绍】Claire Wardle

克莱尔·沃德尔是用户生成内容和验证方面的专家,她的工作是帮助提高在线信息的质量。

【演讲主题】如何帮助把互联网变成一个值得信赖的地方?

【中英文字幕】

翻译者Nan Yang 校对者Jiasi Hao

00:01

No matter who you are or where you live, I'm guessing that you have at least one relative that likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos.

无论你是谁,住在哪里,我猜你们身边至少有一位喜欢转发那类电子邮件的亲戚。你们知道我说的是哪种邮件——就是那些带有可疑说法或阴谋论视频的邮件。

00:48

And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.

而且,如果你们像我一样花了大量时间研究错误信息,你们就会知道,这只是众多利用人们最深层恐惧和脆弱之处的例子之一。

01:11

Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.

每一天,在世界范围内,我们都能看到Ins上出现的大量新表情包在鼓励父母不要给孩子接种疫苗。我们看到YouTube上的新视频在解释说气候变化问题是个骗局。在所有平台上,我们看见了因为种族、宗教或性取向的不同,而妖魔化他人的无穷无尽的帖子。

01:32

Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous. This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through, where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.

欢迎来到我们这个时代的主要挑战之一。我们如何在维护一个以言论自由为核心的互联网的同时,也能确保正在传播的内容不会对我们的民主、我们的社区和我们的身心健康造成不可弥补的伤害?因为我们生活在信息时代,但是我们所有人都依赖的核心通货——信息——不再完全值得信赖,而且有时会显得非常危险。这部分"归功"于社交共享平台的迅猛发展,让我们可以在谎言和真相并存的信息流里滑屏浏览,但是没有任何传统的可信度信号。

02:12

And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.

天啊——我们在这方面使用的语言极其混乱。人们仍然沉迷于"假新闻"这个短语,尽管事实是,它毫无帮助,并且被用来描述一系列实际上非常不同的东西:谎言、谣言、恶作剧、阴谋、政治宣传……我非常希望我们可以停止使用这个已被世界各地左翼和右翼政客挪用、被当作武器来攻击自由和独立新闻媒体的短语。

02:40

(Applause)

(掌声)

02:45

Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.

因为我们比以往任何时候都更需要我们的专业新闻媒体。而且除此之外,大多数内容甚至都没有被伪装成新闻,而是以表情包、视频、社交帖子的形式存在。而且大部分内容不是假的,而是误导。我们倾向于专注在真假上。但是最大的担忧实际上是语境的武器化。因为最有效的虚假信息一直是具有真实内核的那些。

04:36

So I'd like to explain three interlocking issues that make this so complex and then think about some ways we can consider these challenges. First, we just don't have a rational relationship to information, we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others is far more effective. And besides, many of these companies, their business model is attached to attention, which means these algorithms will always be skewed towards emotion.

所以我想解释一下让这件事变得如此复杂的三个环环相扣的问题,然后再想想我们可以从哪些角度来应对这些挑战。首先,我们与信息之间并没有建立起理性的关系,而是一种感性的关系。并不是说有了更多的事实真相,一切就会好起来,因为决定我们能看见什么内容的算法,是被设计来奖励我们的情感反应的。而当我们恐惧时,过于简化的叙述、阴谋论式的解释,以及妖魔化他人的语言会有效得多。此外,这其中很多公司的商业模式都与关注度息息相关,这意味着这些算法总是会偏向情感。
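To make the point about attention-driven algorithms more concrete, here is a minimal sketch of a hypothetical feed ranker that orders posts purely by predicted engagement. Everything in it -- the Post fields, the weights, the engagement_score function -- is invented for illustration and does not describe any real platform's ranking system; it only shows why a score built from engagement signals tends to favour emotionally charged content.

```python
# Illustrative toy model only: a hypothetical feed ranker that scores posts by
# predicted engagement. Real platform ranking systems are far more complex and
# not public; this sketch just shows how optimizing for attention can favour
# content that provokes strong emotional reactions.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_clicks: float   # hypothetical model outputs in [0, 1]
    predicted_shares: float
    predicted_outrage: float  # stand-in for strong emotional reaction


def engagement_score(post: Post) -> float:
    # Weights are invented for illustration; any score built purely from
    # engagement signals rewards whatever provokes the strongest reaction.
    return (0.3 * post.predicted_clicks
            + 0.3 * post.predicted_shares
            + 0.4 * post.predicted_outrage)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first, regardless of accuracy or context.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("Calm, well-sourced explainer", 0.20, 0.10, 0.05),
        Post("Outrage-bait rumour with a kernel of truth", 0.35, 0.40, 0.90),
    ]
    for p in rank_feed(feed):
        print(round(engagement_score(p), 2), p.text)
```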

05:18

Second, most of the speech I'm talking about here is legal. It would be a different matter if I was talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. But people keep talking about taking down "problematic" or "harmful" content, but with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech. And even if we could decide which speech to take up or take down, we've never had so much speech. Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts. We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.

第二,我现在谈论的大多数言论是合法的。如果我在说的是儿童性虐待图片,或者是煽动暴力的内容,就是另外一回事。公开撒谎可以是完全合法的。但是人们一直在讨论撤下"有问题的"或"有害的"内容,却没有明确定义这些词到底指什么,包括马克·扎克伯格,他最近呼吁进行全球性监管来管理网络言论。而我的担心是,我们看见世界各地的政府正在仓促推出政策决定,而这些决定实际上可能给我们的言论带来更加严重的后果。而且,即使我们可以决定哪些言论该保留、哪些该撤下,我们也从来没有过像现在这么多的言论。每一秒,都有几百万条内容以不同的语言、基于上千种不同的文化背景,被世界各地的人上传。我们根本没有过有效的机制,无论是依靠人工还是技术手段,来审核如此规模的言论内容。

06:18

And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without trying. We're adding to the pollution.

然后第三,这些公司——谷歌,推特,脸书,WhatsApp——它们是广阔的信息生态系统的一部分。我们喜欢把所有责任都推到他们身上,但事实是,大众媒体和民选官员只要他们想,也可以在扩大谣言和阴谋上发挥同等作用。我们也一样,当我们漫不经心地转发分裂性或误导性的内容时,甚至没有费力。我们正在加剧这种“污染”。

06:45

I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that. And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.

我知道我们都在寻找简单的解决方法。但就是一个都没有。任何解决方案都必须以巨大的规模——互联网的规模——来推出,而且的确,这些平台已经习惯在那种级别的规模上运行。但是我们可以并应该允许他们来解决这些问题吗?他们当然在努力。但是我们大多数人都会同意,实际上,我们不希望跨国公司成为网上真理与公平的守护者。我也认为这些平台会同意这一点。而眼下,他们是在给自己的作业打分。他们喜欢告诉我们,他们推出的干预措施正在奏效,但是因为透明度报告是他们自己编写的,我们无法独立验证实际的情况。

07:26

(Applause)

(掌声)

07:29

And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.

我们也要清楚,大多数我们看到的变化只发生在记者进行了调查并找到了存在违反社区规则的偏见和内容的证据之后。所以是的,这些公司必须在这个过程中扮演重要角色,但是他们无法控制它。

07:47

So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology. And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.

那么政府呢?很多人相信全球监管是清理我们信息生态系统的最后希望。但是我看见的是正在努力跟上技术迅速变革的立法者。更糟的是,他们在黑暗中摸索工作,因为他们没有获取数据的权限来了解这些平台上正在发生些什么。更何况,我们会相信哪个政府来做这件事呢?我们需要全球的回应,不是国家的。

08:15

So the missing link is us. It's those people who use these technologies every day. Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do. So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.

所以缺少的环节是我们。是每天使用这些技术的那些人。我们能不能设计一个新的基础设施来支持高质量信息?我相信我们可以,关于我们实际上可以做什么,我已经有了一些想法。首先,如果我们认真考虑让公众参与进来,我们可以从维基百科汲取一些灵感吗?他们已经向我们展示了什么是可能的。是的,它并不完美,但是他们已经证明,凭借正确的结构、全球的视野和极高的透明度,可以建立起赢得大多数人信任的东西。因为我们必须找到一种方法,充分利用所有用户的集体智慧和经验。对于妇女、有色人种和代表性不足的群体来说尤其如此。你们猜怎么着?在仇恨和虚假信息方面,他们是专家,因为他们长期以来一直是这些信息攻势的目标。多年来,他们一直在发出警示,但是从来没有被听见。这必须改变。所以我们能否为信任创建一个维基百科?我们能否找到一种让用户真正提供见解的方法?他们可以对有难度的内容审核决定提出见解。当平台决定要推出新变更时,他们可以提出反馈。

09:28

Second, people's experiences with information are personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science. Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.

第二,人们对信息的体验是个性化的。我脸书上的新闻推荐与你们的非常不同。你们的 YouTube 推荐与我的也很不同。这使得我们无法实际检查大家看到的是什么信息。那么,我们是否可以设想为匿名数据建立某种集中式的开放存储库,并在设计之初就兼顾隐私与伦理问题?想象一下,如果我们建立一个由关心此事的公民组成的全球网络,他们愿意将自己的社交数据捐赠给科学研究,我们将能学到什么?因为我们实际上对仇恨和虚假信息给人们态度和行为带来的长期后果知之甚少。而在我们已有的了解中,大部分研究都是在美国进行的,尽管这是一个全球性问题。我们也需要为此努力。

10:16

And third, can we find a way to connect the dots? No one sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms.

第三点,我们能否找到方法把这些点连接起来?没有任何一个部门——更不用说单个非营利组织、初创企业或政府——能独自解决这个问题。但是世界各地有非常聪明的人们在应对这些挑战,他们来自新闻编辑部、民间社会组织、学术界和维权组织。你们在这里可以看见其中一些人。有些人正在建立内容可信度的指标。另一些人在做事实核查,从而让平台可以对虚假声明、视频和图像进行降权处理。

10:41

A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible. And educators around the world are testing different techniques for finding ways to make people critical of the content they consume. All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded.

我协助建立的一个非营利组织,名叫"初稿"(First Draft),正在与世界各地通常竞争激烈的新闻编辑室合作,以帮助他们建立调查性协作项目。丹尼·希利斯,一位软件架构师,正在设计一个叫做 The Underlay 的新系统,它将记录所有公开的事实陈述及其来源之间的关联,以便人们和算法可以更好地判断什么是可信的。而且世界各地的教育者在测试不同的技术,以找到能使人们对所看到内容产生批判的方法。所有的这些努力都很棒,但是他们埋头各自为战,而且很多都严重资金不足。
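To illustrate the idea of connecting public statements of fact to their sources, here is a minimal sketch in that spirit. It is not The Underlay's actual data model or API; every class, field and heuristic below is invented for illustration, under the assumption that a statement becomes easier to evaluate when its supporting sources are recorded alongside it.

```python
# Minimal sketch of recording statements of fact together with their sources,
# so that people and algorithms have something concrete to weigh. NOT The
# Underlay's real data model; all names, fields and heuristics are invented.
from dataclasses import dataclass, field


@dataclass
class Source:
    name: str
    url: str
    independent: bool  # e.g. not merely re-reporting another listed source


@dataclass
class Statement:
    claim: str
    sources: list = field(default_factory=list)

    def add_source(self, source: Source) -> None:
        self.sources.append(source)

    def credibility_hint(self) -> float:
        # Toy heuristic: more independent sources -> higher hint, capped at 1.0.
        independent = sum(1 for s in self.sources if s.independent)
        return min(1.0, independent / 3)


if __name__ == "__main__":
    stmt = Statement("Global average temperature has risen since 1900.")
    stmt.add_source(Source("Example dataset A", "https://example.org/a", True))
    stmt.add_source(Source("Example dataset B", "https://example.org/b", True))
    print(round(stmt.credibility_hint(), 2))
```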

11:18

There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems.

在这些公司内部也有成百上千的聪明人在努力,但是同样,这些努力让人感到不够连贯,因为他们正在为同样的问题建立不同的解决方案。

11:29

How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives? So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.

我们怎么才能找到一种方法,把人们同时聚集在同一个地点几天或几周,让他们可以真正从各自不同的角度共同解决这些问题?那么我们能做到吗?我们能否建立一种协调一致、雄心勃勃的应对措施,使其与问题的规模和复杂性相匹配?我真的认为我们可以。让我们一起重建我们的信息公域吧。

11:52

Thank you.

谢谢。

11:54

(Applause)

(掌声)
