English Listening Collection | Talk MP3 + Transcript: How Humans and AI Can Work Together to Create Better Businesses

https://online2.tingclass.net/lesson/shi0529/10000/10387/tedyp432.mp3

The TED audio channel on Tingclass.net features MP3 audio of TED talks together with bilingual transcripts for English learners. This article presents the MP3 and transcript of the talk "How humans and AI can work together to create better businesses." We hope you enjoy it!

[Speaker and Introduction] Sylvain Duranton

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business.

[Talk Topic] How humans and AI can work together to create better businesses

[Subtitles]

Translator: 奕含 董    Reviewer: Yolanda Zhang

Let me share a paradox. For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever. Why? Because AI operates just like bureaucracies.

The essence of bureaucracy is to favor rules and procedures over human judgment. And AI decides solely based on rules -- many rules, inferred from past data, but only rules. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy -- I call it "algocracy" -- where AI will take more and more critical decisions by the rules, outside of any human control. Is there a real risk? Yes.

I'm leading a team of 800 AI specialists. We have deployed over 100 customized AI solutions for large companies around the world. And I see too many corporate executives behaving like bureaucrats from the past. They want to take costly, old-fashioned humans out of the loop and rely only upon AI to take decisions. I call this the "human-zero mindset." And why is it so tempting? Because the other route, "Human plus AI," is long, costly and difficult. Business teams, tech teams, data-science teams have to iterate for months to craft exactly how humans and AI can best work together. Long, costly and difficult. But the reward is huge.

A recent survey from BCG and MIT shows that 18 percent of companies in the world are pioneering AI, making money with it. Those companies focus 80 percent of their AI initiatives on effectiveness and growth, taking better decisions -- not replacing humans with AI to save costs.

Why is it important to keep humans in the loop? Simply because, left alone, AI can do very dumb things. Sometimes with no consequences, like in this tweet. "Dear Amazon, I bought a toilet seat. Necessity, not desire. I do not collect them, I'm not a toilet-seat addict. No matter how temptingly you email me, I am not going to think, 'Oh, go on, then, one more toilet seat, I'll treat myself.' " (Laughter)

Sometimes, with more consequence, like in this other tweet. "Had the same situation with my mother's burial urn."

(Laughter)

"For months after her death, I got messages from Amazon, saying, 'If you liked that ...' "

“在她去世后的几个月里,亚马逊给我发的邮件都是‘根据你的购物历史,你可能喜欢…(骨灰盒)’”

Sometimes with worse consequences. Take an AI engine rejecting a student application for university. Why? Because it has "learned," on past data, characteristics of students that will pass and fail. Some are obvious, like GPAs. But if, in the past, all students from a given postal code have failed, it is very likely that AI will make this a rule and will reject every student with this postal code, not giving anyone the opportunity to prove the rule wrong.
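
To make this failure mode concrete, here is a minimal sketch of how a naive rule learner can turn a historical coincidence into a blanket rejection rule. The data, field names and rule-induction logic are all hypothetical, not any real admissions system:

```python
# Hypothetical sketch of "rules inferred from past data, but only rules":
# if every past applicant from one postal code failed, a naive learner
# hardens that coincidence into an automatic rejection.

past_students = [
    {"gpa": 3.8, "postal_code": "75001", "passed": True},
    {"gpa": 3.1, "postal_code": "75001", "passed": True},
    {"gpa": 3.6, "postal_code": "93200", "passed": False},
    {"gpa": 2.9, "postal_code": "93200", "passed": False},
]

def learn_postal_rule(history):
    """Mark a postal code as 'always fails' if no past student from it passed."""
    outcomes = {}
    for student in history:
        outcomes.setdefault(student["postal_code"], []).append(student["passed"])
    return {code: not any(passed) for code, passed in outcomes.items()}

always_fails = learn_postal_rule(past_students)

def decide(applicant):
    # The learned rule fires before any other signal (GPA, essays) is read,
    # so no applicant from "93200" ever gets to prove the rule wrong.
    if always_fails.get(applicant["postal_code"], False):
        return "reject"
    return "review"

print(decide({"gpa": 4.0, "postal_code": "93200"}))  # -> reject
```

Nothing in the training data can overturn such a rule from the inside, which is exactly why the next point matters.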

And no one can check all the rules, because advanced AI is constantly learning. And if humans are kept out of the room, there comes the algocratic nightmare. Who is accountable for rejecting the student? No one, AI did. Is it fair? Yes. The same set of objective rules has been applied to everyone. Could we reconsider for this bright kid with the wrong postal code? No, algos don't change their mind.

We have a choice here. Carry on with algocracy, or decide to go to "Human plus AI." And to do this, we need to stop thinking tech first, and we need to start applying the secret formula. To deploy "Human plus AI," 10 percent of the effort is to code algos; 20 percent is to build tech around the algos -- collecting data, building UI, integrating into legacy systems. But 70 percent, the bulk of the effort, is about weaving together AI with people and processes to maximize real outcomes.

AI fails when cutting short on the 70 percent. The price tag for that can be small -- wasting many, many millions of dollars on useless technology. Does anyone care? Or real tragedies: 346 casualties in the recent crashes of two B-737 aircraft, when pilots could not interact properly with a computerized command system.

For a successful 70 percent, the first step is to make sure that algos are coded by data scientists and domain experts together. Take health care for example. One of our teams worked on a new drug with a slight problem. When taking their first dose, some patients, very few, have heart attacks. So, all patients, when taking their first dose, have to spend one day in hospital, for monitoring, just in case. Our objective was to identify patients who were at zero risk of heart attacks, who could skip the day in hospital. We used AI to analyze data from clinical trials, to correlate ECG signal, blood composition, biomarkers, with the risk of heart attack. In one month, our model could flag 62 percent of patients at zero risk. They could skip the day in hospital. Would you be comfortable staying at home for your first dose if the algo said so?

(Laughter)

Doctors were not. What if we had false negatives, meaning people who are told by AI they can stay at home, and die?

(Laughter)

There started our 70 percent. We worked with a team of doctors to check the medical logic of each variable in our model. For instance, we were using the concentration of a liver enzyme as a predictor, for which the medical logic was not obvious. The statistical signal was quite strong, but what if it was a bias in our sample? That predictor was taken out of the model. We also took out predictors for which experts told us they cannot be rigorously measured by doctors in real life. After four months, we had a model and a medical protocol. They both got approved by medical authorities in the US last spring, resulting in far less stress for half of the patients and better quality of life. And an expected upside on sales of over 100 million for that drug.
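
As a hedged illustration of the two safeguards just described -- expert vetting of predictors, and a "zero risk" cutoff that tolerates no false negatives on validation data -- here is a toy sketch. The predictor names, scores and labels are invented; this is not the team's actual pipeline:

```python
# Toy sketch (invented names and data) of the two safeguards described above:
# 1) domain experts veto predictors before the model is refit;
# 2) the "zero risk" cutoff is chosen so that no patient who actually had
#    a heart attack would have been flagged as safe to stay home.

candidate_predictors = ["ecg_qt_interval", "troponin_level",
                        "liver_enzyme_alt", "self_reported_stress"]

# Doctors' review: the liver enzyme had no obvious medical logic (possible
# sample bias); self-reported stress cannot be measured rigorously in practice.
expert_approved = {"ecg_qt_interval", "troponin_level"}
features = [p for p in candidate_predictors if p in expert_approved]
print("refit the model on:", features)

# Hypothetical validation outputs of the refit model:
risk_scores = [0.01, 0.02, 0.40, 0.03, 0.55, 0.02]
had_event   = [False, False, True, False, True, False]

# Largest cutoff with zero false negatives on this validation set:
cutoff = min(s for s, e in zip(risk_scores, had_event) if e)
zero_risk = [s < cutoff for s in risk_scores]
print(f"{sum(zero_risk)}/{len(zero_risk)} patients could skip the hospital day")
```

Zero false negatives on a validation set is of course not a guarantee in deployment, which is why the medical protocol mattered as much as the model.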

Seventy percent -- weaving AI together with teams and processes -- also means building powerful interfaces for humans and AI to solve the most difficult problems together. Once, we got challenged by a fashion retailer: "We have the best buyers in the world. Could you build an AI engine that would beat them at forecasting sales? At telling how many high-end, light-green, men's XL shirts we need to buy for next year? At predicting better what will sell or not than our designers?" Our team trained a model in a few weeks, on past sales data, and a competition was organized with the human buyers. Result? AI wins, reducing forecasting errors by 25 percent. Human-zero champions could have tried to implement this initial model and create a fight with all human buyers. Have fun. But we knew that human buyers had insights on fashion trends that could not be found in past data.

There started our 70 percent. We went for a second test, where human buyers were reviewing quantities suggested by AI and could correct them if needed. Result? Humans using AI ... lose. Seventy-five percent of the corrections made by a human were reducing accuracy.
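
That 75-percent figure corresponds to a simple audit one can state in code: a correction "hurts" when the buyer-adjusted quantity lands farther from actual sales than the AI forecast it replaced. A minimal sketch with made-up numbers:

```python
# Sketch of auditing human overrides (made-up numbers): a correction hurts
# when it ends up farther from actual sales than the forecast it replaced.

ai_forecast    = [100, 80, 120, 60, 90]   # units suggested by the model
human_override = [120, 75, 150, 40, 95]   # buyer-corrected quantities
actual_sales   = [105, 74, 118, 58, 96]

hurt = sum(
    abs(corrected - actual) > abs(forecast - actual)
    for forecast, corrected, actual in zip(ai_forecast, human_override, actual_sales)
)
print(f"{hurt}/{len(ai_forecast)} corrections reduced accuracy")
```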

Was it time to get rid of human buyers? No. It was time to recreate a model where humans would not try to guess when AI is wrong, but where AI would take real input from human buyers. We fully rebuilt the model and went away from our initial interface, which was, more or less, "Hey, human! This is what I forecast, correct whatever you want," and moved to a much richer one, more like, "Hey, humans! I don't know the trends for next year. Could you share with me your top creative bets?" "Hey, humans! Could you help me quantify those few big items? I cannot find any good comparables in the past for them." Result? "Human plus AI" wins, reducing forecast errors by 50 percent. It took one year to finalize the tool. Long, costly and difficult. But profits and benefits were in excess of 100 million in savings per year for that retailer.
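
A rough sketch of that interface shift, with invented names and data: buyer input flows into the forecast as structured signals (trend bets, quantities for items with no historical comparable) instead of overwriting the model's output afterwards:

```python
# Hedged sketch (invented names/data) of the rebuilt workflow: human input
# flows INTO the forecast rather than correcting its output post hoc.

def forecast(item, history, trend_bets, human_quantities):
    """Quantity to buy for next season."""
    if item not in history:
        # "I cannot find any good comparables in the past for them":
        # genuinely new items are quantified by the buyers directly.
        return human_quantities[item]
    baseline = sum(history[item]) / len(history[item])  # naive past-sales average
    # "Could you share with me your top creative bets?" -- a buyer's trend
    # bet scales the statistical baseline up or down.
    return baseline * trend_bets.get(item, 1.0)

history = {"light_green_xl_shirt": [90, 110, 100]}
trend_bets = {"light_green_xl_shirt": 1.3}        # buyers expect an uptick
human_quantities = {"new_statement_coat": 500}    # no comparable: human call

print(forecast("light_green_xl_shirt", history, trend_bets, human_quantities))  # 130.0
print(forecast("new_statement_coat", history, trend_bets, human_quantities))    # 500
```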

Seventy percent on very sensitive topics also means humans have to decide what is right or wrong, and define rules for what AI can do or not -- like setting caps on prices to prevent pricing engines [from charging] outrageously high prices to uneducated customers who would accept them. Only humans can define those boundaries -- there is no way AI can find them in past data.
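
The price cap is the simplest such boundary to write down; a minimal sketch with hypothetical numbers:

```python
# A human-defined ceiling clamps whatever the pricing engine proposes;
# no amount of past data encodes this boundary, so it is set by policy.

HUMAN_PRICE_CAP = 199.0  # chosen by humans, not learned

def safe_price(engine_price: float) -> float:
    return min(engine_price, HUMAN_PRICE_CAP)

print(safe_price(149.0))   # within bounds -> 149.0
print(safe_price(1249.0))  # outrageous proposal -> clamped to 199.0
```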

Some situations are in the gray zone. We worked with a health insurer who developed an AI engine to identify, among its clients, the people who are just about to go to hospital, in order to sell them premium services. The problem is, some prospects were called by the commercial team while they did not yet know they would have to go to hospital very soon. You are the CEO of this company. Do you stop that program? Not an easy question.

And to tackle this question, some companies are building teams, defining ethical rules and standards to help business and tech teams set limits between personalization and manipulation, customization of offers and discrimination, targeting and intrusion.

I am convinced that in every company, applying AI where it really matters has massive payback. Business leaders need to be bold and select a few topics, and for each of them, mobilize 10, 20, 30 people from their best teams -- tech, AI, data science, ethics -- and go through the full 10-, 20-, 70-percent cycle of "Human plus AI," if they want to land AI effectively in their teams and processes. There is no other way.

Citizens in developed economies already fear algocracy. Seven thousand were interviewed in a recent survey. More than 75 percent expressed real concerns on the impact of AI on the workforce, on privacy, on the risk of a dehumanized society. Pushing algocracy creates a real risk of severe backlash against AI within companies or in society at large. "Human plus AI" is our only option to bring the benefits of AI to the real world. And in the end, winning organizations will invest in human knowledge, not just AI and data. Recruiting, training, rewarding human experts. Data is said to be the new oil, but believe me, human knowledge will make the difference, because it is the only derrick available to pump the oil hidden in the data.

Thank you.

(Applause)