
AI has already "taught itself" — so what should humans do?

31 October 2017

Elon Musk once described the sensational advances in artificial intelligence as “summoning the demon”. Boy, how the demon can play Go.

The AI company DeepMind announced last week it had developed an algorithm capable of excelling at the ancient Chinese board game. The big deal is that this algorithm, called AlphaGo Zero, is completely self-taught. It was armed only with the rules of the game — and zero human input.

AlphaGo, its predecessor, was trained on data from thousands of games played by human competitors. The two algorithms went to war, and AGZ triumphed 100-nil. In other words — put this up in neon lights — disregarding human intellect allowed AGZ to become a supreme exponent of its art.

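To make the "only the rules, zero human input" idea concrete: AlphaGo Zero's actual method pairs deep neural networks with Monte Carlo tree search, but the core loop — an agent improving purely by playing against itself — can be sketched on a toy game. The snippet below is a hypothetical illustration (the game, names, and parameters are my own, not DeepMind's code): a tabular agent that teaches itself single-pile Nim from self-play outcomes alone.

```python
import random

# Toy sketch only: a self-taught agent for single-pile Nim
# (take 1-3 stones per turn; whoever takes the last stone wins).
# It starts knowing nothing but the legal moves.

ACTIONS = (1, 2, 3)

def legal_moves(stones):
    """Moves allowed by the rules -- the only knowledge the agent starts with."""
    return [a for a in ACTIONS if a <= stones]

def train(n_stones=10, episodes=50_000, eps=0.2, seed=0):
    """Self-play: one value table controls both players and learns from outcomes."""
    rng = random.Random(seed)
    Q, visits = {}, {}  # Q[(stones, move)] = mean return for the player to move
    for _ in range(episodes):
        stones, trace = n_stones, []
        while stones > 0:
            moves = legal_moves(stones)
            if rng.random() < eps:                 # explore a random move
                move = rng.choice(moves)
            else:                                  # exploit current knowledge
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            trace.append((stones, move))
            stones -= move
        ret = 1.0  # the player who took the last stone won this game
        for s, m in reversed(trace):
            visits[(s, m)] = visits.get((s, m), 0) + 1
            q = Q.get((s, m), 0.0)
            Q[(s, m)] = q + (ret - q) / visits[(s, m)]  # running average
            ret = -ret  # players alternate, so flip the sign each ply
    return Q

def best_move(Q, stones):
    return max(legal_moves(stones), key=lambda m: Q.get((stones, m), 0.0))
```

After enough self-play games, the agent rediscovers the known optimal strategy for this game (leave your opponent a multiple of four stones) without ever observing a human player — the same property, in miniature, that lets AGZ discard human game records entirely.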
While DeepMind is the outfit most likely to feed Mr Musk’s fevered nightmares, machine autonomy is on the rise elsewhere. In January, researchers at Carnegie Mellon University unveiled an algorithm capable of beating the best human poker players. The machine, called Libratus, racked up nearly $2m in chips against top-ranked professionals of Heads-Up No-Limit Texas Hold ’em, a challenging version of the card game. Flesh-and-blood rivals described being outbluffed by a machine as “demoralising”. Again, Libratus improved its game by detecting and patching its own weaknesses, rather than borrowing from human intuition.

AGZ and Libratus are one-trick ponies but technologists dream of machines with broader capabilities. DeepMind, for example, declares it wants to create “algorithms that achieve superhuman performance in the most challenging domains with no human input”. Once fast, deep algorithms are unshackled from the slow, shallow disappointment of human intellect, they can begin crunching problems that our own lacklustre species has not confronted. Rather than emulating human intelligence, the top tech thinkers toil daily to render it unnecessary.

For that reason, we might one day look back on AGZ and Libratus as baby steps towards the Singularity, the much-debated point at which AI becomes super-intelligent, able to control its own destiny without recourse to human intervention. The most dystopian scenario is that AI becomes an existential risk.

Suppose that super-intelligent machines calculate, in pursuit of their programmed goals, that the best course of action is to build even cleverer successors. A runaway iteration takes hold, racing exponentially into fantastical realms of calculation.

One day, these goal-driven paragons of productivity might also calculate, without menace, that they can best fulfil their tasks by taking humans out of the picture. As others have quipped, the most coldly logical way to beat cancer is to eliminate the organisms that develop it. Ditto for global hunger and climate change.

These are riffs on the paper-clip thought experiment dreamt up by philosopher Nick Bostrom, now at the Future of Humanity Institute at Oxford university. If a hyper-intelligent machine, devoid of moral agency, was programmed solely to maximise the production of paper clips, it might end up commandeering all available atoms to this end. There is surely no sadder demise for humanity than being turned into office supplies. Professor Bostrom’s warning articulates the capability caution principle, a well-subscribed idea in robotics that we cannot necessarily assume the upper capabilities of AI.

It is of course pragmatic to worry about job displacement: many of us, this writer included, are paid for carrying out a limited range of tasks. We are ripe for automation. But only fools contemplate the more distant future without anxiety — when machines may out-think us in ways we do not have the capacity to imagine.

The writer is a science commentator
