
Bilingual News: How to Spot AI “Deepfake” Images
双语新闻:如何识别人工智能“深度伪造”图像

Course: 2023年BBC新闻听力


tingliketang

March 28, 2024

Fake photos, videos, and audio are spreading online as a result of the rise and misuse of artificial intelligence (AI) tools. And it is getting harder to tell what is real from what is not.
由于人工智能(AI)工具的兴起和滥用,虚假照片、视频和音频正在网上传播。而且越来越难以分辨真假。

Video and image generators like DALL-E, Midjourney, and OpenAI's Sora make it easy for people with little technical skill to create "deepfakes."
像DALL-E、Midjourney和OpenAI的Sora这样的视频和图像生成工具,让几乎没有技术技能的人也能轻松制作“深度伪造”内容。

The fake images might seem harmless. But they can be used to cheat people out of money, steal identities, spread propaganda, and unfairly influence elections.
这些假图像可能看起来无害。但它们可以用来骗取人们的钱财,窃取身份,传播宣传,不公平地影响选举。

How to tell it is a deepfake
如何判断它是深度伪造

Just a year ago, the technology was far from perfect and it was easier to tell that a photo had been created with AI. Fake images then showed clear errors, like hands with six fingers or eyeglasses with different shapes.
就在一年前,这项技术还远远不够完美,很容易看出一张照片是用人工智能生成的。那时的假图像会显示出明显的错误,比如长着六根手指的手,或形状不一的眼镜。

But as AI has improved, it has become a lot harder.
但随着人工智能的进步,辨别真假变得困难得多。

Henry Ajder is founder of the AI advising company Latent Space Advisory and a leading expert in generative AI. He said some widely shared advice — like looking for unnatural eye movements in people in deepfake videos ­— no longer holds.
Henry Ajder是人工智能咨询公司Latent Space Advisory的创始人,也是生成式人工智能领域的领先专家。他说,一些广为流传的建议——比如在深度造假视频中寻找人们不自然的眼球运动——不再成立。

Still, there are some things to look for, he said.
不过,他说,仍有一些东西需要寻找。

Ajder said a lot of AI deepfake photos, especially of people, have a "smoothing effect" that leaves skin looking very "polished." He warned, however, that AI can sometimes change the photos and remove the signs of AI creation.
Ajder说,很多人工智能深度伪造照片,尤其是人物照片,都有一种“平滑效果”,让皮肤看起来非常“光洁”。然而,他警告说,人工智能有时也会对照片再加工,消除人工智能生成的痕迹。

Look at shadows and lighting. Often the subject is clear and appears lifelike, but elements in the rest of the photo might not seem as real or polished.
注意阴影和光线。通常照片的主体清晰逼真,但照片其余部分的元素可能就没有那么真实或精致了。
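
The "smoothing effect" described above can be approximated numerically. The sketch below is a toy heuristic, not one of the tools mentioned in the article: it applies a discrete Laplacian to a small grayscale patch and measures the variance of the result. Real skin texture has high-frequency detail, so an unnaturally smooth, "polished" region scores close to zero. All the sample data is invented for illustration.

```python
# Toy heuristic: AI-"polished" skin lacks the high-frequency texture of real
# skin. A discrete Laplacian highlights that texture; a low variance of the
# Laplacian response suggests an unnaturally smooth region.
def laplacian_variance(gray):
    """gray: 2D list of grayscale values (0-255). Returns variance of the
    Laplacian over the interior pixels."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1] + gray[y][x+1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Invented patches: alternating values mimic texture; a constant patch
# mimics an over-smoothed region.
textured = [[(x + y) % 2 * 40 + 100 for x in range(8)] for y in range(8)]
smooth = [[120 for _ in range(8)] for _ in range(8)]

print(laplacian_variance(textured) > laplacian_variance(smooth))  # True
```

In practice a check like this would be run on skin regions cropped from a real image, and the decision threshold would need tuning on known real and fake examples.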

Look at the faces
看看这些脸

One of the most common deepfakes is exchanging one face for another. The practice is called face-swapping.
最常见的深度伪造之一是将一张脸换成另一张脸。这种做法被称为换脸。

Experts advise looking closely at the edges of the face. Does the facial skin color match the rest of the head or body? Are the edges of the face sharp or unclear?
专家建议仔细观察脸部边缘。面部皮肤的颜色与头部或身体的其他部分是否匹配?脸部边缘是清晰还是模糊?
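
The skin-color question above can be sketched as a comparison of average colors on either side of the face boundary. The patches and numbers below are invented for illustration; a real check would sample pixels from an actual image around the detected face edge.

```python
# Illustrative sketch of the color-match check for face swaps: compare the
# average color of a patch inside the face with a patch just outside it.
# A large gap hints at a pasted-in region. Patches are lists of (R, G, B).
def mean_color(patch):
    n = len(patch)
    return tuple(sum(px[c] for px in patch) / n for c in range(3))

def color_gap(patch_a, patch_b):
    """Euclidean distance between the two patches' mean RGB colors."""
    a, b = mean_color(patch_a), mean_color(patch_b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Made-up samples: a face patch, a well-matched neck patch, and a
# mismatched neck patch as a face swap might produce.
face = [(220, 180, 160), (218, 178, 158), (222, 182, 162)]
neck_match = [(219, 179, 159), (221, 181, 161), (220, 180, 160)]
neck_swap = [(190, 150, 130), (188, 148, 128), (192, 152, 132)]

print(color_gap(face, neck_match) < color_gap(face, neck_swap))  # True
```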

If you suspect video of a person speaking has been changed by AI, look at their mouth. Do their lip movements line up with the audio perfectly?
如果你怀疑一个人说话的视频被人工智能改变了,看看他们的嘴。他们的嘴唇动作是否与声音完全一致?
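
The lip-sync question can be framed as a correlation between two per-frame signals: how open the mouth is, and how loud the audio is. The numbers below are made up for illustration; extracting these signals from a real video would require a facial-landmark detector and an audio library.

```python
# Toy lip-sync check: if the mouth-opening signal and the audio loudness
# track each other (high correlation), the audio is likely in sync; a low
# or negative correlation is a red flag for dubbed or generated audio.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-frame measurements.
mouth_open = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7]    # how open the mouth is
audio_level = [0.2, 0.9, 0.8, 0.1, 0.2, 0.6]   # loudness, in sync
dubbed_audio = [0.9, 0.1, 0.2, 0.8, 0.9, 0.3]  # loudness, out of sync

print(pearson(mouth_open, audio_level) > pearson(mouth_open, dubbed_audio))  # True
```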

Cybersecurity company Norton says that the technology is not yet able to create individual teeth. So Ajder suggests looking at the teeth. Are they clear, or are they blurry and somehow not the way teeth appear in real life?
网络安全公司诺顿表示,这项技术还无法生成一颗颗独立的牙齿。因此,Ajder建议观察人物的牙齿。它们是清晰的,还是模糊不清、与现实中的牙齿不太一样?

Sometimes the context of the photo is important. Take a minute to consider if what you are seeing could actually happen.
有时候照片的情境很重要。花一分钟想一想,你所看到的事情是否真的可能发生。

The Poynter journalism website advises that if you see a well-known person do something that seems unrealistic or unlike themselves, it could be a deepfake.
波因特新闻网站建议,如果你看到一位知名人士做出看起来不切实际或不符合其本人风格的事情,那可能就是深度伪造。

Using AI to find the fakes
用人工智能来识别伪造内容

Another method is to use AI to fight AI.
另一种方法是用AI来对抗AI。

Microsoft has developed a tool that can study photos or videos and rate whether they have been changed. Technology company Intel's FakeCatcher uses computer programs to study the smallest parts of an image, called pixels, to say if it is real or fake.
微软开发了一种工具,可以分析照片或视频,并对其是否被修改给出评分。科技公司英特尔的FakeCatcher则使用计算机程序来研究图像的最小组成部分——像素,以判断图像是真是假。
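
Neither Microsoft's tool nor FakeCatcher is public, so the sketch below is purely illustrative and does not reflect how either actually works. It shows the design the paragraph describes: a detector that turns a pixel-level statistic into a confidence score rather than a hard yes/no. The statistic and formula are invented for this example.

```python
# Hypothetical "AI against AI" detector sketch: score an image from pixel
# statistics. Real tools like FakeCatcher use far richer signals; here a
# single toy feature (pixel variance) stands in for them.
def fake_confidence(pixels):
    """pixels: flat list of grayscale values. Returns a score in [0, 1],
    where higher means 'more likely synthetic' under this toy heuristic."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    # Toy rule: unnaturally low variance -> suspiciously smooth image.
    return 1.0 / (1.0 + var / 100.0)

# Invented samples: a flat, textureless image vs. one with texture.
smooth_image = [128] * 64
natural_image = [100, 140] * 32

print(fake_confidence(smooth_image) > fake_confidence(natural_image))  # True
```

Reporting a score instead of a verdict matters: it lets a human reviewer weigh the tool's uncertainty rather than trust a binary answer.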

However, some of these tools are not available to the public. That is because researchers do not want to help bad actors improve their deepfakes.
然而,其中一些工具并不向公众开放。这是因为研究人员不想帮助不法分子改进他们的深度伪造技术。

All this being said, AI has been developing very quickly. And AI models are being trained on internet data to produce increasingly better content with fewer mistakes.
话虽如此,人工智能一直在飞速发展。人工智能模型正在利用互联网数据进行训练,生成的内容越来越好,错误也越来越少。

That means this advice to find deepfakes could be incorrect even a year from now.
这意味着,即使是一年之后,这些识别深度伪造的建议也可能不再适用。

Experts say it might even be dangerous to suggest the average person can find deepfakes. That is because even for trained eyes, it is becoming increasingly difficult.
专家表示,声称普通人能够识别深度伪造内容,甚至可能是危险的。因为即使对于训练有素的眼睛来说,识别也变得越来越困难。
