April 9, 10
Technology
DEEPFAKES
When seeing shouldn’t be believing
by Elizabeth Sunshine

 

As technology improves, it is becoming easier and easier to make fake audio, video or images of people that are utterly convincing. These deepfakes are made using a deep learning technique known as a generative adversarial network, which pits two artificial intelligence (AI) programs against each other. One produces fake content. The other receives a stream of real and fake content and tries to figure out which is which. Both programs learn from each other. The process continues until the program producing the fakes can fool the other program. This can result in very convincing footage of people doing and saying things they never did or said.
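To make the idea concrete, here is a minimal sketch of that adversarial loop in Python, assuming the PyTorch library, tiny fully connected networks and a one-dimensional toy distribution standing in for "real" content. The layer sizes, learning rates and Gaussian toy data are illustrative assumptions only; real deepfake systems train far larger image and audio models, but the back-and-forth between the two programs works the same way.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 1

# The "forger": turns random noise into fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

# The "detective": scores how likely a sample is to be real (1) rather than fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Toy "real" content: samples from a Gaussian centered at 2.0.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Step 1: train the detective to label real samples 1 and fake samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Step 2: train the forger so the detective answers "real" (1) for its fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

Training of this kind is usually stopped once the detecting network can no longer tell real from fake much better than chance, which is the point the article describes: the forger has learned to fool its opponent.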

Deepfakes are dangerous for a number of reasons. Some experts fear that they could influence elections. If a persuasive deepfake of a politician saying something controversial comes out just before an election, it might sway voters. Deepfakes could also be used to embarrass or blackmail people for things they never said or did.

Other experts fear that deepfakes can erode trust. Not only might people be fooled by fake videos, but they could also be tricked into thinking real videos are actually deepfakes. This could lead people to give up trying to distinguish fact from fiction.

 
