The Jerusalem Post

Artificial intelligence takes another and perhaps alarming step

Microsoft headquarters. Fear of misuse (photo credit: REUTERS)

Microsoft has introduced VASA-1, an AI technology that transforms still images into realistic talking faces with matching facial expressions. The company has not specified how it plans to prevent misuse.

Artificial intelligence has taken another, perhaps worrying, step: Microsoft has unveiled a new experimental AI tool called VASA-1 that can turn a still image of a person or a painting, combined with an audio file, into a video of a lifelike talking face.

The new technology can generate facial expressions and head movements matched to the still image, along with lip movements synchronized to the audio, whether speech or singing. Microsoft researchers have uploaded several samples to the project page, and the results look convincing enough to fool people into thinking they are real.

While the lip and head movements in Microsoft's demo look a little robotic and out of sync on closer inspection, it is clear that the technology could be abused to create fake videos of real people quickly and easily.

The developers themselves are aware of this potential and have decided not to release the technology until they are confident it is sufficiently protected. Microsoft did not specify what safeguards, if any, it plans to put in place to prevent misuse such as fake news, disinformation or fake pornography.


Microsoft believes the new technology has benefits despite the potential for misuse. For example, the company says it could improve accessibility for paralyzed people, who would be able to communicate through an "avatar" built in their likeness from a single photo.
