Most people are confident in their ability to tell real from fake, but the rise of deepfakes may make that distinction increasingly difficult.
A deepfake is digital content created with artificial intelligence (AI), spanning audio, video, and still images, that looks extremely real and often depicts people saying and doing things they never did. Cybersecurity and technology experts increasingly warn that such content can be used in a variety of malicious ways, including fraud schemes and the spread of misinformation.
Identifying the Risk
Experts have focused their warnings on deepfakes of celebrities and politicians, but attacks can also target ordinary individuals.
IEEE member Rebecca Herold shared an anecdote illustrating the risks of deepfakes. "A friend of mine received a call from his wife while he was on a business trip," Herold said. "She was very upset, sobbing, and told him she had been in an accident and didn't have the money to call a tow truck. The caller asked him to wire money to the tow truck company. My friend almost believed it, because the voice sounded exactly like his wife. But after hanging up, he dialed his wife's number directly and learned she was safe. He said he was nearly scammed, the voice was that convincing."
While deepfakes can be incredibly convincing, they often leave subtle clues that they may not be real. IEEE member Yale Fox said: "Currently, deepfakes are not completely indistinguishable from real videos, but we are getting closer. Most people can still identify deepfake videos."
• AI is excellent at generating front-facing images of people but still struggles with details such as side and rear views, so examine an image from as many angles as possible.
• Deepfake photos and videos often render more teeth than a real person has. If someone smiles or shows their teeth, look closely.
• Deepfake photos and videos also often give people too many or too few fingers. Take care to count the fingers on the subject of a photo or video.
• Deepfake technology also has trouble with side and angled views of people. If you're communicating with someone over live video and suspect a deepfake, ask them to look to the side, or ask a question that makes them turn their head, such as: "Hey, I like that painting on the wall behind you. Who's the artist?"
• Deepfake photos and videos often contain inconsistencies in lighting, reflections, and shadows.
• Deepfake videos often contain fast but unnatural movements. For example, look for unusually jerky motion or apparent jumps in time.
• Deepfake audio often has unnatural or inconsistent audio quality, which may appear only intermittently.
A Race for New Technology
Some techniques analyze the suspicious content itself. For example, a tool might evaluate blood flow in a person's face, looking for unnatural blending or blurring, or check whether the reflections in a person's eyes match their surroundings as an indicator of authenticity. A comprehensive discussion hosted by the IEEE Signal Processing Society demonstrated some of these techniques in practice.
Other tools may examine a file's metadata for traces of manipulation.
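As a hedged illustration of what such a metadata check might look like, the Python sketch below walks a JPEG file's segment structure and scans its metadata blocks for editing-software signatures. The helper names and the signature list are assumptions made for this example, and finding a signature only suggests the file was re-processed, not that it is a deepfake.

```python
import struct

# Illustrative editor signatures sometimes embedded in image metadata.
EDITOR_HINTS = (b"Photoshop", b"GIMP", b"Adobe")

def jpeg_segments(data):
    """Yield (marker, payload) pairs for each metadata segment of a JPEG."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def editing_traces(data):
    """Return editor signatures found in the file's metadata segments."""
    hits = set()
    for _marker, payload in jpeg_segments(data):
        for hint in EDITOR_HINTS:
            if hint in payload:
                hits.add(hint.decode())
    return sorted(hits)
```

Real forensic tools go much further, inspecting EXIF timestamps, GPS fields, and compression history, but the principle is the same: look for metadata that contradicts the file's claimed origin.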
IEEE Senior Member Kayne McGladrey said: "Similarly, those behind deepfake threats may learn from detection algorithms and adapt their technology accordingly. What is very important here is that major distributors of video and audio content need to invest in deploying these solutions at scale to prevent the spread of misinformation or disinformation."