
Hackers use AI face-swapping to apply for jobs: an artificial intelligence security problem that cannot be ignored

These positions involve information technology, computer programming, databases, and other software-related fields. Some job seekers have attempted to land them by borrowing other people's backgrounds and credentials, faking their interview videos with deepfake technology.

Interviewers found that when speaking with these applicants online, the applicants' lip movements did not match their voices; for example, when a candidate sneezed or coughed, the image on screen stayed out of sync.

When the companies investigated these candidates further, they found that some were indeed using other people's identities to get hired. If the goal is merely employment, the problem is small; but if the applicant is a hacker, a successful hire gives them a foothold inside the enterprise and access to confidential data.

Are you curious whether this software is really that capable?

The answer is that it is indeed very advanced.

Deepfakes exploit the powerful image-generation capability of Generative Adversarial Networks (GANs), which can combine and superimpose existing images and videos onto source material while capturing the fine details of a person's face. After years of development, deepfake technology can now swap faces in real time with almost no visible sense of incongruity.

However, deepfakes have long struggled to produce convincing dynamic facial expressions in video: generated faces either never blink, or blink too frequently or unnaturally, and the audio often fails to line up naturally with the synthesized image.

As a result, such a video tends to raise suspicion within about ten seconds, and since an interview runs far longer than that, flaws are all the easier to reveal.
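The abnormal-blinking tell described above can be turned into a simple screening heuristic. Below is a minimal, hypothetical sketch in Python: it assumes you already have a per-frame eye-openness score (e.g. an eye aspect ratio from a facial-landmark detector; the input values here are placeholders) and flags videos whose blink rate falls far outside the typical human range of roughly 15–20 blinks per minute.

```python
# Hypothetical sketch: flag abnormal blink rates in interview video.
# Assumes per-frame eye-openness scores (e.g. eye aspect ratio from a
# facial-landmark detector); values below a threshold count as "eyes closed".

def count_blinks(ear_values, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in the score sequence."""
    blinks = 0
    was_closed = False
    for ear in ear_values:
        is_closed = ear < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def blink_rate_suspicious(ear_values, fps=30, lo=5.0, hi=40.0):
    """Humans blink roughly 15-20 times per minute; flag rates far outside."""
    minutes = len(ear_values) / fps / 60
    rate = count_blinks(ear_values) / minutes
    return rate < lo or rate > hi
```

This is only a sketch of the idea, not a production detector: real systems combine several such cues (blink rate, lip-audio sync, head-pose consistency) rather than relying on any single one.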

Technological progress is a double-edged sword.

Although artificial intelligence technology provides us with a lot of convenience, it may also bring a series of problems such as security, ethics, and privacy.

At its core, artificial intelligence development uses algorithms, computing power, and data to solve deterministic problems in complete-information, structured environments. In this data-driven era, artificial intelligence faces many security risks.

First, there is the risk of poisoning attacks.
Hackers inject malicious data to reduce the reliability and accuracy of an AI system, causing it to make wrong decisions. Adding fake data or malicious samples to the training set destroys the integrity of the data, which in turn skews the decisions of the trained model.

If such an attack were carried out in the field of autonomous driving, it could cause a vehicle to violate traffic rules or even cause an accident.
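The mechanism of a poisoning attack can be shown on a toy example. The sketch below (hypothetical data, a deliberately simple nearest-centroid classifier) trains once on clean labels and once after an attacker flips the label of a sample near the class boundary; the flipped label drags the class centroid, and a test point that was classified correctly before is misclassified afterward.

```python
# Toy illustration of a label-flipping poisoning attack (hypothetical data).
# A 1-D nearest-centroid classifier is trained twice: once on clean labels,
# once after an attacker relabels a training point near the class boundary.

def train_centroids(xs, ys):
    """Return the per-class means of a 1-D, two-class dataset."""
    c0 = [x for x, y in zip(xs, ys) if y == 0]
    c1 = [x for x, y in zip(xs, ys) if y == 1]
    return sum(c0) / len(c0), sum(c1) / len(c1)

def predict(x, centroids):
    """Assign x to the class whose centroid is nearest."""
    m0, m1 = centroids
    return 0 if abs(x - m0) <= abs(x - m1) else 1

xs = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
ys = [0, 0, 0, 1, 1, 1]

clean = train_centroids(xs, ys)          # centroids at 2.0 and 8.0

# The attacker poisons the training set: x = 3.0 is relabeled as class 1,
# dragging the class-1 centroid toward class 0 and shifting the boundary.
poisoned_ys = [0, 0, 1, 1, 1, 1]
poisoned = train_centroids(xs, poisoned_ys)

print(predict(4.5, clean))     # 0: correctly grouped with the low cluster
print(predict(4.5, poisoned))  # 1: the poisoned model now misclassifies it
```

Real poisoning attacks target far larger models and are harder to spot, but the principle is the same: corrupt training data shifts the decision boundary in the attacker's favor.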

Second, there is the problem of data leakage.
Model-inversion attacks can leak the data embedded in an algorithm's model. Smart devices such as fitness bands, smart speakers with biometric identification, and smart medical systems are now in wide use, collecting personal information from every direction: faces, fingerprints, voiceprints, irises, heartbeats, genes, and more. This information is unique and cannot be changed; once leaked or abused, the consequences are serious.

For example, it has been reported that large numbers of face photos collected by domestic retail stores were leaked without users' consent. Many of these photos have surfaced on the black market, creating risks of fraud and financial loss.
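One way a trained model itself leaks data is through membership inference. The hypothetical sketch below uses a toy stand-in for an overfitted model: it is near-certain on the records it memorized during training, and an attacker exploits that confidence gap to test whether a specific record (say, a biometric profile) was in the training set, even without ever seeing the data directly.

```python
# Hypothetical sketch of a membership-inference attack. An overfitted
# model is unusually confident on its own training samples; the
# confidence gap lets an attacker probe whether a given record
# was part of the training data.

TRAIN = {(0.9, 0.1), (0.8, 0.3), (0.2, 0.7)}   # toy "biometric" records

def model_confidence(x):
    """Toy overfitted model: near-certain on memorized training points."""
    return 0.99 if x in TRAIN else 0.60

def attacker_infers_membership(x, threshold=0.9):
    """High confidence suggests x was in the training set."""
    return model_confidence(x) > threshold
```

Real attacks estimate this gap statistically across many queries rather than via exact lookup, but the underlying signal, overconfidence on memorized data, is the same, which is why overfitting is a privacy problem as well as an accuracy one.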

Third, there are network risks.
Artificial intelligence inevitably brings network connectivity, and AI techniques themselves can raise the sophistication of attacks: intelligent data theft, data-extortion attacks, or the automatic generation of large volumes of false threat intelligence to overwhelm analysis systems.

The main attack methods include bypass (evasion) attacks, inference attacks, backdoor attacks, model-extraction attacks, attribute-inference attacks, Trojan attacks, model-inversion attacks, anti-watermark attacks, and reprogramming attacks.

We must clearly recognize that data security in the era of artificial intelligence faces many new challenges. Protecting both data and algorithms has become a top priority for enterprises.

What do you think?

