You have probably encountered a situation like this: you open an app on your phone, and the first thing you see is exactly what you were discussing with friends or family a few minutes earlier. The first time, it surprises you; over time, you grow accustomed to it and take it for granted.
One culprit is that microphones are now ubiquitous and always “eavesdropping” on you. Embedded in phones, TVs, watches, and other devices, they can transmit your voice in real time to neural networks and artificial intelligence systems, which in turn help recommendation engines push “customized” content to you.
So how can we prevent this? The Internet offers plenty of suggestions: play a completely unrelated song at maximum volume, deliberately make some noise, or grant the microphone permission once per use instead of authorizing it indefinitely.
Now, a team of researchers from Columbia University has come up with a new approach: an artificial intelligence system that prevents such “listening” simply by playing a very subtle sound, generated by the system, in the room.
In other words, the system camouflages people’s speech so that it cannot be picked up by monitoring systems such as microphones, without disturbing the conversation itself.
The related research paper, titled “Real-Time Neural Voice Camouflage,” has been published on the preprint site arXiv. The researchers also say the system is easy to deploy on hardware such as computers and mobile phones, so it can protect your privacy at all times.
Beat AI with AI
A problem created by artificial intelligence may well have to be solved by artificial intelligence. While jamming systems such as microphones is theoretically feasible with AI, making it run fast enough for practical use remains a tough challenge.
The problem is that a sound that jams a microphone at one particular moment may not jam the conversation a few seconds later. As people speak, their voices constantly change with the words they choose and the rate at which they talk, and these variations make it nearly impossible for a machine to keep up with the fast pace of human speech.
In this study, the AI algorithm gets around this by predicting the characteristics of what a person will say next, which gives it enough time to generate an appropriate quiet masking sound.
Carl Vondrick, assistant professor of computer science at Columbia University, said the algorithm can stop a rogue microphone from correctly hearing people’s conversations 80 percent of the time, and it works even when people know nothing about the rogue microphone, such as where it is located.
To achieve this, the researchers needed an algorithm that could disrupt neural networks in real time, generate its interference continuously as speech is spoken, and work for a large portion of a language’s vocabulary. No previous work had satisfied all three requirements at once.
The new algorithm uses a signal the researchers call a “predictive attack,” which can interfere with any word an automatic speech recognition model has been trained to transcribe. In addition, when the attack sounds are played over the air, they must be loud enough to reach any distant microphone that might be “eavesdropping”: the attack sound has to travel the same distance as the voice itself.
The AI system achieves real-time performance by predicting the attack signal needed for the words a person will say in the future, conditioned on the two seconds of speech it has just heard.
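To make that streaming idea concrete, here is a minimal toy sketch of the interface in Python. The predictor below is a stand-in, not the paper’s trained network: it just emits random noise scaled to a background-noise level. The sample rate, chunk length, and all function names are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed audio sample rate
CONTEXT_SECONDS = 2.0  # the system conditions on two seconds of input speech
ATTACK_SECONDS = 0.5   # hypothetical length of each predicted attack chunk

def predict_attack(context: np.ndarray, noise_rms: float = 0.01) -> np.ndarray:
    """Toy stand-in for the learned predictive-attack model.

    The real system uses a neural network trained to emit a perturbation
    that disrupts ASR transcription of *future* speech. Here we emit
    random noise scaled to a background-noise level, to illustrate the
    streaming interface rather than the learned attack itself.
    """
    n_out = int(ATTACK_SECONDS * SAMPLE_RATE)
    rng = np.random.default_rng(seed=len(context))  # deterministic for the demo
    attack = rng.standard_normal(n_out)
    # Scale so the attack is roughly as quiet as ambient background noise.
    attack *= noise_rms / (np.sqrt(np.mean(attack**2)) + 1e-12)
    return attack

def stream_camouflage(speech: np.ndarray):
    """Slide over incoming speech, predicting an attack for each future chunk."""
    ctx = int(CONTEXT_SECONDS * SAMPLE_RATE)
    step = int(ATTACK_SECONDS * SAMPLE_RATE)
    for start in range(ctx, len(speech) - step, step):
        context = speech[start - ctx:start]  # last 2 s of observed speech
        yield predict_attack(context)        # play this during the next chunk

# Simulated 4 s of incoming speech audio.
speech = np.random.default_rng(0).standard_normal(SAMPLE_RATE * 4) * 0.1
chunks = list(stream_camouflage(speech))
```

The key design point this sketch captures is the pipelining: while one attack chunk is playing, the system is already predicting the next one from the most recent context, which is what makes real-time operation possible.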
The research team also optimized the attack so that its volume resembles normal background noise, letting people in the room converse naturally without being monitored by systems such as microphones.
Furthermore, they successfully demonstrated the method’s effectiveness in a real room with ambient noise and complex geometry. Vondrick noted, however, that the system currently works only for the majority of English words, and the team is extending the algorithm to more languages.
What can we do for ethical AI?
Some say that in the era of big data our personal information is fully exposed, with more and more smart devices around us prying into our privacy.
If this research can be extended to more languages and applied in more scenarios, it may help shield us from surveillance by various AI systems. As Jianbo Shi, a professor in the Department of Computer and Information Science at the University of Pennsylvania, commented, the work raises a new question: how can we use artificial intelligence to protect ourselves from AI that monitors us without our awareness?
Shi also suggested that in future work, researchers should “consciously” consider AI’s impact on humans and society from the earliest design stage, and stop asking what ethical AI can do for us; instead, ask what we can do for ethical AI. Once we commit to this direction, he argued, research on ethical AI will become interesting and creative.