08/20/2025 / By Ava Grace
That confidential phone call about your bank details or medical records may no longer be as private as you assume. Researchers at Penn State University have demonstrated that commercially available radar technology can remotely detect and partially reconstruct smartphone conversations by capturing microscopic vibrations from a phone’s speaker — movements so small they measure just 7 micrometers, invisible and inaudible to humans. Published in the Proceedings of WiSec 2025, the study reveals how attackers could exploit this method to intercept conversations from up to 10 feet away, raising urgent concerns about digital privacy in an era of advancing surveillance.
Radar, traditionally used for tracking aircraft or assisting self-driving cars, operates by bouncing radio waves off objects and measuring their reflections. While typically employed to detect large movements, modern millimeter-wave radar can pick up minute vibrations — like those produced by a smartphone’s speaker during a call. The Penn State team’s system, dubbed WirelessTap, exploits this by analyzing the subtle tremors in a phone’s casing caused by sound waves. (Related: Amazon’s AI wearable acquisition: Convenience or surveillance?)
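For readers who want a feel for the underlying physics, the sketch below shows, in rough terms, why a millimeter-wave radar can resolve a vibration of only a few micrometers: a tiny displacement of the reflecting surface produces a measurable shift in the phase of the returned signal. The 77 GHz carrier is an assumption chosen because it is a common automotive radar band; the study's actual hardware and signal processing are more involved.

```python
import numpy as np

# Back-of-the-envelope illustration only. The carrier frequency and the
# 300 Hz test tone are assumptions for illustration, not parameters
# taken from the Penn State study.
C = 3e8                       # speed of light, m/s
F_CARRIER = 77e9              # assumed mmWave carrier, Hz
WAVELENGTH = C / F_CARRIER    # ~3.9 mm

def displacement_from_phase(delta_phase_rad):
    """For a round-trip reflection, a displacement d shifts the echo phase by
    4*pi*d / wavelength, so d = delta_phase * wavelength / (4*pi)."""
    return delta_phase_rad * WAVELENGTH / (4 * np.pi)

# A 7-micrometer speaker vibration (the scale reported in the article)
# produces a small but clearly measurable phase swing at 77 GHz.
d = 7e-6
phase_swing = 4 * np.pi * d / WAVELENGTH
print(f"Phase swing for a 7 um displacement: {np.degrees(phase_swing):.2f} degrees")

# Conceptually, sampling this phase over time and band-passing it to the
# speech band yields an audio-like waveform that can be fed to a recognizer.
t = np.linspace(0, 0.01, 1000)
phase_series = phase_swing * np.sin(2 * np.pi * 300 * t)   # assumed 300 Hz tone
audio_like = displacement_from_phase(phase_series)
```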
Unlike hacking or breaking encryption, this method bypasses digital security entirely. It’s akin to spying on a conversation by observing ripples in a glass of water placed near a speaker. The radar only requires a direct line of sight to the phone — no malware, no network intrusion.
The vibrations alone are not enough to reconstruct full conversations. To translate the distorted signals into intelligible speech, researchers adapted OpenAI’s Whisper, an advanced AI speech recognition model. By training Whisper on synthetic and real radar-captured audio, they achieved partial transcription — enough to extract key phrases, numbers or sensitive keywords.
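The speech-recognition stage can be reproduced in spirit with the open-source Whisper package. The snippet below is a minimal sketch of off-the-shelf inference only; the researchers' actual pipeline fine-tuned the model on synthetic and radar-captured audio, and the file name here is a hypothetical placeholder.

```python
# Minimal sketch using OpenAI's open-source Whisper package (pip install openai-whisper).
# This shows baseline, off-the-shelf transcription; the Penn State team additionally
# adapted the model to radar-captured audio. "radar_capture.wav" is a placeholder.
import whisper

model = whisper.load_model("base")              # small general-purpose checkpoint
result = model.transcribe("radar_capture.wav")  # returns a dict with "text" and segments
print(result["text"])                           # expect a partial, error-prone transcript
```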
At close range (20 inches), accuracy reached nearly 60 percent, dropping to just 2-4 percent at 10 feet. While that may seem low, even fragmented data can be dangerous. If an attacker already knows the context of a conversation — such as a banking call — they can fill in the gaps, much like lip-readers infer missing words.
The implications are alarming. Corporate spies could eavesdrop on confidential business calls. Identity thieves might snatch credit card numbers or passwords. Unlike traditional wiretapping, this method requires no cooperation from telecom providers — just a radar device and AI software.
The Penn State team tested three smartphones (Samsung Galaxy S20, Galaxy A22 and OnePlus 9 Pro) and found varying susceptibility. Even when a user held the phone naturally, introducing movement and interference, the system still achieved 40 percent accuracy at three feet — enough to capture critical details.
While the immediate threat is limited — requiring specialized equipment and ideal conditions — the researchers warn that radar technology is becoming smaller, cheaper and more accessible. To counter this, they propose several defenses for manufacturers to adopt.
Manufacturers must act before this technique becomes widespread. As lead researcher Suryoday Basak noted, “By understanding what is possible, we can help the public be aware of the potential risks.”
This discovery underscores a disturbing trend: as technology advances, so do the tools for invasion. From facial recognition to AI-powered voice analysis, privacy is under siege. The Penn State study serves as a wake-up call — even encrypted calls can be compromised through physical side channels.
For now, the best defense is awareness. Avoid sensitive calls in public spaces where radar devices could be concealed. Advocate for hardware-level security improvements from smartphone makers. And remember: In the digital age, privacy is no longer guaranteed — it must be fought for.
Watch this discussion on surveillance capitalism, AI and Big Brother rising.
This video is from the Crrow777 Radio channel on Brighteon.com.