New AI Model Can Steal Data by Listening to Keystrokes Recorded by a Nearby Phone

Researchers in the United Kingdom have developed a deep learning model that can steal data by listening to keystrokes recorded by a nearby phone, classifying them with 95% accuracy. In a follow-up experiment, the model classified keystrokes captured over a Zoom call with 93% accuracy.

To conduct their study, the researchers examined acoustic side-channel attacks on keyboards. While keyboards also leak detectable electromagnetic signals, keystroke sounds are far more widely available and easier to capture with everyday devices. The team therefore focused on keystroke acoustics, particularly on laptops: they are commonly used in public spaces, and their non-modular keyboards mean that machines of the same model produce similar sounds.
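The premise of keystroke acoustics is that each key's press produces a subtly different sound. A minimal, hypothetical sketch of the idea (not the paper's actual feature pipeline, which is not detailed in this article) is to reduce each recorded keystroke to a coarse spectral fingerprint that a classifier could then learn to separate:

```python
import numpy as np

def keystroke_fingerprint(samples: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Reduce a short keystroke recording to a coarse spectral fingerprint.

    The FFT magnitude spectrum is pooled into n_bins frequency bands;
    different keys tend to excite slightly different bands, which is the
    signal a classifier can learn from.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    # Pool adjacent frequency bins into coarse bands.
    bands = np.array_split(spectrum, n_bins)
    fingerprint = np.array([band.mean() for band in bands])
    # Normalise so overall loudness does not dominate comparisons.
    norm = np.linalg.norm(fingerprint)
    return fingerprint / norm if norm > 0 else fingerprint

# Two synthetic "keystrokes" with different dominant frequencies,
# standing in for the sounds of two different keys.
rate = 44_100
t = np.arange(0, 0.02, 1 / rate)
key_a = np.sin(2 * np.pi * 2_000 * t)
key_b = np.sin(2 * np.pi * 6_000 * t)

fp_a = keystroke_fingerprint(key_a)
fp_b = keystroke_fingerprint(key_b)
# Fingerprints of acoustically different keys should differ noticeably.
print(np.dot(fp_a, fp_b) < 0.9)  # → True
```

Real keystroke sounds are far noisier than these pure tones, which is why the researchers turned to deep learning rather than hand-tuned features.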

The team applied self-attention transformer layers to the keyboard attack scenario for the first time, and assessed the technique in realistic settings. They evaluated the attack on laptop keystrokes recorded by a mobile device's microphone placed near the keyboard, and separately on laptop keystrokes captured during a Zoom call.
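Self-attention is the core operation of a transformer layer: every position in a sequence (here, frames of a keystroke's audio representation) is compared against every other, so the model can relate the onset of a key press to its release. A minimal numpy sketch of scaled dot-product self-attention, with toy dimensions chosen for illustration:

```python
import numpy as np

def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence of feature vectors.

    x has shape (seq_len, d_model). Each position builds a query, compares
    it against every position's key, and takes a weighted average of the
    values, so every frame can attend to every other frame.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d_model = 8
frames = rng.normal(size=(5, d_model))  # e.g. 5 audio frames of one keystroke
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(frames, w_q, w_k, w_v)
print(out.shape)  # → (5, 8)
```

A full transformer stacks this operation with feed-forward layers, normalisation, and learned weights; the sketch only shows the attention mechanism itself.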

Surprisingly, the setup required nothing more sophisticated than an iPhone's built-in microphone to record the keystrokes used to train the model. This simplicity underscores how vulnerable passwords and other sensitive data are: they can be compromised without any specialized equipment.

Going forward, the researchers aim to enhance their techniques for extracting individual keystrokes from a single recording. This is essential for accurate keystroke classification in side-channel attacks. Additionally, they are exploring the use of smart speakers to record keystrokes, leveraging their “always-on” feature present in many households.
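Extracting individual keystrokes from a continuous recording is a segmentation problem: quiet stretches must be separated from the short bursts of energy that each key press produces. A hedged, simplified sketch of one common approach (energy thresholding on fixed-size frames; the researchers' exact method is not described in this article):

```python
import numpy as np

def split_keystrokes(signal, frame_len=256, threshold=0.1):
    """Find candidate keystroke regions by flagging high-energy frames.

    Returns (start, end) sample indices of each contiguous loud region.
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    loud = energy > threshold
    regions, start = [], None
    for i, flag in enumerate(loud):
        if flag and start is None:
            start = i                                   # region opens
        elif not flag and start is not None:
            regions.append((start * frame_len, i * frame_len))
            start = None                                # region closes
    if start is not None:
        regions.append((start * frame_len, n_frames * frame_len))
    return regions

# Synthetic recording: silence, a burst, silence, another burst, silence.
rng = np.random.default_rng(1)
quiet = rng.normal(scale=0.01, size=4096)
burst = rng.normal(scale=1.0, size=1024)
recording = np.concatenate([quiet, burst, quiet, burst, quiet])
print(len(split_keystrokes(recording)))  # → 2
```

In practice, overlapping key presses and background noise make this step much harder than the clean example suggests, which is why the researchers flag it as an area for further work.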

The results are striking: the model classified keystrokes recorded by a nearby phone with 95% accuracy, reaffirming that side-channel attacks are practical with readily available equipment and off-the-shelf algorithms.


Q: What is a side-channel attack?
A: A side-channel attack is a hacking technique where an attacker gains unauthorized access to data by analyzing information leaked via indirect means, such as power consumption, electromagnetic signals, or sound.

Q: What are keystroke acoustics?
A: Keystroke acoustics refer to the analysis of sounds generated by keyboard strokes. Each key produces a distinct sound, which can be used to infer the corresponding keystroke.

Q: How does the AI model steal data?
A: The AI model steals data by listening to keystrokes recorded by a nearby phone. It analyzes the sounds produced by the keystrokes and accurately captures the information being typed.

Q: Can this AI model be used for malicious purposes?
A: While the researchers’ intention was to highlight a potential vulnerability, there is a concern that this AI model could be utilized for malicious purposes. It emphasizes the importance of protecting sensitive information and being cautious about potential security risks.