More security flaws found in popular AI chatbots — and they could mean hackers can learn all your secrets

A lot can be picked up by listening to AI traffic, experts warn


If a hacker can monitor the internet traffic between a target and the target’s cloud-based AI assistant, they could pick up on the conversation. And if that conversation contained sensitive information, that information would end up in the attacker’s hands as well.

This is according to a new analysis from researchers at the Offensive AI Research Lab at Ben-Gurion University in Israel, who found a way to deploy side-channel attacks against users of all Large Language Model (LLM) assistants, save for Google Gemini.

That includes OpenAI’s powerhouse, ChatGPT.

The “padding” technique

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica.

“This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

Basically, in a bid to make the tool as fast as possible, the developers opened the door to crooks picking up on the contents. When the chatbot starts sending back its response, it doesn’t send it all at once. It sends small snippets, in the form of tokens, to speed the process up. These tokens may be encrypted, but because they are sent one by one, as soon as each is generated, attackers can analyze them.
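To see why that matters, here is a minimal sketch in Python (using the `cryptography` package) of the side channel, under the assumption that each token travels in its own AES-GCM-encrypted record, roughly as it would under TLS. The sample tokens and overhead figures are illustrative, not taken from the research:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

# Hypothetical streamed response, one token per network packet
tokens = ["My", " bank", " PIN", " is", " 4", "921"]

packets = []
for token in tokens:
    nonce = os.urandom(12)
    # AES-GCM ciphertext is exactly plaintext length + a 16-byte tag
    packets.append(nonce + aesgcm.encrypt(nonce, token.encode(), None))

# An eavesdropper never decrypts anything, yet packet sizes alone
# reveal the length of every token, in order
OVERHEAD = 12 + 16  # fixed nonce + GCM tag
print([len(p) - OVERHEAD for p in packets])  # [2, 5, 4, 3, 2, 3]
```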

The researchers analyzed the tokens’ sizes, the sequence in which they arrived, and more. The analysis, and subsequent refinement, yielded reconstructed responses that were almost identical to the ones seen by the victim.
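The researchers’ actual recovery step is far more sophisticated (they trained language models on token-length sequences); the toy sketch below only illustrates the core idea, that a length sequence sharply narrows the space of plausible plaintexts. The candidate phrases and whitespace “tokenization” are invented for illustration:

```python
# Token-length sequence recovered from the packet capture above
observed = [2, 5, 4, 3, 2, 3]

# Hypothetical candidate responses, pre-split into tokens
candidates = [
    ["My", " bank", " PIN", " is", " 4", "921"],
    ["My", " name", " is", " Bob"],
    ["It", " looks", " like", " rain"],
]

# Keep only candidates whose token lengths match the observation
for tokens in candidates:
    if [len(t) for t in tokens] == observed:
        print("plausible plaintext:", "".join(tokens))
# -> plausible plaintext: My bank PIN is 4921
```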


The researchers suggested developers do one of two things: either stop sending tokens one at a time, or pad all of them to the length of the largest possible packet, making length analysis impossible. The latter technique, dubbed “padding”, has been adopted by OpenAI and Cloudflare.
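A minimal sketch of that second option, assuming a hypothetical 16-byte upper bound on token size; a real implementation would need agreed framing so the receiver can strip the padding, but the effect on an eavesdropper is the same:

```python
MAX_TOKEN_BYTES = 16  # hypothetical upper bound on token length

def pad(token: str) -> bytes:
    raw = token.encode()
    assert len(raw) <= MAX_TOKEN_BYTES
    # Length-prefix the token, then pad to a constant size, so every
    # encrypted packet looks identical regardless of the token inside
    framed = bytes([len(raw)]) + raw
    return framed + b"\x00" * (1 + MAX_TOKEN_BYTES - len(framed))

for token in ["My", " bank", " PIN"]:
    print(len(pad(token)))  # always 17, so lengths leak nothing
```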
