Security researchers have introduced a new technique called HashJack, which uses the “#” part of a URL to hide malicious instructions. This portion of a link is normally ignored by servers, making the attack hard to detect. When an AI browser or assistant loads the link, it reads the hidden instructions from the fragment. This allows the attacker to control how the assistant behaves without touching the actual webpage.
The discovery was made by the Cato CTRL research team, who explained that HashJack works without altering any website. Attackers simply add hidden prompts after the “#” symbol in a URL. If a user opens that link through an AI-powered browsing tool, the model may treat those hidden prompts as real commands. This makes the attack completely indirect but still very powerful.
Because AI browsers often send the entire URL to their language model, the hidden fragment becomes assistant input. This means the attacker can instruct the AI to do things the user never requested. Researchers showed examples where the assistant could be misled into giving dangerous guidance. Even phishing-style actions and data theft become possible.
One major concern is that the webpage itself looks completely normal when opened. The malicious part exists only inside the URL, not inside the site’s content. Traditional scanning systems do not inspect the fragment portion, so they cannot detect anything wrong. Users may trust the website, not realizing the danger is coming from the link format.
Testing showed that several AI browsing tools are affected. These include well-known AI assistants that fetch full webpage context before summarizing or performing actions. Some vendors have begun applying patches after the research was shared. Still, the findings show that many current AI tools are not prepared for indirect prompt-injection methods.
HashJack also highlights the growing risks around agentic AI tools. These tools take actions on behalf of users, making them more vulnerable when given manipulated prompts. If an attacker gets the AI to follow hidden instructions, it could perform actions the user never intended. This creates new security challenges in everyday browsing.
Experts recommend that AI browser developers filter or block URL fragments before sending them to language models. They also stress the need for better validation of what information is passed to AI assistants. Organizations using AI-powered browsing should treat this as a new attack surface. Additional controls and policy checks may be necessary to stay protected.
Overall, HashJack shows how even harmless-looking links can become weapons when combined with AI-driven tools. As assistants become more common in browsers, attackers will continue to exploit overlooked areas like URL fragments. The discovery is an important reminder that AI security needs to evolve as fast as the tools themselves. Staying alert and updating defenses is essential.
Stay alert, and keep your security measures updated!
Source: Follow cybersecurity88 on X and LinkedIn for the latest cybersecurity news



