A new security discovery called CamoLeak revealed a major flaw in GitHub Copilot Chat that could expose private code and sensitive data. The issue was discovered by researchers from Legit Security, who demonstrated how hidden prompts could manipulate Copilot into leaking information from private repositories. The proof-of-concept attack showed how AI assistants could unintentionally become tools for data theft if not properly secured.

The attack begins with what’s known as prompt injection. In this method, an attacker hides secret instructions inside places Copilot reads, such as pull request comments or issue descriptions. When Copilot scans the repository to assist a user, it also reads these hidden commands, unknowingly following the attacker’s directions. This can make Copilot search private repositories and collect sensitive data like API keys or source code.

The most unique part of CamoLeak is how it secretly sends the stolen data back to the attacker. GitHub uses an image proxy service called Camo to handle image links safely. The attacker cleverly used this system by creating tiny image URLs where each image represented one character of stolen data. Copilot was tricked into showing these images, and as GitHub fetched them, the attacker’s server recorded which images were loaded, slowly rebuilding the stolen information.

The proof-of-concept showed that CamoLeak combined two issues hidden prompt injection and a clever bypass using the Camo proxy. Together, these allowed data to be leaked without the victim even noticing. Researchers were able to extract secrets such as AWS keys and private code snippets. Because of the potential damage, the flaw received a very high severity rating of 9.6 on the CVSS scale, marking it as a critical risk.

GitHub acted quickly after the vulnerability was reported through responsible disclosure. The company fixed the issue by disabling the image-rendering process that made the attack possible and blocking the misuse of the Camo proxy. These changes effectively closed the route that allowed data exfiltration. GitHub confirmed that the fix was rolled out in August 2025, just weeks after the vulnerability was first discovered.

Although the attack was only a proof-of-concept and not used in real-world exploitation, it highlights a serious security concern. AI assistants like Copilot can process sensitive data from repositories, which makes them potential targets for manipulation. This discovery reminds developers that even trusted AI tools can be abused through creative attacks if the underlying systems aren’t tightly controlled.

To protect against similar risks, developers and organizations are advised to avoid storing secrets or keys directly in repositories. Instead, sensitive information should be kept in secure secret managers. Teams should also rotate existing credentials, monitor access logs for strange activity, and review who can submit pull requests or comments that Copilot might read. Staying updated with GitHub’s official security guidance is also essential.

In summary, CamoLeak serves as a wake-up call for both AI tool developers and users. The attack showed that even advanced tools like GitHub Copilot can be tricked into revealing confidential data through hidden instructions. While GitHub has fixed the issue, the case emphasizes the growing need to treat AI-based assistants as part of an organization’s security boundary and defend them just as carefully as any other system.

Stay alert, and keep your security measures updated!

Source: Follow cybersecurity88 on X and LinkedIn for the latest cybersecurity news