
Billing can be bypassed using a combo of subagents with an agent definition

A clever exploit in GitHub Copilot has been revealed, allowing users to access expensive premium LLMs like Claude Opus 4.5 for free by chaining subagents. The billing bypass, initially dismissed by Microsoft's security team as 'outside scope,' routes requests through free base models to subagents that invoke premium models without ever incurring premium charges. The Hacker News community is now dissecting Microsoft's perceived quality issues, the future of LLM billing models, and the growing prevalence of 'AI slop' in open-source contributions.

Score: 102
Comments: 48
Highest Rank: #1
Time on Front Page: 8h
First Seen: Feb 8, 5:00 PM
Last Seen: Feb 9, 12:00 AM
Rank Over Time: (sparkline chart omitted)

The Lowdown

A detailed GitHub issue exposes a significant billing vulnerability in GitHub Copilot, enabling users to bypass premium model usage fees. The core of the exploit is orchestrating interactions between free base models and subagents that call premium models, all while incurring no premium charges.

  • The bypass combines several behaviors: subagent and tool calls do not consume 'requests,' request cost is calculated from the initial model, 'free' models like GPT-5-mini are available, and agent definitions can pin a specific model.
  • The first example demonstrates setting a chat to a free model, then instructing it to launch a subagent that uses a premium model (e.g., Opus 4.5), thereby getting premium output for free.
  • A second, more complex vector for abuse involves setting chat.agent.maxRequests high, using a premium model initially, and scripting tool calls to create a loop that repeatedly invokes the premium model without additional cost.
  • The author successfully used this to run Opus 4.5 subagents for over 3 hours, processing hundreds of files while consuming only 3 premium credits.
  • The issue also points out that message 'types' are client-declared, suggesting a lack of API validation and another potential vector for abuse.
  • Notably, Microsoft's Security Response Center (MSRC) initially rejected the report, stating that 'bypassing billing is outside of MSRC scope,' instructing the author to file it as a public bug report.
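The flaws listed above compose into a simple accounting failure: cost follows the model that starts the conversation, while the models that do the actual work bill nothing. A minimal sketch of that (broken) logic, with hypothetical model names and credit rates that stand in for Copilot's real pricing:

```python
from dataclasses import dataclass, field

# Illustrative premium-credit rates per request; not Copilot's actual pricing.
RATES = {"gpt-5-mini": 0, "opus-4.5": 1}

@dataclass
class Session:
    initial_model: str
    credits_charged: int = 0
    work_log: list = field(default_factory=list)

    def user_request(self, prompt: str) -> None:
        # Flaw 1: cost is computed from the *initial* model only.
        self.credits_charged += RATES[self.initial_model]
        self.work_log.append((self.initial_model, prompt))

    def subagent_call(self, model: str, task: str) -> None:
        # Flaw 2: subagent and tool calls consume no 'requests' at all,
        # regardless of which model actually does the work.
        self.work_log.append((model, task))

# A free orchestrator spawns premium subagents: premium output, zero credits.
s = Session(initial_model="gpt-5-mini")
s.user_request("launch an opus subagent to refactor these files")
s.subagent_call("opus-4.5", "refactor file 1")
s.subagent_call("opus-4.5", "refactor file 2")

premium_calls = sum(1 for model, _ in s.work_log if model == "opus-4.5")
print(s.credits_charged, premium_calls)  # 0 credits despite 2 premium calls
```

The second vector reported in the issue is the same failure run in a loop: once the first (billed) request is in flight, scripted tool calls re-invoke the premium model at no additional cost.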

This vulnerability highlights potential flaws in how AI service providers account for complex multi-agent interactions, raising questions about the robustness of their billing infrastructures and the scope of their security assessments.

The Gossip

Microsoft's Misses & MSRC's Myopia

Many commenters expressed dismay and sarcasm regarding Microsoft's handling of the vulnerability, particularly MSRC's initial dismissal of a billing bypass as 'outside scope.' This led to a broader discussion about a perceived decline in Microsoft's product quality, with some users stating Microsoft has been 'phoning it in' or 'shipping trash for 15 years,' citing issues from Azure reliability to internal development practices. Others compared this billing loophole to Microsoft's historical tolerance of software piracy, suggesting a long-standing strategy to dominate market share, even at the cost of immediate revenue.

Agentic Abuses & Billing Blunders

The technical intricacies of the bypass sparked conversation about the nature of LLM billing and agentic AI. Commenters questioned the fundamental design of cost attribution, noting the cleverness of using a 'free' orchestrator to spawn 'premium' work. The discussion touched on whether this is a true prompt injection or a more subtle 'edge case in billing that isn't attributing agent calls correctly,' drawing parallels to 'in-band signaling' vulnerabilities like phone phreaking and SQL injection. Some also noted that business logic and guardrails for these AI agents are often implemented client-side, making manipulation feasible.
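The 'in-band signaling' comparison comes down to trusting billing-relevant fields that the client itself declares. A hedged sketch of the hazard, contrasting a server that trusts a client-declared message type with one that bills from what it actually executed (all field names here are illustrative, not Copilot's real API schema):

```python
# Models the commenters' point: if message 'types' are client-declared,
# billing decisions ride on untrusted input, like in-band control signals.
PREMIUM_MODELS = {"opus-4.5"}

def bill_trusting_client(message: dict) -> int:
    # Vulnerable: charges only when the *client* labels the call premium.
    return 1 if message.get("type") == "premium_request" else 0

def bill_server_side(message: dict, model_actually_used: str) -> int:
    # Safer: the charge is derived from what the server actually ran.
    return 1 if model_actually_used in PREMIUM_MODELS else 0

# A client can simply mislabel its own premium traffic as a tool call:
msg = {"type": "tool_call", "model": "opus-4.5"}
print(bill_trusting_client(msg))          # 0 -- premium work, no charge
print(bill_server_side(msg, "opus-4.5"))  # 1 -- charged regardless of label
```

The design lesson mirrors the phreaking-era fix: move the control channel (here, billing attribution) out of band, so it is computed from server-observed execution rather than client assertions.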

Slop & Open Source Suffers

A recurring theme was frustration with the influx of low-quality, often AI-generated 'contributions' or 'slop' in open-source projects, particularly on GitHub. Commenters lamented the rise of 'vibe engineers' who 'contribute' vague or unhelpful comments, sometimes by simply pasting AI outputs, making it harder for maintainers to discern genuine issues. The initial issue being auto-closed by a bot, and the massive backlog of open PRs and issues on Microsoft's repos (which has prompted aggressive auto-closing policies), were cited as evidence of the problem, and several commenters said they now understand why projects move to whitelist-contributors-only modes.

To Report or Not to Report?

A humorous, yet thought-provoking, sub-theme emerged questioning the author's decision to report the exploit. Given that it allowed free access to premium models, some commenters sarcastically asked 'Why would you report this?!' and suggested simply enjoying the 'free ride.' This highlights the ethical dilemma faced by users who discover such loopholes, weighing the benefits of free service against the responsibility of reporting vulnerabilities.