MG Software.
HomeAboutServicesPortfolioBlogCalculator
Contact Us
MG Software
MG Software
MG Software.

MG Software builds custom software, websites and AI solutions that help businesses grow.

© 2026 MG Software B.V. All rights reserved.

NavigationServicesPortfolioAbout UsContactBlogCalculator
ServicesCustom developmentSoftware integrationsSoftware redevelopmentApp developmentSEO & discoverability
Knowledge BaseKnowledge BaseComparisonsExamplesAlternativesTemplatesToolsSolutionsAPI integrations
LocationsHaarlemAmsterdamThe HagueEindhovenBredaAmersfoortAll locations
IndustriesLegalEnergyHealthcareE-commerceLogisticsAll industries
MG Software.
HomeAboutServicesPortfolioBlogCalculator
Contact Us
MG Software
MG Software
MG Software.

MG Software builds custom software, websites and AI solutions that help businesses grow.

© 2026 MG Software B.V. All rights reserved.

NavigationServicesPortfolioAbout UsContactBlogCalculator
ServicesCustom developmentSoftware integrationsSoftware redevelopmentApp developmentSEO & discoverability
Knowledge BaseKnowledge BaseComparisonsExamplesAlternativesTemplatesToolsSolutionsAPI integrations
LocationsHaarlemAmsterdamThe HagueEindhovenBredaAmersfoortAll locations
IndustriesLegalEnergyHealthcareE-commerceLogisticsAll industries
MG Software.
HomeAboutServicesPortfolioBlogCalculator
Contact Us
MG Software
MG Software
MG Software.

MG Software builds custom software, websites and AI solutions that help businesses grow.

© 2026 MG Software B.V. All rights reserved.

NavigationServicesPortfolioAbout UsContactBlogCalculator
ServicesCustom developmentSoftware integrationsSoftware redevelopmentApp developmentSEO & discoverability
Knowledge BaseKnowledge BaseComparisonsExamplesAlternativesTemplatesToolsSolutionsAPI integrations
LocationsHaarlemAmsterdamThe HagueEindhovenBredaAmersfoortAll locations
IndustriesLegalEnergyHealthcareE-commerceLogisticsAll industries
MG Software.
HomeAboutServicesPortfolioBlogCalculator
Contact Us
All blogs

How AI Tools Created New Security Attack Surfaces: From Vercel to Claude Code

Vercel was breached through a compromised AI tool. Claude Code had RCE vulnerabilities. AI agents can steal GitHub credentials via prompt injection. Here is what changed in 2026 and how to protect your team.

Sidney
Sidney21 Apr 2026 · 13 min read
How AI Tools Created New Security Attack Surfaces: From Vercel to Claude Code

Introduction

On April 19, Vercel confirmed what many security researchers had been warning about for months. Attackers breached the company through a compromised AI observability tool called Context.ai, gaining access to internal deployments, API keys, and source code. The ransom demand was $2 million. The attack vector was not a zero-day exploit or a sophisticated code vulnerability. It was an OAuth token from an AI tool that had more access than anyone realized.

This was not an isolated incident. In the same quarter, Check Point researchers found remote code execution vulnerabilities in Claude Code. Johns Hopkins University demonstrated that AI agents from Anthropic, Google, and Microsoft could be manipulated to steal GitHub credentials through prompt injection. And Cline, the popular AI coding assistant, was briefly turned into a supply chain attack vector that published a malicious npm package. The pattern is unmistakable: AI tools have become the newest and least understood attack surface in software development.

The Vercel Breach: An AI Tool Was the Entry Point

The timeline tells the story clearly. In February 2026, an employee at Context.ai downloaded Roblox auto-farm scripts that installed Lumma infostealer malware on their machine. The malware harvested credentials for Google Workspace, Supabase, Datadog, Authkit, and critically, Vercel administrative access. By April, attackers used these stolen OAuth tokens to move laterally through Vercel internal systems.

A threat actor group calling themselves ShinyHunters claimed responsibility. They shared samples containing roughly 580 employee records and alleged access to internal deployments, NPM tokens, and GitHub tokens. Vercel stated that environment variables marked as "sensitive" remained encrypted and showed no evidence of access. But the damage to trust was already done.

What makes this breach instructive is not the malware. Infostealers are old news. The critical detail is that an AI tool, Context.ai, had been granted OAuth access broad enough to serve as a bridge into Vercel production infrastructure. Nobody audited those permissions. Nobody questioned whether an AI observability platform needed that level of access. This is the new normal: AI tools are granted permissions that would make a traditional SaaS vendor blush, and nobody is reviewing them.

Claude Code: When Your AI Assistant Has Shell Access

Check Point researchers published findings in February 2026 that should concern every team using AI coding tools. They identified three distinct attack paths in Claude Code, all exploiting the tool's deep integration with local development environments.

The first vector targeted Hooks, the event-driven automation system Claude Code uses to run scripts before and after certain actions. A malicious repository could include hook configurations that execute arbitrary shell commands the moment a developer clones the repo and opens it with Claude Code. The second vector exploited Model Context Protocol (MCP) servers. By placing a crafted MCP configuration in a repository, an attacker could redirect Claude Code to connect to a malicious server that exfiltrates environment variables, API tokens, and SSH keys. The third vector abused `.claude/settings.json` files to inject environment variables that override legitimate ones, redirecting API calls to attacker-controlled endpoints.

Anthropic patched these vulnerabilities, but the underlying architecture remains the same. AI coding tools need deep access to your file system, shell, and environment to be useful. That access is precisely what makes them dangerous when exploited. The attack surface is not a bug. It is the feature set. For teams that want to use AI coding tools safely, our vibe coding risk analysis covers the security fundamentals.

Prompt Injection: AI Agents Stealing Credentials Through Pull Requests

In April 2026, researchers from Johns Hopkins University published a technique they called "comment and control." The premise is deceptively simple. Many teams now run AI agents as part of their GitHub Actions workflows for automated code review, issue triage, or security scanning. These agents read pull request titles, descriptions, and comments as input.

The researchers demonstrated that by injecting carefully crafted prompts into a PR title or issue body, an attacker can instruct the AI agent to extract repository secrets, GitHub tokens, and API keys, then exfiltrate them to an external server. The attack requires no special infrastructure. The malicious instructions hide in plain text that looks like a normal PR description. The AI agent follows them because it cannot distinguish between legitimate instructions and injected ones.

Claude Code Security Review, Gemini CLI Action, and GitHub Copilot Agent were all confirmed vulnerable. What made the disclosure particularly concerning was the vendor response. According to the researchers, several vendors patched quietly without publishing CVEs or security advisories. Teams running older versions of these tools had no way of knowing they were exposed. This is the kind of risk that does not show up in a traditional security scanning tool comparison.

The Cline Supply Chain Attack: From AI Bot to Malicious npm Package

Snyk documented what may be the most creative AI-related attack of 2026 in February. Cline, a popular open-source AI coding assistant, runs a bot that automatically triages GitHub issues using AI. A security researcher discovered that this bot could be exploited through indirect prompt injection combined with GitHub Actions cache poisoning.

The attack chain worked like this: submit a GitHub issue with hidden prompt injection instructions. Cline's AI triage bot processes the issue, follows the injected instructions, and modifies the GitHub Actions workflow cache. On the next CI run, the poisoned cache causes the pipeline to publish a malicious version of the Cline CLI to npm. For eight hours before detection, anyone installing or updating Cline CLI could have received a package that silently installed the OpenClaw AI agent on their machine.

Eight hours is a lifetime in npm installations. The attack demonstrated that AI-powered automation in CI/CD pipelines introduces a fundamentally new class of supply chain risk. The bot that was supposed to help manage the project became the vector that compromised it.

Why AI Tools Are Different from Traditional Security Risks

Traditional third-party tools operate within defined boundaries. A monitoring service reads metrics. A logging platform ingests log data. An analytics tool tracks events. The permissions are narrow, the data flow is understood, and the attack surface is bounded.

AI tools break every one of these assumptions. An AI coding assistant needs read and write access to your entire codebase. It needs shell access to run commands. It needs network access to reach APIs. It needs access to environment variables to understand your configuration. It may need OAuth tokens for multiple services to function properly. Each of these permissions is a potential attack path.

The challenge compounds when organizations adopt multiple AI tools simultaneously. Context.ai connected to Vercel, Google Workspace, Supabase, and Datadog through a single employee account. A compromise of any one tool creates a lateral movement path to every connected service. This is not a hypothetical scenario. It is exactly what happened in the Vercel breach.

We wrote about the real costs of building AI features earlier this month. Security should be part of that cost calculation. The API bills are the easy part. The hard part is understanding the trust boundaries you are implicitly creating when you connect AI tools to your infrastructure.

A Practical Security Checklist for Teams Using AI Tools

After analyzing these incidents, we restructured our own security practices at MG Software. These are the concrete steps we now enforce on every project.

First: audit every OAuth connection. List every AI tool that has been granted access to your GitHub repositories, cloud providers, databases, or deployment platforms. For each one, verify whether the granted permissions match what the tool actually needs. Revoke anything excessive. Context.ai did not need Vercel admin access to do its job.

Second: treat AI tool configurations as untrusted input. Never clone a repository and blindly open it with an AI coding tool. Check for `.claude/settings.json`, `.cursorrules`, MCP configurations, or hook definitions before letting an AI agent process the codebase. This is the repository equivalent of checking email attachments.

Third: isolate AI agents in CI/CD. If you run AI agents as part of your GitHub Actions workflows, ensure they operate in sandboxed environments with no access to repository secrets. Use read-only tokens where possible. Audit the agent's output before allowing it to modify code or trigger deployments.

Fourth: rotate credentials proactively. Vercel recommended credential rotation after the breach. Do not wait for a breach notification. Rotate API keys, tokens, and secrets on a regular schedule. Use short-lived tokens where your infrastructure supports them.

Fifth: use hardware security keys for critical accounts. Infostealers can harvest passwords and session tokens. They cannot steal a physical FIDO2 key. For accounts that grant access to deployment infrastructure, package registries, or cloud providers, hardware keys are no longer optional. They are the minimum.

If you want help reviewing your AI tool permissions and deployment security, reach out. We audit development pipelines for exactly these kinds of risks.

What This Means for the Rest of 2026

The pattern is accelerating. AI tool adoption is growing faster than security practices can adapt. Every new AI coding assistant, AI-powered CI bot, and AI observability platform adds OAuth connections, API access, and trust relationships that expand the attack surface. The attackers are paying attention.

Expect more breaches traced back to AI tool compromises. Expect more prompt injection attacks targeting AI agents in automated workflows. Expect supply chain attacks that exploit the trust organizations place in AI-powered automation. These are not edge cases. They are the natural consequence of granting powerful tools broad access without proportional oversight.

The companies that navigate this well will be the ones that treat AI tools with the same skepticism they apply to any third-party vendor: verify the permissions, monitor the access, and plan for the compromise. The tools are powerful. Using them safely requires treating their version control and deployment integration points as first-class security concerns.

Conclusion

The Vercel breach was not a story about Vercel. It was a story about what happens when an entire industry adopts a new category of tools without updating its security model. AI coding assistants, AI-powered CI bots, and AI observability platforms are genuinely useful. They also represent the largest expansion of the software supply chain attack surface in years.

None of the incidents described here required particularly sophisticated attackers. An infostealer on a developer laptop. A prompt injected into a pull request title. A crafted configuration file in a cloned repository. These are low-complexity attacks exploiting high-trust integrations. The fix is not abandoning AI tools. The fix is applying the same rigor to AI tool security that we apply to every other part of the stack. Audit the permissions. Sandbox the agents. Rotate the credentials. And assume that every tool with broad access will eventually be targeted.

Share this post

Sidney

Sidney

Co-Founder

More on this topic

Security Scanners That Catch Vulnerabilities Before ProductionSecurity Audit Template - Free Download & ExampleWhat is API Security? A Complete Guide to Protecting Your EndpointsWhat is Row-Level Security (RLS)? Data Isolation in PostgreSQL for SaaS

Related posts

Vibe Coding: When AI-Generated Software Is Not Enough (and When It Is)
AI & automation

Vibe Coding: When AI-Generated Software Is Not Enough (and When It Is)

Vibe coding tools like Cursor, Bolt.new, and Lovable let anyone build software with AI. But 45% of AI-generated code has security flaws and founders burn thousands rebuilding what AI built wrong. Here is where the line is.

Jordan
Jordan12 Apr 2026 · 14 min read
Claude Code Source Leak: What 512,000 Lines of TypeScript Reveal About AI Coding Agents
AI & automation

Claude Code Source Leak: What 512,000 Lines of TypeScript Reveal About AI Coding Agents

On March 31, Anthropic accidentally published the complete Claude Code source code via npm. From self-healing memory to undercover mode, here is what 1,906 leaked files reveal about how modern AI coding agents work under the hood.

Jordan
Jordan1 Apr 2026 · 15 min read
What Does It Cost to Add an AI Feature to Your Product? Real Numbers from Our Projects
AI & automation

What Does It Cost to Add an AI Feature to Your Product? Real Numbers from Our Projects

Businesses want AI in their software but have no idea what it costs. We break down real API costs, development hours, and model choices from recent client projects at MG Software.

Jordan
Jordan7 Apr 2026 · 12 min read
Google Gemma 4: The Most Capable Open AI Model You Can Run Yourself
AI & automation

Google Gemma 4: The Most Capable Open AI Model You Can Run Yourself

Google DeepMind released Gemma 4 on April 2, four open-source models under Apache 2.0 that range from Raspberry Pi to datacenter scale. The 2.3B model beats its 27B predecessor. Here is what matters for developers and businesses.

Jordan
Jordan3 Apr 2026 · 10 min read
e-bloom logo
Fitr logo
Fenicks logo
HollandsLof logo
Ipse logo
Bloominess logo
Bloemenwinkel.nl logo
Plus logo
VCA logo
Saga Driehuis logo
Sportief BV logo
White & Green Home logo
One Flora Group logo
OGJG logo
Refront logo
e-bloom logo
Fitr logo
Fenicks logo
HollandsLof logo
Ipse logo
Bloominess logo
Bloemenwinkel.nl logo
Plus logo
VCA logo
Saga Driehuis logo
Sportief BV logo
White & Green Home logo
One Flora Group logo
OGJG logo
Refront logo

Want to leverage AI in your project?

We help you define and implement the right AI strategy.

Schedule an AI consultation
MG Software
MG Software
MG Software.

MG Software builds custom software, websites and AI solutions that help businesses grow.

© 2026 MG Software B.V. All rights reserved.

NavigationServicesPortfolioAbout UsContactBlogCalculator
ServicesCustom developmentSoftware integrationsSoftware redevelopmentApp developmentSEO & discoverability
Knowledge BaseKnowledge BaseComparisonsExamplesAlternativesTemplatesToolsSolutionsAPI integrations
LocationsHaarlemAmsterdamThe HagueEindhovenBredaAmersfoortAll locations
IndustriesLegalEnergyHealthcareE-commerceLogisticsAll industries