The AI tool in your stack is now an attack vector: Vercel, Context AI, and what CISOs need to do this week
Your sanctioned AI tools now hold the OAuth scopes attackers most want. Close the gap in 90 days or learn about the pattern from an incident.
TL;DR
- On 20 April, Vercel confirmed a breach. The entry point was not Vercel — it was Context AI, a third-party AI assistant connected to an employee's Google Workspace via OAuth.
- Attackers used the compromised Context AI token to pivot into Gmail/Drive, then into internal systems, exfiltrated API keys and selected source code, and listed the assets on criminal forums under the ShinyHunters brand within 72 hours.
- The pattern is new, the control gap is large, and the response window is short. Your allow-list does not help — you allow-listed the good guys, and the compromise happened at the vendor.
- The fix is not a tool purchase. It is an OAuth-scope inventory today, an allow-list policy this week, and AI tools added to the third-party risk register this month.
- Expect this attack to be automated inside 90 days. Small AI vendors with broad enterprise OAuth grants are now the soft underbelly of the Fortune 500.
What happened
On 20 April, Vercel confirmed a breach. The entry point was not Vercel itself. It was Context AI, a third-party AI assistant whose OAuth connection to a Vercel employee's Google Workspace account had been compromised by attackers.
Once the OAuth token was in hand, the attackers:
- Pivoted from Context AI's granted scopes into the employee's Gmail and Drive.
- From there, accessed internal Vercel systems using credentials and session artefacts found in the employee's workspace.
- Exfiltrated API keys, selected source code, and database connection strings.
- Listed the stolen assets for sale on criminal forums under the ShinyHunters brand within 72 hours.
Vercel's incident response has been credible, fast, and transparent. Affected customers have been notified. Key rotation is in progress. That is the vendor side.
The pattern is the story.
Why this is different from a normal OAuth phish
For fifteen years, the OAuth phish pattern has been: user is tricked into granting a malicious app broad Google scopes; attacker reads mail, exports contacts, moves on. The mitigation stack is mature — Workspace admin controls, third-party app allow-lists, OAuth app review.
The AI tool pattern breaks that stack in three places.
First, the apps are not malicious. Context AI is a legitimate product. The compromise happened to Context AI itself, and the scopes it holds across its customers' tenants are now effectively attacker-accessible. Your allow-list does not help you. You allow-listed the good guys.
Second, the scopes are broad by design. An AI assistant that helps with email, calendar, documents, and Drive requires broad read and often write access across all of them. These are the scopes attackers want most. The business value of the tool and the attack surface it creates are the same thing.
Third, the tools are proliferating below the IT radar. Context AI, Glean, Dust, Mem, a dozen Copilot-style agents — individual employees install them with personal credentials, often without IT approval, across their work Google or Microsoft accounts. The average enterprise does not have a current inventory.
The scope of the exposure, honestly
This is not hypothetical. The specific breach this week may have affected only Vercel. The pattern affects:
- Every organisation where an employee has connected an AI tool to their work Google Workspace or Microsoft 365 account.
- Every organisation whose SaaS tools (Notion, Linear, Slack, Salesforce) have AI features with OAuth scopes.
- Every organisation using a connected AI coding assistant (Cursor, Codeium, GitHub Copilot Enterprise) with repository access.
If you have not already run an inventory of AI tools holding OAuth scopes in your identity provider, you are flying blind. The ShinyHunters posting tells you at least one mid-market attack team is now running this playbook. Others will follow within weeks.
The control gap
Blind spot one — third-party AI vendor security posture. Most CISOs do not ask AI tool vendors the same security questionnaire they ask SaaS vendors. The vendors are smaller, newer, often Series A or B, and their security practices typically lag. Context AI is not unusual. It is typical.
Blind spot two — OAuth scope observability. Google Workspace and Microsoft 365 both let you see granted scopes. Neither makes it easy to monitor what those scopes are actually being used for, by which apps, how often, from which IP ranges. The telemetry is thin. Attackers know this.
What to do this week
Today
- Pull an inventory of OAuth-connected third-party apps from your Google Workspace or M365 admin console. Filter for any app with Drive, Mail, Calendar, or full profile scopes.
- Identify the AI-category tools in that list. Cross-reference against a list of vendors with a security attestation (SOC 2 Type II at minimum, preferably ISO 27001). Flag anything without one.
- Check whether anyone in your organisation uses Context AI specifically. If yes: revoke the OAuth grant, rotate credentials for anything those employees touched, and review their Workspace login and Drive access logs for the past 30 days.
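The triage step above can be sketched in a few lines. This assumes you have already exported the grant list (for Google Workspace, the Admin SDK Directory API `tokens.list` endpoint; for Microsoft 365, the Graph `oauth2PermissionGrants` resource); the app names, users, and scope set below are illustrative, not a complete risk taxonomy.

```python
# Triage sketch for an exported OAuth grant inventory. The scope set mixes
# Google OAuth scope URLs and Microsoft Graph permission names; extend it
# to match your own tenant's risk criteria.

BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar",
    "Mail.ReadWrite",        # Microsoft Graph equivalents
    "Files.ReadWrite.All",
}

def flag_broad_grants(grants):
    """Return grants holding at least one broad Mail/Drive/Calendar scope."""
    flagged = []
    for g in grants:
        risky = sorted(set(g["scopes"]) & BROAD_SCOPES)
        if risky:
            flagged.append({"app": g["app"], "user": g["user"],
                            "risky_scopes": risky})
    return flagged

if __name__ == "__main__":
    sample = [  # hypothetical export rows, for illustration only
        {"app": "ExampleAI Assistant", "user": "alice@corp.example",
         "scopes": ["https://mail.google.com/",
                    "https://www.googleapis.com/auth/drive"]},
        {"app": "Timezone Widget", "user": "bob@corp.example",
         "scopes": ["openid", "email"]},
    ]
    for row in flag_broad_grants(sample):
        print(row["app"], row["risky_scopes"])
```

The output of this pass is your review queue: every flagged row is an app that warrants the vendor-attestation cross-check in the next step.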
This week
- Implement an OAuth app allow-list policy if you don't have one. Default-deny for new AI-category third-party apps until reviewed. You will get complaints. You will get fewer complaints than you will get from a breach.
- Require step-up authentication (hardware key or passkey, not SMS) for any account that has granted broad OAuth scopes.
- Rotate API keys for any AI tool that has write access to production systems. If you don't know which tools those are, that is itself the finding.
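A default-deny policy can start this small, before any tooling purchase. The client IDs, categories, and decision values here are assumptions for illustration; real enforcement hooks into your IdP's consent controls (Workspace app access control, or the Entra admin consent workflow).

```python
# Minimal sketch of a default-deny OAuth app admission check. New
# AI-category apps are denied pending review; everything unknown goes
# to review; only explicitly approved client IDs pass.

APPROVED_APPS = {   # hypothetical reviewed-and-approved client IDs
    "1234.apps.example": "Expense tool (read-only Drive, reviewed 2026-03)",
}

def admission_decision(client_id, category):
    """Return 'allow', 'deny-pending-review', or 'review'."""
    if client_id in APPROVED_APPS:
        return "allow"
    if category == "ai":
        return "deny-pending-review"   # default-deny for the AI category
    return "review"
```

The point of encoding it, even this crudely, is that the policy becomes auditable: the approved list is a reviewable artefact rather than tribal knowledge in the admin console.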
This month
- Add AI tools to your third-party risk register as a distinct category. The existing SaaS vendor review is not sufficient — the scope pattern is different.
- Require security questionnaires from every AI vendor you use, specifically asking: (a) token storage architecture — are OAuth tokens stored encrypted with per-customer keys, (b) employee access controls to customer tokens, (c) incident response SLA, (d) SOC 2 audit status.
- Define your AI-tool approval workflow. Employees will keep adopting these tools; you need a path that is faster than six weeks and safer than whatever they want.
- Brief your audit committee. This class of risk is not in most committees' mental model yet. Put it there before it arrives as an incident.
Inside the quarter
- Evaluate OAuth monitoring platforms (Nudge Security, Reco, Grip, AppOmni). The category has matured; unit economics have improved.
- Include this attack pattern in your next tabletop exercise.
- Update your acceptable use policy. Most AUPs written pre-2024 do not cover employees connecting AI tools to work accounts with personal credentials.
The hype to deconstruct
Expect vendor marketing in the next 60 days framing this as "the AI security crisis." It is not a crisis — it is a control gap, solvable with an inventory, an allow-list, and a vendor questionnaire. Anyone selling you an AI security platform for $200k/year to solve it is pricing the fear, not the fix. The genuinely useful tools in this space are OAuth-observability platforms (already-mature category), not new AI-specific products. Buy the mature category.
The second-order risks to watch
- Automation. ShinyHunters and peers will write scanners targeting AI tool vendors' employees rather than the primary target's employees. Small AI vendors with broad enterprise OAuth grants are the soft underbelly of the Fortune 500.
- Regulatory attention. Expect the EU AI Act and US equivalents to add security attestation requirements for AI tools holding enterprise data scopes. Incumbents will comply. Smaller vendors may fold.
- Cyber insurance. Any policy renewal in H2 2026 will likely include new questions about AI tool inventory. Prepare now.
Cross-layer implications
This is not only a security story. It reaches into procurement (AI vendor due-diligence templates need new sections), into HR and acceptable-use policy (employees connecting personal AI tools to work accounts is now material), into M&A diligence (acquirers should expect an AI-tool-inventory gate), and into board-level cyber governance (the risk category needs to be named on the dashboard, not buried under third-party risk).
What this means for you
- CISO: this belongs in the next board cyber update. Not as an emergency — as a pattern requiring a policy response inside 90 days. Executive framing: "The AI tools our people use now hold the scopes attackers most want. We are closing that gap."
- IAM owner: critical path. The OAuth scope inventory and monitoring work is yours. Budget for tooling or headcount accordingly.
- Engineering team with AI coding tools: repository scope is the biggest single exposure. A breached AI assistant with repo access gets source code and secrets in one pull. Enforce read-only scope where possible; require separate service accounts for write; rotate tokens on a schedule.
- Individual employee with AI tools connected to your work account: review connected apps today. Google: Account → Security → Third-party apps with account access. Microsoft: My Apps → Manage apps. Revoke anything you don't actively use. Assume every connected AI tool is a potential front door.
- Founder building an AI product holding enterprise OAuth scopes: your security story is now a product feature. If you cannot articulate — in plain language, to a mid-market CISO, in under five minutes — how customer tokens are stored, who inside your company can access them, and what happens when an employee leaves, you will lose procurement cycles this quarter.
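For the repository-scope exposure above, one cheap check: GitHub returns an `X-OAuth-Scopes` response header on authenticated API calls made with a classic OAuth token, listing what that token can do. A sketch that classifies a token from that header — the write-scope set is an assumption drawn from GitHub's documented scope names, not an exhaustive list:

```python
# Classify a GitHub classic-OAuth token from its `X-OAuth-Scopes` header.
# Scopes granting write or admin capability are flagged; anything else is
# treated as read-only for triage purposes.

WRITE_SCOPES = {"repo", "write:packages", "admin:org", "workflow",
                "delete_repo"}

def token_risk(x_oauth_scopes_header):
    """Return ('write', offending_scopes) or ('read-only', [])."""
    scopes = {s.strip() for s in x_oauth_scopes_header.split(",") if s.strip()}
    offending = sorted(scopes & WRITE_SCOPES)
    return ("write", offending) if offending else ("read-only", [])
```

Run it over the headers from each connected tool's token and the "write" results are your rotation and scope-reduction list.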
Uncertainty ledger
- Full scope of the Context AI compromise beyond the Vercel pivot. If Context AI held OAuth tokens for other enterprises — likely — further disclosures are expected.
- Whether attackers had persistent access or a snapshot. Persistent access would make remediation meaningfully harder.
- Vercel's full incident timeline and the specific data exfiltrated. Disclosure is appropriately careful but incomplete for an active investigation.
- Whether stolen credentials sold on criminal forums have been used yet. As of this briefing, no downstream incidents have been publicly tied to the sale.
Bottom Line
The AI tool in your stack is not a hypothetical attack vector; this week demonstrated the pattern with a tier-one infrastructure target and a name-brand attack group. The fix is unglamorous and known: inventory, allow-list, vendor questionnaire, board brief. Every CISO who runs it in the next 90 days is ahead of the curve. Every one who waits for the regulator will do it in a hurry, badly, under incident pressure, for the same cost.
Written in the tradition of — E.
Sources
- Tier 1: Vercel — security advisory and customer notification (20–22 April 2026); US CISA — OAuth security guidance (current); Google Workspace and Microsoft 365 — third-party app security documentation
- Tier 2: TechCrunch — Vercel breach via Context AI reporting (20 April 2026); SecurityWeek and The Hacker News — incident coverage (April 2026)
- Tier 4 (contextual only): ShinyHunters criminal forum posting (observed 20–22 April 2026)