There is a particular irony in watching the companies that pledged to democratize intelligence quietly erect a velvet rope around it. For years, OpenAI and Anthropic talked about open access, human safety, and AI for all. Then April arrived, and both practically drew up a guest list.
Anthropic moved first. On April 7th, the lab announced that its newest model, known internally as "Mythos," would not be accessible to the general public. Not to researchers, not to startups, not even to the majority of business clients. Early access would go to a select group of companies through Project Glasswing. JPMorgan Chase made the list. Most others did not. Within days, the CEO of an unnamed Asian bank was reportedly called before the board to answer a single question: "How do we get in?"
| Key Information: OpenAI & Anthropic — The AI Giants at the Center of the Lockdown | Details |
|---|---|
| Company Names | OpenAI & Anthropic |
| Founded | OpenAI: 2015 / Anthropic: 2021 |
| Headquarters | San Francisco, California, USA |
| Key Models | GPT-5.5-Cyber (OpenAI) / Mythos (Anthropic) |
| Exclusive Program — OpenAI | Trusted Access for Cyber |
| Exclusive Program — Anthropic | Project Glasswing |
| Initial Access Partners | Deutsche Telekom, BBVA (OpenAI) / JPMorgan Chase (Anthropic) |
| Primary Concern | Dual-use risk: models capable of writing sophisticated malicious code |
| Key Executive — OpenAI | Sam Altman, CEO |
| OpenAI EMEA Lead | Emmanuel Marill, Managing Director |
| Regulatory Context | UK Parliament flagged delayed binding regulations (Jan 2026) |
| Latest Development | The Economist, Apr 15, 2026 — restricted model access confirmed |
It is hard to ignore the almost theatrical quality of this exclusivity. The name "Project Glasswing" feels a little too deliberate: delicate, beautiful, transparent, guarded. Behind the branding, however, the concern is genuine. People familiar with Mythos's capabilities say it can produce sophisticated malicious code with what insiders describe as minimal prompting. That is not a small footnote. It is the kind of sentence that makes cybersecurity experts put down their coffee.
OpenAI ran its own version of controlled distribution, the Trusted Access for Cyber program, which gave a handful of European companies, including Deutsche Telekom and BBVA, privileged access to GPT-5.5-Cyber. Emmanuel Marill, OpenAI's Managing Director for EMEA, phrased it carefully, stressing that trusted defenders should not be left fighting modern threats with antiquated tools. It is a fair point. It is also conveniently timed.
Both companies appear to be managing real risk and perception at once. Restricting access to the most capable models creates a controlled scarcity that looks responsible and, conveniently, electrifies the market. When only JPMorgan gets through the door, every other bank starts wondering what it is missing. That kind of envy is hard to manufacture and harder to ignore.

A deeper tension has been building since OpenAI's founding. The company was built on the idea that AI could change civilization, and that whoever controlled it would bear extraordinary responsibility. Former OpenAI chief scientist Ilya Sutskever reportedly believed that concentrating too much control over this technology in the wrong hands was dangerous in itself. Years later, the question is not only who owns the model but who gets access to it: a quieter, and possibly more consequential, version of the same problem.
Watching this unfold, what impresses most is not the technology but the speed with which access has become the new currency. Cybersecurity has a long-standing asymmetry: defenders must seal every crack, while attackers need only find one. In theory, these models could narrow that gap at machine speed. But only for the companies that have already received an invitation. The rest are still patching by hand.
Whether this strategy of curated deployments, vetted partners, and selective access will actually reduce the risks both companies say they fear remains unclear. History suggests that knowledge, once created, rarely stays contained for long. The era of AI as a public utility, always somewhat idealistic, appears to be fading. What stands in its place is a fortress. And the gate keeps getting narrower.
