Security

GPT-5.4-Cyber: OpenAI Introduces AI Model for Cyber Defense to Counter Anthropic

OpenAI. © Unsplash

OpenAI is expanding its cybersecurity offerings with a new, purpose-trained language model. GPT-5.4-Cyber is a variant of GPT-5.4 that has been specifically optimized for defensive security applications and is now available to selected users. The release comes at a time when competitor Anthropic is also making waves with its model Claude Mythos Preview.

What GPT-5.4-Cyber Is Designed to Do

According to OpenAI, GPT-5.4-Cyber is a version of GPT-5.4 in which restrictions have been deliberately relaxed for legitimate security work. The model is intended to enable security professionals to complete complex tasks more efficiently, without running into refusal limits that make sense for general users but can be an obstacle for professional defenders.

One of the key new capabilities is binary code analysis: security professionals can use it to examine compiled software for vulnerabilities, potential malicious functionality, and overall security robustness without needing access to the source code. In addition, the model is designed to support extended cyber-defense workflows, including vulnerability research, security training, and defensive programming.

Who Gets Access and How

OpenAI relies on a tiered access system within its existing “Trusted Access for Cyber” (TAC) program. Access to GPT-5.4-Cyber is not public, but is restricted to vetted security vendors, organizations, and researchers. OpenAI emphasizes that entry into the program is designed to be straightforward:

  • Individuals can have their identity verified at chatgpt.com/cyber.
  • Companies apply for access through their OpenAI account representative.
  • Those already in the TAC program who complete further authentication can register interest in higher access tiers, including access to GPT-5.4-Cyber.

Due to the elevated potential for misuse, the model is subject to special restrictions. For example, use in environments without data transparency — such as zero-data-retention configurations — is limited. OpenAI justifies this by noting that such usage scenarios lack visibility into the user, environment, and intended purpose.

The Three Core Principles of OpenAI’s Cyber Strategy

OpenAI describes its approach through three guiding principles that form the framework for GPT-5.4-Cyber and future developments:

  • Democratized access: Advanced defensive tools should be accessible to as many legitimate actors as possible, supported by clear criteria such as identity verification rather than manual case-by-case decisions.
  • Iterative deployment: Models are introduced incrementally, and safety systems are continuously improved based on real-world experience.
  • Investment in ecosystem resilience: OpenAI supports the security community through grant programs, open-source initiatives, and tools such as Codex Security.

Codex Security as a Complementary Measure

Alongside GPT-5.4-Cyber, OpenAI points to progress with Codex Security, a system for automated vulnerability analysis in codebases. Since its launch as a research preview, Codex Security has, according to the company, contributed to the remediation of more than 3,000 critical and high-severity security vulnerabilities. Through the “Codex for Open Source” program, more than 1,000 open-source projects have also been provided with free security scans.

Response to Claude Mythos from Anthropic

The release of GPT-5.4-Cyber coincides with the announcement of Anthropic’s Claude Mythos Preview, a model that, according to the company, is capable of finding and exploiting security vulnerabilities in an almost fully autonomous manner. Anthropic has not released the model publicly due to its risk potential, and instead launched “Project Glasswing,” an initiative for selected partners including Amazon Web Services, Apple, Google, Microsoft, and CrowdStrike.

With GPT-5.4-Cyber, OpenAI is pursuing a comparable approach with broader reach: rather than limiting access to a closed circle of partners, the company relies on a scalable access system with identity verification that is intended to eventually encompass thousands of individuals and hundreds of teams. Both companies share the assessment that AI capabilities in the cyber domain are already significant today, and that defenders must be given preferential access before attackers gain the upper hand.

Outlook

OpenAI says it will continue to expand the safety mechanisms for upcoming, even more powerful models. The company considers today's safeguards sufficient for current models but expects future generations to require more extensive defensive architectures. GPT-5.4-Cyber is thus framed as a first step in a longer-term program aimed at scaling security capabilities and protective measures in parallel.
