AI-Powered Exploit Targets Apple’s M5 Chip, Researchers Claim

By John Nada · May 15, 2026 · 7 min read

A security firm claims to have defeated Apple's M5 chip protections using Anthropic's AI, challenging Apple's long-standing reputation for robust security.

A security firm has reportedly developed a working exploit for macOS targeting Apple’s M5 chip, leveraging Anthropic's Claude Mythos AI model. The feat poses a significant challenge to Apple's security model, long regarded as a benchmark in consumer technology.

According to the report, the exploit was created by a small team of researchers at the Vietnam-based security startup Calif. The team claims to have built the first public macOS kernel memory corruption exploit that bypasses Apple’s new Memory Integrity Enforcement (MIE) protections on M5 hardware, chaining a series of vulnerabilities to escalate privileges from an unprivileged user account to root.

The use of Anthropic's AI in this context is particularly noteworthy. Calif stated that the model helped identify the vulnerabilities and assisted throughout the exploit's development, raising the question of whether AI will fundamentally alter the cybersecurity landscape by making system weaknesses easier to discover and exploit. As models like Mythos prove effective at finding vulnerabilities across platforms, organizations will need to reassess their defenses, and both technology firms and regulators will have to grapple with the dual-edged nature of AI in security.

In a Substack post published Thursday, Calif described the exploit as the first public macOS kernel memory corruption exploit capable of surviving Apple’s new Memory Integrity Enforcement, or MIE, protections on M5 hardware. The company said it took less than a week to go from vulnerability discovery to a functional exploit, a pace it attributes to pairing human expertise with advanced AI assistance.

Calif's researchers say they stumbled on the attack path by accident, finding the vulnerabilities on April 25 and producing a working exploit by May 1. The chain targets macOS 26 running on Apple M5 systems and escalates privileges from an unprivileged local user account to root using standard system calls, underscoring how quickly new threats can emerge with the assistance of AI tools like Claude Mythos.

Further details reveal that the exploit combines two vulnerabilities with additional techniques aimed specifically at bare-metal M5 hardware with kernel MIE enabled. That an attack of this sophistication succeeded against Apple's latest chip, designed with enhanced memory-safety features, demonstrates attackers' current ability to circumvent advanced mitigations, and it could set a precedent for future campaigns.

Calif stated that the Mythos Preview AI model played a crucial role in identifying these vulnerabilities and assisting throughout the exploit's development. However, the researchers were quick to emphasize that human expertise was still essential in bypassing Apple’s new MIE protections. This highlights a crucial aspect of modern cybersecurity: while AI can significantly aid in vulnerability discovery and exploit development, skilled human analysts are still vital for understanding and overcoming the complexities of advanced security technologies.

“Part of our motivation was to test what’s possible when the best models are paired with experts,” the company wrote. This synergy between AI capabilities and human knowledge is increasingly important as the complexity of digital systems continues to grow. As organizations face threats that evolve in real-time, the ability to adapt and innovate using both AI and human insight will be a decisive factor in maintaining security.

Memory corruption bugs remain one of the most common entry points for attackers seeking to breach operating systems and applications. These types of vulnerabilities can allow an attacker to crash a program, steal sensitive data, or even gain complete control over the system. Given the widespread use of Apple devices across various sectors, from individual consumers to large enterprises, the implications of such vulnerabilities are significant. Apple’s Memory Integrity Enforcement feature, which employs memory-tagging technology, aims to make these attacks much harder, but the exploit developed by Calif illustrates that no security measure is infallible.

Anthropic released the preview version of Mythos in April, following internal testing and outside evaluations that suggested the model could autonomously identify and exploit software vulnerabilities at a level beyond previous public AI models. This release is part of a broader trend in which AI is increasingly becoming integral to cybersecurity efforts. Rather than making the model publicly available, Anthropic chose to restrict access to select technology companies, banks, and researchers under its Project Glasswing initiative. This cautious approach reflects the potential risks associated with releasing powerful AI tools without appropriate oversight.

The involvement of high-profile organizations, such as the U.S. National Security Agency, which has reportedly been using Mythos despite a contentious relationship with the previous administration, further underscores the model's significance in the cybersecurity arena. Such endorsements indicate a recognition of the potential of AI to transform the way vulnerabilities are identified and addressed.

In an intriguing turn, Mozilla disclosed that Mythos identified 271 vulnerabilities in Firefox during internal testing. This statistic not only highlights the model's capabilities but also raises questions about the security of widely used software. Additionally, the U.K.’s AI Security Institute found that the model could autonomously complete sophisticated multi-stage cyberattack simulations, demonstrating its potential to mimic real-world attack scenarios.

Calif's declaration that the Apple M5 exploit offers “a glimpse of what is coming” suggests a looming landscape of increasingly sophisticated AI-driven attacks. The company noted, “Apple built MIE in a world before Mythos Preview,” indicating that the rapid development of AI capabilities may soon outpace traditional security measures. As organizations and individuals continue to rely on technology for everyday tasks, the stakes have never been higher for cybersecurity.

Industry experts are now left to ponder how effective the best mitigation technologies will be during what Calif refers to as the “first AI bugmageddon.” The term implies a future filled with challenges as AI continues to evolve and become more integrated into both offensive and defensive cybersecurity strategies. The race to stay ahead of these threats will require concerted efforts from technology companies, security researchers, and regulatory bodies alike.

The implications of these developments for technology firms are profound. Companies must now navigate a complex landscape where AI can be both a tool for enhancing security and a weapon used against them. This duality necessitates a reevaluation of existing security protocols and the adoption of more proactive measures to counteract emerging threats. As AI models like Mythos gain traction, organizations will need to invest in advanced detection and response capabilities that can keep pace with the evolving threat landscape.

Furthermore, regulatory bodies must also grapple with the challenges posed by AI in the context of cybersecurity. The rapid pace of AI development can outstrip existing regulations, leaving gaps that could be exploited by malicious actors. Policymakers will need to work closely with industry leaders to establish frameworks that promote security while fostering innovation.

As the cybersecurity landscape continues to evolve, the incident involving the Apple M5 exploit serves as a stark reminder of the vulnerabilities that persist, even in systems considered to be secure. The combination of advanced AI tools and human expertise presents a powerful new frontier in the ongoing battle against cyber threats. The need for vigilance and adaptability has never been greater as organizations strive to protect their assets in an increasingly interconnected world.
