AI News April 11, 2026: Mythos, Muse Spark, Terafab & More

The week ending April 11, 2026, produced the kind of AI news that forces the industry to recalibrate its assumptions about what frontier models can actually do. Anthropic stunned the technology world by unveiling Claude Mythos Preview, a general-purpose model the company describes as its most capable to date, and then immediately restricting it to a closed group of cybersecurity partners.

That decision, which marks the first time in nearly seven years that a leading AI lab has publicly withheld a model due to safety concerns, reframes what responsible deployment looks like when an AI system can autonomously discover and exploit zero-day vulnerabilities across every major operating system and web browser. Anthropic’s own testing found that the vulnerabilities Mythos Preview identified had, in some cases, survived decades of human review and millions of automated security tests. The implications stretch far beyond Anthropic’s own roadmap.

Meta’s Superintelligence Labs debuted Muse Spark, the company’s first proprietary frontier model and the most direct evidence yet that Alexandr Wang’s mandate from Mark Zuckerberg is producing results. Muse Spark scored 52 on the Artificial Intelligence Index, placing it behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6, a dramatic leap from Llama 4 Maverick’s score of 18 just one year ago. Meanwhile, on the infrastructure side, Anthropic disclosed a revenue run rate that has tripled in roughly four months, and Intel’s partnership with Elon Musk’s Terafab project injected fresh momentum into the U.S. domestic semiconductor conversation.

The throughline across this week’s biggest stories is acceleration without equivalently distributed access. Mythos is real, but restricted. Muse Spark is live, but primarily inside Meta’s own product ecosystem. Anthropic’s compute deal with Google and Broadcom will not come online until 2027. The gap between what the most advanced labs are building and what developers and enterprises can actually use is widening, not shrinking, and that tension will define the next phase of AI’s commercial and regulatory arc.

Anthropic Unveils Claude Mythos Preview via Project Glasswing

Claude Mythos Preview represents what Anthropic calls a watershed moment for cybersecurity: a general-purpose frontier model that has already identified thousands of zero-day vulnerabilities, many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software. Announced on April 7, the model was not released publicly. Instead, Anthropic launched Project Glasswing, a controlled initiative that gives 12 partner organizations, including Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, Nvidia, and Palo Alto Networks, access to the model exclusively for defensive cybersecurity work. Beyond those 12 partners, 40 additional organizations will receive Mythos Preview access, and Anthropic has committed up to $100 million in usage credits to support the initiative.

The benchmark data published in Anthropic’s 244-page system card puts the capability gap into stark relief. Mythos Preview achieved 93.9% on SWE-bench Verified, 77.8% on SWE-bench Pro, 82% on Terminal-Bench 2.0, and 97.6% on USAMO 2026, each representing a double-digit lead over Claude Opus 4.6. The cybersecurity findings are more sobering than the benchmark numbers alone suggest. In one documented case, Mythos Preview fully autonomously identified and then exploited a 17-year-old remote code execution vulnerability in FreeBSD, triaged as CVE-2026-4747, that allowed anyone to gain root on a machine running NFS. In another, it wrote a web browser exploit that chained together four vulnerabilities, including a complex JIT heap spray.

The downstream consequence of this announcement extends well beyond Anthropic’s partner network. Within 48 hours of the disclosure, Treasury Secretary Bessent and Federal Reserve Chair Powell assembled the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs at Treasury headquarters to ensure banks understood the cyber risks Mythos and similar future models present. That a frontier AI announcement triggered an emergency financial sector briefing involving the Fed and Treasury underscores the degree to which AI capabilities have moved from a technology story into a systemic risk conversation. For competitors, the question is now whether a safety-first withholding strategy can coexist with commercial pressure to ship.

Source: Anthropic | https://www.anthropic.com/glasswing

Meta’s Muse Spark Signals a Proprietary Pivot by Zuckerberg

Muse Spark is a natively multimodal reasoning model with support for tool-use, visual chain of thought, and multi-agent orchestration, and the first product of a ground-up overhaul of Meta’s AI efforts under Meta Superintelligence Labs. Announced on April 8, the model arrives roughly nine months after Alexandr Wang joined Meta as chief AI officer, following the company’s $14.3 billion investment in Scale AI for a 49% stake. The release represents a calculated break from the open-weight Llama family that had defined Meta’s AI identity for years. Muse Spark is proprietary, though the company said it hopes to open-source future versions, a framing that leaves Meta’s developer community in an ambiguous position.

The business rationale for the shift becomes clearer when examining what Muse Spark is actually designed to do. The company is also experimenting with a new revenue stream by offering third-party developers access to Muse Spark’s underlying technology via a private API preview, with plans to eventually offer paid access to a wider audience. That commercial intent, which mirrors the monetization architecture of OpenAI and Anthropic, signals that Zuckerberg has concluded that free, open-weight models cannot sustain the infrastructure investment Meta now requires. Meta said its AI-related capital expenditures in 2026 will be between $115 billion and $135 billion, nearly twice its capex from the prior year.

Independently validated benchmarks from Artificial Analysis suggest Muse Spark is genuinely competitive in several categories. The model scores 80.5% on MMMU-Pro, making it the second-most capable vision model they have benchmarked, behind only Gemini 3.1 Pro Preview at 82.4%. However, agentic performance and coding are acknowledged gaps, with Meta stating it continues to invest in those areas. For developers who built workflows on Llama’s open weights, the pivot creates immediate procurement uncertainty. For advertisers, whose targeting revenue funds Meta’s entire AI bet, Muse Spark’s visual reasoning capabilities and consumer integration across Instagram, WhatsApp, and Facebook represent the real payoff.

Source: Meta | https://about.fb.com/news/2026/04/introducing-muse-spark-meta-superintelligence-labs/

Anthropic’s Revenue Surges to $30B Run Rate as Compute Deal Expands

Anthropic has signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity expected to come online starting in 2027, describing it as the company’s most significant compute commitment to date. The announcement, made on April 6, came bundled with a revenue disclosure that reframes Anthropic’s position in the AI market. The company’s revenue run rate has surpassed $30 billion, up from approximately $9 billion at the end of 2025, with more than 1,000 business customers now spending over $1 million annually, a figure that has more than doubled since February.

The infrastructure arithmetic behind the deal is significant. A Broadcom SEC filing shows the agreement includes 3.5 gigawatts of compute, an expansion of the deal the companies struck in October 2025 for more than one gigawatt of capacity, with the majority of the new infrastructure to be housed in the U.S. Anthropic CFO Krishna Rao described the move as the company building capacity to serve exponential growth while also enabling Claude to define the frontier of AI development. The deal also serves Broadcom’s strategic interests directly: following the announcement, Broadcom’s shares surged 8.5% in a single trading session, with analysts revising 2026 AI revenue targets upward and many expecting the company to exceed $25 billion in AI-specific sales by year’s end.

The scale of this compute commitment points to something that casual observers of Anthropic’s funding trajectory sometimes miss: the company is not merely growing revenue, it is building the physical infrastructure to sustain model generations that do not yet exist commercially. Dario Amodei has publicly outlined a vision of 100 gigawatts of compute capacity across the AI industry by 2028 and 300 gigawatts by 2029. The Google-Broadcom deal is a building block toward that target, and it positions Anthropic as the only frontier lab with a multi-cloud deployment footprint across AWS Bedrock, Google Vertex AI, and Microsoft Azure simultaneously.

Source: Anthropic | https://www.anthropic.com/news/google-broadcom-partnership-compute

Intel Joins Musk’s Terafab as Foundry Partner in $25B Chip Megaproject

Intel will join SpaceX and Tesla in an effort to build a new U.S. semiconductor factory in Texas, with the company stating that its ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 terawatt per year of compute to power future advances in AI and robotics. The partnership, announced April 7, gives Intel the marquee anchor customer its foundry business has been seeking since the company pivoted to an external foundry model under IDM 2.0. Intel will contribute its 18A process node, a 1.8-nanometer-class process that represents the most sophisticated semiconductor capability manufactured entirely within the United States.

The market responded decisively. Intel closed at $58.95 on April 8, up 11.42%, with trading volume reaching 179.7 million shares, about 64% above its three-month average. The Terafab facility is being constructed on the north campus of Giga Texas in Austin and encompasses both a Terrestrial Fab for Tesla AI chips targeting humanoid robotics and autonomous vehicles, and an Orbital Fab for radiation-hardened semiconductors supporting SpaceX’s satellite-based AI data centers. Critically, the SpaceX and xAI merger, completed in February 2026, created a combined entity valued at approximately $1.25 trillion and was a central catalyst for the partnership, with estimates suggesting 80% of Terafab’s compute output is directed toward orbital infrastructure.

Skepticism remains warranted. Bernstein Research estimated the true capital required to hit one terawatt of annual compute at approximately $5 trillion, more than 70% of the total annual U.S. federal budget, against the stated $25 billion project budget. Independent analysts have questioned whether Intel’s 18A node can deliver at the yields and volumes the project envisions. What is clear, however, is that this partnership hands Intel a geopolitical and commercial narrative it badly needed: advanced American chips, manufactured on American soil, for the AI and robotics systems that are defining the next decade.
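The scale mismatch Bernstein flags can be sanity-checked with simple division. In the sketch below, the $5 trillion estimate and $25 billion project budget come from the reporting above, while the roughly $7 trillion annual U.S. federal budget is an illustrative assumption used only to check the "more than 70%" framing.

```python
# Sanity check on the gap between Terafab's stated budget and Bernstein's
# estimate of the capital actually required. Federal budget is an assumption.
bernstein_estimate = 5_000_000_000_000   # $5 trillion (from the article)
stated_budget = 25_000_000_000           # $25 billion (from the article)
assumed_federal_budget = 7_000_000_000_000  # ~$7 trillion, illustrative

print(bernstein_estimate / stated_budget)         # 200x the stated budget
print(bernstein_estimate / assumed_federal_budget)  # ~0.71, consistent with ">70%"
```

Under these assumptions, the estimated requirement exceeds the announced budget by a factor of 200, which is the core of the skeptics' case.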

Source: TechCrunch | https://techcrunch.com/2026/04/07/intel-signs-on-to-elon-musks-terafab-chips-project/

OpenAI Launches Safety Fellowship Amid New Yorker Investigation

OpenAI announced the OpenAI Safety Fellowship on April 6, a new program for external researchers, engineers, and practitioners to pursue rigorous, high-impact research on the safety and alignment of advanced AI systems, running from September 14, 2026, through February 5, 2027. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving methods, agentic oversight, and high-severity misuse domains. Fellows receive a $3,850 weekly stipend and approximately $15,000 in monthly computing resources, with applications closing May 3 and fellows notified by July 25.

The timing of the announcement landed with considerable irony. Hours after OpenAI posted the fellowship, Ronan Farrow’s investigation in The New Yorker documented that OpenAI had dissolved three consecutive internal safety organizations over 22 months: the Superalignment team in May 2024, the AGI Readiness team in October 2024, and the Mission Alignment team in February 2026 after just 16 months, while the word “safely” was deleted from OpenAI’s mission statement in its IRS filings. The juxtaposition of an external fellowship announcement with the dismantling of internal safety infrastructure raises substantive questions about whether the program represents meaningful investment in alignment research or a reputational repositioning exercise.

The competitive dynamic here is not trivial. Both OpenAI and Anthropic now offer near-identical fellowship compensation packages, suggesting industry-standard norms for attracting external safety researchers have solidified around the $3,850 weekly stipend and $15,000 compute allocation figures. For AI safety researchers weighing where their work will have the most institutional impact, the structural difference between a program that works alongside intact internal safety teams versus one operating at arm’s length from a company that has dissolved its equivalent internal capacity is significant. That distinction will shape how seriously the research community engages with OpenAI’s fellowship invitation.

Source: OpenAI | https://openai.com/index/introducing-openai-safety-fellowship/

Google Integrates NotebookLM Directly into Gemini App

Google has fully integrated NotebookLM into the Gemini app, giving users a project base that connects the Gemini chat interface with its AI-powered research partner for a seamless workflow. Google announced on April 10 that the integration allows users to create and access research notebooks directly from Gemini’s side panel, eliminating the friction of switching between two separate Google applications. Users can select various sources, including PDFs, documents, website URLs, and videos, to include in their notebooks, and NotebookLM then generates outputs such as study guides, infographics, and audio and video overviews based on the uploaded content.

The strategic logic of this integration is straightforward: NotebookLM has become one of Google’s most praised AI productivity tools since its debut, and embedding it inside Gemini dramatically expands the surface area where that capability can be encountered. For enterprise users already deep in Google Workspace, the combination of Gemini’s 2-million-token context window and NotebookLM’s source-grounded synthesis capability creates a research workflow that is qualitatively different from what standalone chatbot access provides, transforming complex information into digestible formats.

For Google, the competitive pressure motivating this move is clear. OpenAI’s Deep Research feature and Anthropic’s file-handling capabilities within Claude have set new expectations for how AI assistants handle source-rich research tasks. Embedding NotebookLM into Gemini is Google’s answer: rather than building a single best-in-class research mode, it is integrating its two strongest research-adjacent products into one coherent experience, betting that its Workspace ecosystem lock-in will convert existing Google users into heavier AI product consumers.

Source: Google | https://blog.google/innovation-and-ai/products/gemini-app/notebooks-gemini-notebooklm/

AI-Attributed Tech Layoffs Surpass 37,600 in Q1 2026

According to Nikkei Asia, citing RationalFX analysis, 78,557 tech industry workers were laid off from January 1 through early April 2026, with more than 76% of affected positions located in the U.S. Approximately 37,638 of those cuts, or 47.9%, were attributed to AI implementation and workflow automation, making this the most comprehensive accounting to date of how artificial intelligence is directly reshaping the technology workforce. Among firms tracking this trend, Challenger, Gray & Christmas found that AI was cited as the leading reason for tech layoffs in March 2026, accounting for 25% of stated reasons, up from just 10% in February.

The sectoral breakdown reveals where displacement is most concentrated. Customer support and content creation roles have absorbed the heaviest cuts, as AI systems demonstrating the ability to resolve 70 to 80% of customer inquiries without human intervention have made large support teams difficult to justify at current headcount levels. The SHRM State of AI in HR 2026 report found that AI is 5.7 times more likely to shift job responsibilities than to eliminate jobs outright, a framing that captures a different and often undercovered dimension of the disruption: workers who retain their titles but face fundamentally different job definitions and higher productivity expectations with no corresponding wage adjustment.
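The headcount logic behind the support-team cuts can be made concrete with a rough staffing model. Everything in this sketch is an illustrative assumption (inquiry volume, agent throughput); only the 70 to 80% deflection range comes from the reporting above.

```python
# Rough sketch of how AI "deflection" of inquiries reshapes support headcount.
# All figures are illustrative assumptions, not data from the reporting.

def agents_needed(monthly_inquiries: int, ai_deflection_rate: float,
                  inquiries_per_agent: int = 600) -> int:
    """Agents required after AI resolves a share of inquiries autonomously."""
    residual = monthly_inquiries * (1 - ai_deflection_rate)
    # Ceiling division: you cannot staff a fraction of an agent.
    return -(-int(residual) // inquiries_per_agent)

baseline = agents_needed(120_000, 0.0)    # no AI assistance
with_ai = agents_needed(120_000, 0.75)    # mid-range of the cited 70-80%

print(baseline, with_ai)  # 200 vs. 50 agents under these assumptions
```

Under these made-up but plausible parameters, a 75% deflection rate implies a fourfold smaller team for the same inquiry volume, which is why current headcount levels become hard to justify.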

The workforce data carries an important counternarrative. IBM has reportedly tripled its entry-level hiring in 2026, reasoning that while AI can perform many entry-level tasks, cutting those roles eliminates the pipeline needed to develop future experienced workers and mid-level managers. For enterprises weighing short-term cost reduction against long-term talent attrition, IBM’s position represents a contrarian data point that challenges the simplest version of the AI-eliminates-jobs narrative. The more nuanced reality, as the Q1 data suggest, is that AI is simultaneously destroying some roles, reshaping many more, and creating genuine demand for workers who can operate alongside these systems.

Source: Tom’s Hardware | https://www.tomshardware.com/tech-industry/tech-industry-lays-off-nearly-80-000-employees-in-the-first-quarter-of-2026-almost-50-percent-of-affected-positions-cut-due-to-ai

Claude Mythos Security Findings Trigger Government and Banking Response

The governance dimension of the Mythos announcement was as significant as the technical disclosure. Anthropic did not simply release benchmark scores and leave regulators to catch up. Anthropic briefed senior U.S. government officials and industry stakeholders on Mythos Preview’s capabilities ahead of its release, and in a blog post stated it is willing to work with officials at all levels of government to ensure national security is a priority when rolling out new AI models. That proactive posture contrasts with how previous frontier model releases have been handled across the industry, where safety documentation has typically followed capability claims rather than preceded them.

Anthropic committed up to $100 million in Mythos Preview usage credits to Project Glasswing partners, and noted it has engaged in ongoing discussions with federal officials about the model’s use. The cybersecurity disclosures themselves were handled through coordinated responsible disclosure: Anthropic contracted professional security contractors to manually validate every bug report before sending it to open source maintainers and closed source vendors, and found that in 89% of the 198 manually reviewed vulnerability reports, expert contractors agreed exactly with the severity assessment Claude had assigned.

What makes the government’s response most telling is its speed. The Treasury and Fed convening financial sector CEOs within 48 hours of a frontier model announcement is unprecedented. It confirms that policymakers have absorbed the lesson that AI capability announcements can have systemic risk implications that do not wait for formal regulatory processes. For the broader AI governance conversation, Project Glasswing may end up serving as a reference model for how labs should handle capability disclosures in domains such as cybersecurity, bioweapons, and chemical synthesis, where dual-use risks are immediate and adversarial actors can move faster than regulation.

Source: Fortune | https://fortune.com/2026/04/10/bessent-powell-anthropic-mythos-ai-model-cyber-risk/

Muse Spark Benchmark Data Shows Meta’s Technical Resurgence

Muse Spark achieved 58% on Humanity’s Last Exam and 38% on FrontierScience Research when operated in Contemplating mode, which orchestrates multiple agents that reason in parallel to compete with the extreme reasoning modes of frontier models such as Gemini Deep Think and GPT Pro. Those numbers, published in Meta’s technical blog alongside the model’s April 8 release, reflect a nine-month engineering sprint that rebuilt Meta’s AI stack from the ground up. For a company whose previous frontier model attempt, Llama 4, was widely described as a disappointing benchmark manipulator, Muse Spark represents a genuine recalibration of Meta’s AI standing.

Artificial Analysis scored Muse Spark at 52 on their Intelligence Index, ahead of Claude Sonnet 4.6, GLM-5.1, MiniMax-M2.7, and Grok 4.20, and behind Gemini 3.1 Pro Preview, GPT-5.4, and Claude Opus 4.6. Meta itself acknowledged the areas where Muse Spark lags, particularly in long-horizon agentic tasks and coding workflows, which positions it as a strong consumer and multimodal model rather than a developer-first coding assistant. Muse Spark is token-efficient for its intelligence level, using 58 million output tokens to run the Intelligence Index, comparable to Gemini 3.1 Pro Preview and notably lower than Claude Opus 4.6 at 157 million tokens.

For enterprise buyers evaluating AI procurement strategies, Meta’s token efficiency data matters independently of the headline benchmark scores. In production environments where inference costs compound across millions of user interactions, a model that delivers top-five intelligence at Gemini Flash-level token usage changes the total cost of ownership calculation meaningfully. Whether Meta can convert that efficiency advantage into paid API relationships, given that developers must log in with a Facebook or Instagram account to access the current consumer version, remains the central commercial question around Muse Spark’s eventual revenue trajectory.
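The cost-of-ownership point can be illustrated with back-of-the-envelope arithmetic. Only the output-token counts (58 million for Muse Spark, 157 million for Claude Opus 4.6) come from the reporting above; the per-token price is a hypothetical placeholder, since the article does not state either model's API pricing.

```python
# Back-of-the-envelope comparison of inference spend implied by the token
# counts reported for the Intelligence Index run. The per-token price is a
# hypothetical placeholder, applied equally to both models for illustration.

def run_cost(output_tokens_millions: float, usd_per_million_output: float) -> float:
    """Total spend for a workload measured in millions of output tokens."""
    return output_tokens_millions * usd_per_million_output

ASSUMED_PRICE = 15.0  # $ per million output tokens, illustrative only

muse_spark = run_cost(58, ASSUMED_PRICE)
opus = run_cost(157, ASSUMED_PRICE)

print(f"Muse Spark: ${muse_spark:,.0f}")   # $870
print(f"Opus 4.6:   ${opus:,.0f}")         # $2,355
print(f"Ratio: {opus / muse_spark:.2f}x")  # ~2.71x at equal per-token pricing
```

The point of the sketch is that at identical per-token pricing, the same benchmark workload costs roughly 2.7 times more on the less token-efficient model, and that multiplier compounds across millions of production interactions.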

Source: Artificial Analysis | https://artificialanalysis.ai/articles/muse-spark-everything-you-need-to-know

AI Workforce Pressures Intensify: MIT, Goldman, and PwC Data

Several new institutional research outputs published this week quantify the workforce transformation that has been accelerating alongside the capability announcements. PwC’s Global AI Jobs Barometer found that workers with advanced AI skills earn 56% more than peers in identical roles without those skills, while the Dallas Fed, examining wage premiums across 205 occupations, found that AI tends to automate the codifiable tasks that entry-level workers depend on while making experienced workers more valuable.

The picture emerging from aggregate labor data is structurally uneven. Goldman Sachs, analyzing payroll data, found that the unemployment and wage gap between workers under 30 and workers aged 31 to 50 has widened sharply, while the IMF, drawing on job posting data across advanced economies, found that entry-level hiring is declining in AI-exposed fields even as wages rise for workers who hold those jobs. The manufacturing dimension of this story receives less coverage than white-collar displacement, but the numbers are material: a Deloitte survey of more than 3,200 global business leaders found that 58% are already using robotic systems guided by machine learning in their operations, with that figure rising to 80% when executives describe their plans for the next two years.

The policy gap is widening. The data show that wage premiums for AI-fluent workers are already substantial, that entry-level pipelines are contracting, and that manufacturing automation is proceeding on an accelerating timeline. None of the major AI governance frameworks being developed in Washington or Brussels is specifically designed to address the uneven distribution of those disruptions. For AI developers and investors celebrating the capability announcements that define this week’s news cycle, the workforce data represents the long tail of costs that the industry has not yet reckoned with.

Source: Future Forwarded | https://futureforwarded.substack.com/p/the-ai-labor-report-weekly-roundup

Comparative Model Benchmark Table — April 2026

Model | Developer | SWE-bench Verified | HLE Score | Intelligence Index | General Release
Claude Mythos Preview | Anthropic | 93.9% | N/A (restricted) | N/A | No (Project Glasswing only)
GPT-5.4 | OpenAI | N/A | 41.6% | Top 3 | Yes
Gemini 3.1 Pro Preview | Google | N/A | 44.7% | #1 | Yes
Claude Opus 4.6 | Anthropic | 80.8% | N/A | Top 4 | Yes
Muse Spark | Meta | Acknowledged gap | 39.9% | 52 (5th overall) | Limited (Meta products + private API)

Key Analysis

For the past three years, the dominant storyline has been about which lab’s model scores highest on which benchmark. This week, the conversation moved to something harder to quantify: what happens when a general-purpose AI system becomes capable enough that releasing it publicly creates plausible systemic risk, and how should the industry, governments, and financial institutions respond?

Anthropic’s decision to withhold Mythos from the general market, while simultaneously briefing the Federal Reserve, convening banking executives, and publishing a 244-page system card, represents a novel form of frontier model release. It is neither a full public launch nor a purely internal research exercise. Project Glasswing is something the industry has not seen before: a capability disclosure framework designed to give defenders a lead over adversaries in cybersecurity, a domain where the same model that finds vulnerabilities could be weaponized to exploit them. Whether that framework holds under commercial pressure, as competitors who may not apply the same restraint ship their own models, is the central tension to watch in the weeks ahead.

Meta’s Muse Spark debut adds a different dimension to that picture. The company’s pivot from open-weight Llama models to a proprietary, consumer-first product backed by a $115 to $135 billion capex commitment signals that Zuckerberg has concluded the AI race cannot be won by giving models away. The model’s competitive benchmark performance, combined with Meta’s distribution advantage across three billion daily active users across Facebook, Instagram, and WhatsApp, makes Muse Spark a more formidable entrant than the raw technical numbers might suggest in isolation. Distribution has always been the underrated variable in AI adoption, and no company has more of it than Meta.

The Intel-Terafab partnership, the Anthropic-Google-Broadcom compute deal, and Q1 layoff data showing nearly 38,000 AI-attributed job losses collectively tell a story about the physical and human infrastructure being reorganized around AI at a pace that policy and workforce planning have not matched. Anthropic’s revenue tripling from $9 billion to $30 billion run rate in roughly four months is not merely a financial milestone; it is evidence that enterprise AI adoption has crossed from evaluation to production at scale. The models generating that revenue are doing real work inside real organizations, and that transition from pilot to production is where the labor disruption data starts to matter most.

In the days ahead, the stories to watch include how Mythos Preview performs in practice across Project Glasswing partner deployments, whether Meta opens Muse Spark’s API to a broader developer audience and on what pricing terms, and how the Federal Reserve and Treasury translate this week’s emergency AI risk briefing into any formal guidance for the financial sector. OpenAI’s next model move, following the internal safety team scrutiny triggered by the New Yorker investigation, will also be closely watched. The AI industry has never been more capable, more commercially embedded, or more consequential, and the decisions made in the next 90 days will shape both its trajectory and its governance.
