Caitlin Kalinowski, who led OpenAI’s robotics division, announced her resignation on March 7, 2026, highlighting deep concerns about the company’s recent agreement with the Pentagon. In a post on X, she described the decision as one rooted in principle, stating that AI plays a vital role in national security but that issues like surveillance of Americans without judicial oversight and lethal autonomy without human authorization required far more careful discussion.
Kalinowski, who joined OpenAI in November 2024 after leading augmented reality hardware efforts at Meta, emphasized that her departure stemmed from the rushed nature of the announcement. “The announcement was rushed without the guardrails defined,” she wrote. “It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.”
The agreement, finalized in late February 2026, allows OpenAI’s AI models to be deployed in classified military environments for applications such as cybersecurity, intelligence analysis, and logistics. OpenAI outlined a multi-layered approach to safeguards, including cloud-only deployment, retention of its safety stack, and contractual prohibitions against three key red lines: mass domestic surveillance of U.S. persons, autonomous weapon systems, and high-stakes automated decisions resembling social credit systems.
The deal followed a breakdown in talks between the Pentagon and rival Anthropic, whose CEO Dario Amodei insisted on strict limits against mass surveillance and fully autonomous lethal uses. The Trump administration directed federal agencies to halt use of Anthropic’s technology, and the Pentagon labeled the company a supply chain risk, a designation typically applied to foreign adversaries. OpenAI stepped in, describing its pact as providing a “workable path for responsible national security uses of AI” while upholding similar red lines.
OpenAI confirmed Kalinowski’s exit and reiterated its position in statements to outlets including TechCrunch and Reuters. The company pledged ongoing dialogue with employees, government officials, civil society, and global communities on these sensitive topics.
The partnership has sparked broader controversy. Consumer backlash was swift: Sensor Tower data showed U.S. uninstalls of the ChatGPT mobile app surged 295% on the day following the deal’s announcement compared to typical rates. Downloads declined as well, while Anthropic’s Claude app saw significant gains, climbing to the top of U.S. App Store free charts in early March.
CEO Sam Altman acknowledged in public comments that the agreement appeared “opportunistic and sloppy,” noting amendments to strengthen language against intentional domestic surveillance of U.S. persons and nationals. Access by intelligence agencies would require follow-on modifications to the agreement.
This episode underscores growing tensions in the AI industry over military applications. As companies race to secure government contracts, building on a prior $200 million multi-vendor deal from June 2025 involving OpenAI, Anthropic, Google, and xAI, the debate intensifies over how to balance innovation with ethical constraints. Kalinowski’s resignation, coming from a senior leader in physical AI systems, amplifies calls for deliberate governance in an era where AI’s role in defense grows ever more prominent.
