There is a statement still published on OpenAI's website that is worth reading again. It is from December 2015, the day the company launched. It reads:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

The statement goes further. It adds that OpenAI believes AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.
That is a remarkable thing to commit to publicly. Not a vague aspiration, but a specific set of constraints on what the organisation would and would not do. Unconstrained by financial return. Broadly and evenly distributed. For the benefit of humanity as a whole. Researchers joined the organisation taking significant pay cuts because the mission was the point. The nonprofit structure was a deliberate design choice, not just a legal formality. It was a clear signal about where OpenAI's incentives would sit relative to every other commercial technology company working on AI at the time.
In early 2026, that statement came back into focus. Not because OpenAI brought it up, but because a large number of its users did.
In late January 2026, a loose coalition of activists launched a campaign called QuitGPT. The premise was straightforward: cancel your ChatGPT subscription and move to an alternative. What started as a few Reddit posts grew quickly. By the time the campaign's website, quitgpt.org, launched in early February, it had already drawn thousands of pledges. The organisers described themselves as activists for democracy.
The initial trigger was a political one. FEC filings revealed that OpenAI president Greg Brockman had personally donated $25 million to MAGA Inc., a pro-Trump super PAC. For many users, particularly the developers and creative professionals who make up a large share of ChatGPT's power-user base, that was enough.
But the movement stayed relatively contained until February, when two events in quick succession changed its scale entirely.
On 9 February 2026, OpenAI formally launched advertising inside ChatGPT. The rollout started with users on the free and entry-level tiers in the United States. Ads were contextual, appearing at the bottom of responses when a sponsored product was deemed relevant, and clearly labelled as sponsored content. Higher-tier subscribers on the Plus, Pro, Business, and Enterprise plans were kept ad-free.
OpenAI published a set of principles alongside the launch: ads would not influence ChatGPT's answers, conversations would remain private from advertisers, and sensitive categories like health, mental health, and politics would be excluded. Sam Altman had previously described advertising as a last resort. What changed was arithmetic. The company reportedly projects advertising could generate over $100 billion in non-subscription revenue over the next five years.
The reaction among users was mixed. Critics pointed to a genuine structural tension: ChatGPT's value rests on users trusting that its answers are objective, and once sponsorship enters the picture, even clearly labelled, that trust becomes harder to secure through policy statements alone. Anthropic, OpenAI's main competitor, ran Super Bowl commercials that month mocking the concept, with glassy-eyed actors playing AI chatbots delivering advice alongside poorly targeted adverts.
Then came the event that turned a simmering boycott into something much larger. On 28 February 2026, Sam Altman announced on X that OpenAI had reached an agreement with the US Department of Defense to deploy its models in classified military networks. The announcement came hours after Anthropic's own Pentagon contract had collapsed. Anthropic had maintained two firm conditions throughout months of negotiation: no use of its AI for domestic mass surveillance of American citizens, and no use for autonomous weapons systems. The Pentagon wanted both removed. Anthropic's CEO Dario Amodei refused. The US government responded by labelling Anthropic a supply chain risk to national security, a designation normally reserved for foreign adversaries, and ordered federal agencies to stop using its technology.
OpenAI stepped in the same evening. Altman said the deal included prohibitions on domestic mass surveillance and autonomous weapons targeting. But the contrast was immediate and visible. Two companies, facing the same contract, had made opposite choices. Within hours, users responded.
According to data from market intelligence firm Sensor Tower, ChatGPT uninstalls on mobile jumped 295 per cent day-over-day on 28 February. One-star reviews for ChatGPT surged 775 per cent the same day. Claude, which had sat at number 42 in the US App Store in January, reached number one. The QuitGPT campaign claimed 2.5 million pledges within 72 hours. A Reddit thread urging users to cancel drew 30,000 upvotes. Anthropic reported that its free-user base had grown more than 60 per cent since January, with daily signups setting all-time records every day of that final week of February.
Altman later acknowledged the announcement had been rushed. He wrote that he had genuinely wanted to de-escalate the situation but admitted the move looked opportunistic and sloppy. More than 700 employees across Google and OpenAI signed an open letter titled We Will Not Be Divided, urging their leaders to stand firm against the Pentagon's demands. Legal analysts examining the contract language raised questions about whether the stated protections were enforceable. The full contract was not released publicly.
This is where it helps to go back to 2015. OpenAI's founding language was specific. It did not say OpenAI would try to benefit humanity, or aim to, or aspire to. It said the goal was to advance AI in the way most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. And it said AI should be as broadly and evenly distributed as possible, an extension of individual human wills.
Around 2025, the mission statement was quietly updated. The original commitment to ensuring that AGI benefits humanity was changed to building beneficial AI. The change is small on the page, but it is not trivial. Ensuring an outcome implies accountability; building something you intend to be beneficial is merely an aspiration. The edit was not widely reported. MIT Technology Review later described the Pentagon deal as a visible inflection point in OpenAI's drift from developing AI for the benefit of humanity toward commercial and military revenue.
The structural shift had been building for years. By 2019, OpenAI had created a for-profit subsidiary to raise the capital its research required. In 2025, it restructured again into a public benefit corporation, with the nonprofit retaining roughly a 26 per cent equity stake valued at around $130 billion, figures that imply a total valuation near $500 billion. The organisation that had been explicitly unconstrained by financial return was now worth hundreds of billions of dollars, showing ads, and deploying technology on classified military networks.
Each individual decision has a defensible case behind it. Advertising can fund access for users who cannot pay. Supplying AI to a military that will adopt it anyway has its own logic: better with safeguards than without. The structural shift to a public benefit corporation was arguably necessary to remain competitive. Taken one at a time, none of these is straightforwardly indefensible.
But the QuitGPT movement is not really about any single decision. It is about the accumulation of them, measured against a specific founding promise that is still sitting on OpenAI's website. The users who deleted the app on 28 February were not making a technical judgment about model quality. They were making a judgment about the gap between what was promised in 2015 and what they were seeing in 2026. That gap is what the debate is actually about.
For technology leaders, the broader point is worth sitting with. Mission statements are real signals of intent at the moment they are written. They attract talent, build trust, and define expectations. They are not, however, binding constraints. Organisations change as they scale and as commercial pressures accumulate. The question worth asking of any AI partner or any technology company making strong public commitments is not just what they say today, but what structural mechanisms exist to hold those values in place when the numbers on the table become large enough to make flexibility look reasonable. That is the harder question. It is also the right one.
At Itsavirus, we help technology leaders make clearer decisions about AI integration, strategy, and long-term execution. If this is relevant to conversations you are having internally, feel free to reach out.