What the Anthropic AI safety saga is really all about

By Lisa Eadicicco, David Goldman, CNN

(CNN) — Anthropic has reached a familiar crossroads for a growing tech company: how to scale without compromising the principles that set it apart.

The AI company has made safety its guiding principle. It advocated for AI regulation and called for worker protections as AI replaces some human tasks. Anthropic has worked hard to send a specific message to customers: We’re the good guys.

Yet the self-imposed guardrails the company laid down to build that brand may now be obstacles to its success.

This week, the Pentagon gave Anthropic an ultimatum: Drop your AI ethical restrictions or lose your $200 million contract and face a blacklisting. Separately, also this week, Anthropic loosened its core safety policy to allow the company more freedom to grow in a competitive and fast-moving market.

It’s unclear how this week will play out for Anthropic’s business and its reputation, but its decisions will be consequential.

We know that because Anthropic’s dilemma is a familiar one in the tech industry. Many companies tout their values and morality, only to be confronted with tough decisions that force them to choose between growth and maintaining those ideals.

Anthropic may want to take note.

OpenAI and the weekend of Sam

Just over two years ago, Anthropic’s biggest rival grappled with dissent over growth at the cost of safety.

In one of the most bizarre boardroom dramas in corporate history, Anthropic’s chief rival OpenAI abruptly fired its founder and CEO Sam Altman on a November Friday in 2023, only to rehire him the following Tuesday.

The saga involved a unique corporate structure that placed the fast-growing, for-profit company behind ChatGPT under the auspices of a nonprofit board. Four years earlier, the company had written into its charter that OpenAI remained “concerned” about AI’s potential to “cause rapid change” for humanity. The company’s overseers feared that Altman was moving so fast that he risked undermining the safety the company pledged to provide.

But firing Altman prompted threats of a mass exodus of employees – an untenable situation that could have destroyed the company. Just days later, the board rehired Altman. The board dissolved soon after, and last year Altman restructured the company to free it of its nonprofit overseer.

OpenAI has since struggled to balance speed and safety, facing several lawsuits that claim its products convinced young people to harm themselves. OpenAI denies those claims.

Apple and the San Bernardino shooter

Syed Farook and his wife, Tashfeen Malik, murdered 14 people at the Inland Regional Center in San Bernardino, California, in December 2015. The couple later died in a shootout with police.

Investigators obtained permission to retrieve data from Farook’s iPhone, but they couldn’t get inside because it had been locked with a passcode. A California judge ordered Apple to help law enforcement officers access the phone.

But in an open letter, signed by Apple CEO Tim Cook, the company refused. Cook said the judge’s order would open “a backdoor to the iPhone,” which was “something we consider too dangerous to create.” The company said it had no sympathy for terrorists, but complying with the order would give government authorities “power to reach into anyone’s device to capture their data.”

Apple received tremendous flak for its decision – including from then-presidential candidate Donald Trump. But it has since garnered widespread praise for standing up for its customers’ privacy, which has become synonymous with the company’s brand.

The company now routinely touts that it won’t sell customer data or store certain personal information on its servers, trying to differentiate itself from Google, one of its main competitors.

Etsy vs. solo sellers

As Amazon’s e-commerce empire was just starting to take off in the early 2000s, Etsy emerged as a novel alternative where shoppers could find unique handmade goods.

But it made a controversial change in 2013 that threatened to challenge that ethos. It broadened its policy to allow sellers to use manufacturers and outsource operations, sparking concerns at the time that it would no longer provide a fair playing field for small independent sellers without the resources to hire staff.

Still, that decision was critical for Etsy to expand into the marketplace it is today, which lists more than 100 million items for sale from roughly 8 million active sellers.

“From a business point of view, it worked out for Etsy, but it was a difficult moment for the company,” said Arun Sundararajan, director of NYU Stern’s Fubon Center for Technology, Business and Innovation.

What’s next for Anthropic

These case studies offer a cautionary roadmap for Anthropic.

For now, the biggest near-term consequence for Anthropic is likely to be how much clients and potential customers trust and value the company, said Owen Daniels, associate director of analysis at Georgetown’s Center for Security and Emerging Technology.

Anthropic said its self-imposed safety measures were always meant to be flexible and subject to change as AI evolves. It pledged to be transparent about safety in the future and said it really didn’t have a choice: If it stopped growing, rivals that don’t value safety as much could push ahead and make AI “less safe” overall.

It’s unclear what will come of Anthropic’s change, because AI’s existential risks are still largely “conceptual,” noted Sundararajan.

He said he’d be skeptical of any expert who called this an important moment in AI safety. But it could be an important moment for the company.

“Pulling back from a particular safety promise here by Anthropic, to me, is more about Anthropic and less about the future of AI,” he said.

The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
