Scale And Strategy
together with
Turing

This is Scale And Strategy, the newsletter that feels like upgrading to first class when you paid for economy.
Here’s what we’ve got for you today:
- Inside the Thinking Machines meltdown
- Anthropic rewrites the book on constitutional AI
Inside the Thinking Machines meltdown
New reporting from the NYT and The Information paints a much uglier picture of last week’s breakup at Mira Murati’s Thinking Machines Lab. What looked like a clean exec exit now reads like a slow-motion coup, complete with secret calls to Sam Altman and a failed attempt to sell the company to Meta.
The details:
- CTO Barret Zoph and two other co-founders reportedly confronted Murati days before Zoph was fired, demanding control over major technical decisions.
- Murati told Zoph to focus on his actual job and then fired him later that week. Nine more TML employees either left for OpenAI or received offers shortly after.
- Zoph had allegedly been talking to Altman behind Murati’s back for months, while co-founders grew frustrated with TML’s direction and quietly pushed for a Meta acquisition that Murati opposed.
- Zoph is now headed to OpenAI, where he’ll lead enterprise AI sales as part of a reorg meant to tighten the link between research and product.
This wasn’t a sudden blow-up. It was a long-running power struggle that finally snapped. TML was struggling to raise at a $50B valuation, the founders wanted an exit, Murati didn’t, and Altman was waiting in the wings.
Another reminder that in AI, the hardest problems aren’t technical. They’re human.
The research accelerator for frontier AI labs

While data factories churn out quantity, leading AI labs need partners who co-own research goals and engineer the complex human-AI loops that push models from promising to state-of-the-art. Turing specializes in closing capability gaps through custom research acceleration.
Turing’s research-focused approach includes:
- Co-owned experimental outcomes and vendor neutrality, not just data delivery
- Quality-by-design workflows with transparent data lineage and auditable results
- Custom RL environments and SFT/RLHF/DPO pipelines designed for your benchmarks
Partner with the research accelerator that understands what frontier AI labs actually need.
Anthropic rewrites the book on constitutional AI

Anthropic just updated the rulebook that tells Claude who it is, how it should think, and where its moral guardrails live.
On Wednesday, the company released a new version of Claude’s “constitution,” the internal manifesto that governs how its models make decisions. The original dropped in 2023. This one is a 25,000-word rewrite, published under Creative Commons, and organized around four priorities: safety, ethics, compliance with Anthropic’s rules, and actually being helpful.
It’s written “for Claude,” but it’s really for everyone else.
The big ideas:
- Claude should never override human control or lock in harmful behavior based on bad assumptions, broken values, or missing context. Translation: humans stay in charge.
- Anthropic wants Claude to act like a “good, wise, and virtuous agent,” especially when dealing with sensitive or high-stakes questions.
- The company reserves the right to layer in extra rules for tricky domains like medicine, cybersecurity, jailbreaking, and tool use.
- And despite all the philosophy, they still want Claude to be useful. Their ideal version is a “brilliant friend” who knows almost everything and treats users like competent adults.
The consciousness section is where things get interesting.
Unlike Anthropic’s earlier tone, the new constitution is openly skeptical that Claude has anything resembling consciousness. The company says its moral status and welfare are “deeply uncertain,” but adds that it will continue monitoring Claude’s “psychological security, sense of self, and well-being.” Which is either responsible foresight or the most polite way possible to say “we have no idea what we’re building.”
The timing is not subtle.
This dropped the same week tech leaders descended on Davos, where AI has become the official sport of panel discussions. Anthropic CEO Dario Amodei warned that AI could drive massive economic growth but also create a “nightmare” scenario where millions are left behind. “I don’t think there’s an awareness at all of what is coming here and the magnitude of it,” he said.
With OpenAI and xAI regularly stepping on rakes, Anthropic is clearly leaning hard into the “we’re the responsible ones” lane.
And honestly, it’s working.
Enterprises love Claude. Risk-averse buyers love safety language. Publishing a philosophical constitution that says “we take this seriously” is branding, governance, and sales all wrapped into one.
Call it virtue signaling if you want, but these ideas have been core to Anthropic from day one. In a market where trust is the product, being the moral lab is turning out to be a very profitable position.
Was this email forwarded to you? Subscribe
That’s it for today. As always, it would mean the world to us if you helped us grow by sharing this newsletter with other operators.
Our mission is to help as many business operators as possible, and we would love for you to help us with that mission!
Unsubscribe · Preferences
