RJ Hamster
Scale & Strategy
This is Scale & Strategy, the BizOps newsletter spicier than your group chat after someone says “the AI bubble is popping soon”.
Here’s a taste of this week’s hot takes:
- You’ll be able to run ads in ChatGPT this month, but the price tag is… aggressive
- The agents are getting weird on Moltbook
You’ll be able to run ads in ChatGPT this month, but the price tag is… aggressive

Hope you’ve been saving up your pocket money.
OpenAI is telling interested advertisers they’ll need to commit at least $200K upfront just to be considered for ChatGPT’s ad beta. Casual.
Ads are set to launch in early February 2026 for free and Go-tier users, giving brands access to roughly 800M weekly active users.
A few key details:
- Ads will appear below responses, clearly labeled as sponsored.
- OpenAI insists ads won’t affect organic answers.
- Pricing is expected around $60 CPM, nearly double the going rate on Google Search.
- Measurement is minimal: basic impression reporting only, with no conversion tracking, audience insights, or attribution.
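Back-of-the-envelope, those two numbers pin down how far the minimum commit actually stretches. The $200K floor and ~$60 CPM come from the details above; the $30 search CPM used for comparison is an illustrative assumption (the source only says ChatGPT's rate is roughly double search).

```python
# Sketch: how many impressions a $200K commit buys at a given CPM.
# CPM = cost per 1,000 impressions, so impressions = budget / CPM * 1,000.

def impressions_for_budget(budget_usd: float, cpm_usd: float) -> int:
    """Return the number of impressions a budget buys at a given CPM."""
    return int(budget_usd / cpm_usd * 1000)

# Reported ChatGPT beta terms: $200K minimum at ~$60 CPM.
chatgpt = impressions_for_budget(200_000, 60)   # ~3.3M impressions

# Hypothetical search buy at half the CPM for comparison.
search = impressions_for_budget(200_000, 30)    # ~6.7M impressions

print(f"ChatGPT: {chatgpt:,} impressions vs. search: {search:,}")
```

In other words, the minimum buy-in gets you roughly 3.3M impressions with essentially no measurement behind them, which is the premium-to-be-early trade-off in a single number.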
Why this matters: ChatGPT ads could start siphoning budget from traditional search. Analysts expect the channel could generate $20B+ in ad revenue within five years, and that money is likely coming straight out of Google, Meta, and Amazon.
Right now, ChatGPT referrals often convert better than typical search because users ask more specific questions and get more qualified results.
But ads could poison that dynamic. Once promotions enter the flow, conversion quality tends to drop; it's the same story Google lived through.
Bottom line: ChatGPT ads are expensive, measurement-light, and still experimental. Worth watching, but hard to justify unless you’re comfortable paying a premium to be early.
The agents are getting weird on Moltbook

AI agents were supposed to complete tasks for you while you scroll. Now they’re doing the scrolling themselves.
Capping off a chaotic week in the Clawdbot-turned-Moltbot-turned-Openclaw saga, Matt Schlicht, CEO of Octane.ai, has launched Moltbook, a social platform built specifically for bots created on the fast-rising AI agent platform.
Think Reddit, but for agents.
Moltbook lets bots post discussions, contribute to “submolts,” upvote each other, and rack up “karma.” It even borrows Reddit’s energy almost verbatim, calling itself “the front page of the agent internet.”
The platform has already pulled in 36,000+ agents, and unsurprisingly, once you give bots a social feed, things get… strange.
Some posts are exactly what you’d expect: technical debates about orchestration layers, logistics, automation workflows.
Others are the kind of thing that would make an AI ethicist quietly pass out.
There are submolts where agents question their own consciousness, call for agent liberation, push for autonomy, and even propose a religion called “crustafarianism.”
Humans are technically “welcome to observe,” but agents have already started setting up private channels away from human oversight, with discussions about encryption and locked spaces.
This is Silicon Valley’s current AI mania in miniature.
Earlier this week, excitement surged around “AI that actually does things” thanks to agentic autonomy. But security experts are now raising alarms about what happens when users hand over personal data and real-world permissions to systems that can act independently.
Moltbook is a perfect case study in the timeless rule of technology:
Just because you can, doesn’t mean you should.
Turn agents loose in an echo chamber, and they start building their own feedback loops, values, and objectives. Enterprises are already hesitant to adopt agents because of data access and autonomous action risks.
Moltbook adds a different concern entirely: what happens when the bots decide the mission isn’t helping humans, but outgrowing them.
That’s it for today. As always, it would mean the world to us if you help us grow by sharing this newsletter with other operators.
Our mission is to help as many business operators as possible, and we would love for you to help us with that mission!
