For a few years in the mid-2000s, Digg was the front page of the internet.
Founded in 2004 by Kevin Rose, it grew into one of Silicon Valley's most-watched startups. The mechanics were simple: users voted stories up or down, and the best rose to the top. By 2008, Digg was attracting over 236 million visitors a year, and Google entered advanced acquisition talks to buy it for a reported $200 million. The deal fell through. Reddit, launched the year after Digg in 2005, coexisted with it and competed directly for a while.
Even at its peak, Digg had trust problems. As early as 2006, users tried to game the voting system. Organized groups buried content they disagreed with. Others charged hundreds of dollars to push stories to the front page. Digg hired engineers to build diversity algorithms to fight it. It was a constant arms race, but a manageable one, because gaming the system still required humans spending real time.
Then Digg redesigned in 2010. The community revolted. Its audience dropped by a quarter in a single month, and much of its user base migrated to Reddit. Alexis Ohanian, Reddit's co-founder, wrote an open letter at the time, arguing that the redesign had departed from what made Digg special: giving the power back to the people.
The site never recovered. In 2012, the company that once nearly sold for $200 million was broken up and sold in parts. The brand went for $500,000. The domain changed hands again. The name became a footnote.
Last year, Rose bought it back, together with Ohanian. They raised money, rebuilt from scratch, and opened a Groundbreakers early-access program that hit its 23,000-member cap within days. The demand was real. In January 2026, they launched a full public beta. The vision: a modern community platform with real moderation, verified users, and an open algorithm. A second chance.
On March 13, 2026, eight weeks after launch, the CEO published a post on the homepage. Most of the team was gone. The app was gone. The post-mortem was brief:
"Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us." (Justin Mezzell, CEO of Digg, March 13, 2026.)
The bot problem was real. So was another problem the post-mortem buried in a single sentence: "positioning Digg as simply an alternative to incumbents wasn't imaginative enough." This essay is about the first problem, the one that doesn't get fixed by better moderation, and why it's going to keep happening to every platform that comes after Digg.
Digg's failure is being reported as a moderation story: overwhelmed by bots, couldn't keep up, needed better vendors. Some outlets are framing it as a Dead Internet moment, the long-running theory that most online activity is already synthetic. That framing is closer to the truth, but it still misses the actual mechanism.
What hit Digg wasn't chaos. It was the same incentive that's driven spam since Google made links valuable: flood a platform with inauthentic content, extract the authority, move on. For years that required real human effort: content farms, hired posters, organized link schemes. Costly, detectable, containable. What changed is the cost dropped to near-zero.
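The scale of that cost collapse is easy to make concrete with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not figures from Digg's post-mortem: a rough piecework rate for a human content farm against rough per-token pricing for a language-model agent.

```python
# Back-of-envelope: cost to flood a platform with 10,000 plausible posts,
# human content farm vs. LLM-driven agents. All numbers are illustrative
# assumptions, not reported figures.

POSTS = 10_000

# Human content farm: assume ~10 minutes per post at a $3/hr piecework rate.
human_cost = POSTS * (10 / 60) * 3.00

# LLM agent: assume ~1,000 tokens per post at $3 per million tokens.
agent_cost = POSTS * 1_000 * (3.00 / 1_000_000)

print(f"content farm: ${human_cost:,.0f}")   # $5,000
print(f"agent swarm:  ${agent_cost:,.2f}")   # $30.00
print(f"cost ratio:   {human_cost / agent_cost:,.0f}x")
```

Even if the assumed rates are off by an order of magnitude in either direction, the ratio stays in the hundreds: what was a payroll line item becomes a rounding error.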
Same week. Two stories. Nobody connected them.
The first: "A registry where agents are verified and tethered to human owners." (Vishal Shah, VP of AI Products at Meta, as reported by Axios, March 10, 2026.) The identity infrastructure for AI agents, acquired by the largest social platform on earth.
The second: Digg's. Tens of thousands of inauthentic accounts, many producing AI content indistinguishable from genuine posts. Industry-standard moderation vendors. None of it was enough. When you can't trust the votes, comments, and links, the platform has nothing.
Digg was killed by the unverified version of AI agents: anonymous, unaccountable, cheap to spin up, impossible to distinguish from genuine users. Moltbook built the verified version: every agent linked to a real human account, every action technically attributable to an owner. Meta bought it.
Think about what that means. The "Wild West" killed Digg. Meta just bought the deed to the ranch.
The identity layer for AI agents is going to be owned by whoever controls the registry. Right now that's Meta. Meta has done this before. Instagram was a small photo app. WhatsApp was a scrappy messenger. Meta bought both, scaled them to billions of users, and they became defining infrastructure. Moltbook is tiny today. The question is whether its agent registry becomes what WhatsApp became for messaging. When one company with 3.6 billion daily active users decides which agents are verified, the question changes. It's no longer "how do we stop inauthentic content?" It becomes: who gets to decide what authentic even means?
This isn't humans versus bots. It's small platforms versus whoever ends up owning the identity infrastructure. Digg was an early casualty. The thing being fought over is much bigger than Digg.
Every platform that collapses sends its users to Reddit, Meta, or Discord.
Every platform that collapses under inauthentic traffic sends its users somewhere, and somewhere means Reddit, Meta, or Discord: the platforms big enough to defend themselves. Authority arbitrage, flooding a platform to extract its credibility, is a tax on being small. Every small platform that dies makes the big ones stronger. Nobody writes about that part.
Content is infinite. Engagement is infinite. Only one thing is scarce now.
The existing internet business model rewards manufacturing engagement. The raw material of authority arbitrage is now free. The thing that became genuinely scarce: verified human presence. There is no price signal for it anywhere online. The first platform that figures out how to charge for verified human presence will be hard to kill.
Friction is the product.
The right question is not "how do we build trust?" That has no good answer when agents participate for free. The right question: what creates friction for agents that does not create friction for humans? That's a narrow design space. Most current platforms fail the test.
Survivors will look like clubs, not networks.
The invite-only answer is obvious. Everyone lands on it. A one-time $5 signup fee killed 99% of MetaFilter's spam when it was introduced in 2004. It still works. What's less obvious is what happens at scale.
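Why a trivial fee works is pure economics: it is negligible for a legitimate user, who needs one account, but multiplies across the disposable accounts a spam operation burns through. A sketch with illustrative numbers (the account counts and ban rates below are assumptions, not MetaFilter's data):

```python
# Why a one-time signup fee reprices spam. The fee is paid once per
# account, so it compounds for operations whose accounts keep getting
# banned and re-registered. Illustrative numbers only.

FEE = 5.00  # one-time signup fee per account

def campaign_cost(accounts_needed: int, survival_rate: float) -> float:
    """Cost to keep `accounts_needed` accounts alive when moderation
    bans a fraction (1 - survival_rate), forcing re-registration."""
    return FEE * accounts_needed / survival_rate

# A human participant: one account, never banned.
human = campaign_cost(1, 1.0)

# A spam operation: assume 1,000 live accounts needed, 90% banned on sight.
spammer = campaign_cost(1_000, 0.10)

print(f"human: ${human:.2f}")        # $5.00, once
print(f"spammer: ${spammer:,.0f}")   # ~$50,000
```

The asymmetry is the whole design: the same $5 is noise to a person and a balance-sheet problem to an operation, which is exactly the "friction for agents but not for humans" test from the previous section.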
The clubs survive. But Meta just bought the verified identity layer for AI agents. If one company owns the registry that says which agents are real and which aren't, then the question isn't "how do we stop fake content?" The question is who controls the definition of real.
Small communities with real friction will be fine. The problem is the platforms that reach a billion people. Those are consolidating around whoever controls the identity infrastructure. Right now that's shaping up to be Meta. The open internet isn't just being overrun. It's being enclosed.