What comes next? 🎶
CRWV’s hot. 🎶
But AI’s a much bigger plot. 🎶
You nailed that trade—Awesome. Wow. 🎶
But trends like this don’t have to end now…. 🎶
CRWV has (way) more than doubled since its IPO at $40/share, and at $107 as of the last close it's up >200% from its bottom shortly after the IPO. I shall now tempt fate and take a victory lap. (Hubris, after all, is terminal - ask King George III how that worked out for him after the Seven Years' War…. Spoiler alert: America.)
So what does come next? For CRWV I’ll keep it brief. I still own it in a private deal; I can’t sell it now and I wouldn’t if I could. TLDR:
Guidance for C2025 revenue increased to $5B, up from the ~$4.5B guidance at the IPO, and I still believe revenue could approach $8B (and land way north of $11B next year).
The enterprise value is ~$60B (including ~$10B+ in debt), which puts it at roughly 10x forward revenue, about in line with some pure-play cloud stocks but with 2x the growth. (Quick math in the sketch after this list.)
Like FB, another IPO of a game-changing company surrounded by doubters that broke issue and tripled off the bottom, CRWV has already tripled off its bottom. But a $100B valuation isn't insane, and that's about 2x from here.
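For those who want to check my math (fair), here's the back-of-the-envelope version in Python. The EV, debt, and revenue figures are the rough numbers above, and the equity-value math ignores cash, dilution, and everything else an actual analyst would care about:

```python
# Back-of-the-envelope math on the numbers above. All figures are this
# post's rough estimates, not live market data.
enterprise_value = 60e9               # ~$60B EV
debt = 10e9                           # ~$10B+ of debt
market_cap = enterprise_value - debt  # implied equity value, ~$50B

for forward_revenue in (5e9, 8e9):    # raised guidance vs. my estimate
    multiple = enterprise_value / forward_revenue
    print(f"EV / ${forward_revenue / 1e9:.0f}B forward revenue: {multiple:.1f}x")
# -> 12.0x at $5B guidance, 7.5x at my $8B; "around 10x" splits the difference

print(f"Implied upside to a $100B valuation: ~{100e9 / market_cap:.1f}x")  # ~2x
```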
In other words… I nailed it. 💪 And when the IPO broke I was never worried because, like I've said, I'm always right eventually. (Narrator: "He was worried." If I really had guts I'd have doubled down when the IPO broke issue.) And finally, I'm usually great on the early trade (before the breakout), but lord knows what happens once the rest of the world has figured it out. (Translation: proceed at your own risk.)
What may be more interesting is what comes next *after* CRWV….
AI Stocks?
In January I wrote about the forthcoming Year of the (GPU) Cloud. The conjecture was that CoreWeave's IPO / valuation would pop from $35B(ish) to $50-100B(ish). I was right(ish) on that, and CRWV remains the only AI "pure play" aside from NVDA, a $3T+ company. I suggested this would usher in a slew of GPU cloud IPOs from companies like Crusoe (which I own via a private market SPV) and Lambda. And there are plenty of others like Cirrascale, TensorWave, and Ori.
But if you can’t get your hands on private shares or would like to get in the game the old fashioned (publicly traded company) way, I listed a bunch of stocks in three categories: cloud / infra, crypto, and hardware. Below is that list with YTD stock performance. (The piece was originally published on January 6, 2025.)
Sooooo not great, but:
I told you I wasn’t a good stock picker (aside from finding ones before big breakouts). And I’ve also said to just wait long enough for me to be right. Plus, I did a bit of cherry picking on the far right to weed out some microcap names. (Fair? Not really, but….)
The cloud / infra space *has* moved up about 10% in less than half a year, which isn't terrible, especially considering the NASDAQ is down and the S&P is basically flat for the year.
The “DeepSeek AI crash of 2025” did come in the middle of all of this and we’re basically back to where we were before things got momentarily ugly.
But I’ll concede that trade hasn’t worked… yet.
(New) AI Tools and Infrastructure?
GPU cloud could (should?) evolve at least somewhat like legacy hosting / cloud, which started out with core infrastructure like data centers / fiber / servers. Hypervisor software from companies like VMware made virtualization possible, and then AWS ushered in the cloud computing era with its own software stack offering infrastructure as a service (IaaS); i.e., a pay-as-you-go utility that scaled easily.
Somewhere in between hosting and cloud, content distribution networks (CDNs) like Akamai pitched (successfully) that caching static assets at the edge (i.e., a distributed network of servers located geographically close to end users) would reduce latency and increase page-load speed; and, moreover, that a 1-second slowdown could reduce conversions by around 7%, page views by 11%, and customer satisfaction by 16%. (Or so said ChatGPT.) Either way, Akamai's latest stock price of $76 implies a valuation of >$10B, a far cry from its dotcom highs of almost $300/share, but still, $10B is better than a sharp stick in the eye.
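For the uninitiated, the core CDN trick is conceptually tiny; here's a toy Python sketch of it. (The latency numbers are made up purely for illustration, and real CDNs layer on invalidation, tiering, TLS termination, and a thousand other things.)

```python
import time

# Toy sketch of the CDN idea: serve repeat requests for static assets from
# a cache near the user instead of paying the trip to a distant origin.
ORIGIN_LATENCY_S = 0.200   # hypothetical cross-country round trip
EDGE_LATENCY_S = 0.020     # hypothetical nearby edge node
TTL_S = 300.0              # how long a cached asset stays fresh

cache: dict[str, tuple[float, bytes]] = {}  # url -> (expires_at, body)

def fetch_from_origin(url: str) -> bytes:
    time.sleep(ORIGIN_LATENCY_S)            # simulate the slow hop
    return b"<asset bytes>"

def edge_get(url: str) -> bytes:
    now = time.time()
    hit = cache.get(url)
    if hit and hit[0] > now:                # cache hit: the fast path
        time.sleep(EDGE_LATENCY_S)
        return hit[1]
    body = fetch_from_origin(url)           # miss: pay origin latency once
    cache[url] = (now + TTL_S, body)
    return body
```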
So what does that mean for GPU cloud / AI infrastructure? If history is any guide (and if you’ve read my blog before you know where I stand on that), the two areas of focus would be software abstraction layers and optimization / performance software and infrastructure.
Software abstraction.
We’re already seeing upstarts, growth, and M&A activity in the abstraction space. AI’s “GitHub” is Huggingface, where users share and collaborate on models. Nvidia saw the need for a better UX for accessing GPU cloud resources, so it acquired Run AI for $700M and Lepton for “several hundred million” dollars. Other “AIaaS” plays like Together AI and Fireworks AI help users easily optimize and deploy models (a sketch of that pitch below). And I’m a strategic advisor to (and, disclosure, shareholder of) Mako, which makes models run fast via an LLM that writes its own GPU kernel code.
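The pitch, in code: your application shouldn't know or care whose GPUs serve it. Here's a minimal sketch using the OpenAI-compatible API style these providers tend to expose. (The endpoint URL and model name are illustrative assumptions, not gospel; check your provider's docs before copying this.)

```python
from openai import OpenAI  # pip install openai

# The "AIaaS" abstraction in a nutshell: the app code below doesn't know
# whose GPUs serve the request. Swap base_url (and the model name) to move
# between providers. URL and model here are illustrative placeholders.
client = OpenAI(
    base_url="https://api.together.xyz/v1",  # or Fireworks, or your own cluster
    api_key="YOUR_PROVIDER_KEY",
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",   # whatever the provider hosts
    messages=[{"role": "user", "content": "Why did CRWV rip post-IPO?"}],
)
print(resp.choices[0].message.content)
```

Same few lines of app code whether the tokens come from a hyperscaler, a GPU cloud, or a box under your desk; that portability is the abstraction layer's whole value proposition.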
Next (next?) gen infrastructure.
Where I’ve seen relatively less activity (in no small part because of capital needs) is in novel infrastructure. There is no shortage of capital flowing into GPU cloud itself - be it CoreWeave’s IPO and subsequent financings, the “$1T” from Saudi Arabia’s Humain, or the “$500B” Stargate project that includes OpenAI, Crusoe, and Oracle, among others.
But what about unique approaches to where and how data (content, in Web 1.0 parlance) and models (applications) are deployed into production? ChatGPT’s pulsing black dot has dethroned the spinning hourglass / beachball / progress bar as the new universal symbol of digital dread and suspended anticipation. So it’s only natural that we soon see (if we haven’t already) the “AI CDNs” that pitch to prospective application builders that a decrease in prompt response time yields a gazillion dollars in profit.
Unlike with (legacy) CDNs, however, the bottleneck for AI prompt responses isn’t network hops or latency. But would moving or caching (parts of) models or databases closer to end users reduce the central compute requirements (enough) to offer faster responses? Cloudflare seems to think so, given it put GPUs in ~200 cities to run its Workers AI platform for inference at the edge. But the edge may be even further out at the, well, edge in 2025+. I don’t expect an H100 to get squeezed into my iPhone 16 Pro any time soon, but my phone does have a 6-core GPU integrated into its A18 Pro processor. And even if it did have an H/B/R100, wouldn’t that phone suck up bandwidth fine-tuning its models constantly? 🤔 Maybe there’s good reason for that AI CDN after all ;-)
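What might an AI CDN's fast path look like? One (very) hedged guess: cache responses at the edge, keyed by the prompt, and only burn central GPU time on misses. A crude Python sketch of that guess below; a real system would need semantic matching, per-user context, and freshness rules, none of which this toy attempts.

```python
import hashlib

# Crudest-possible sketch of an "AI CDN" fast path: exact-match prompt
# caching at the edge. Hits skip the GPUs entirely; misses pay full price.
edge_cache: dict[str, str] = {}

def prompt_key(model: str, prompt: str) -> str:
    # Key the cache on (model, prompt) so different models don't collide.
    return hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()

def answer(model: str, prompt: str, run_inference) -> str:
    key = prompt_key(model, prompt)
    if key in edge_cache:                    # hit: no pulsing black dot
        return edge_cache[key]
    result = run_inference(model, prompt)    # miss: central (or edge) GPUs
    edge_cache[key] = result
    return result
```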
Stay tuned. 🍿