CoreWeave: AI Gold Rush
CoreWeave S1 Deep Dive
👋 Hi, it’s Rohit Malhotra and welcome to Partner Growth Newsletter, my weekly newsletter doing deep dives into the fastest-growing startups and S1 briefs. Subscribe to join readers who get Partner Growth delivered to their inbox every Wednesday morning.
Latest posts
If you’re new, not yet a subscriber, or just plain missed it, here are some of our recent editions.
Partners
Billionaires wanted it, but 66,737 everyday investors got it first… and profited
When incredibly rare and valuable assets come up for sale, it's typically the wealthiest people that end up taking home an amazing investment. But not always…
One platform is taking on the billionaires at their own game, buying up and offering shares of some of history’s most prized blue-chip artworks for its investors. In just the last few years, those investors realized representative annualized net returns like +17.6%, +17.8% and +21.5% (among assets held 1+ year).
It's called Masterworks. Their nearly $1 billion collection includes works by greats like Banksy, Picasso, and Warhol, all of which are collectively invested in by everyday investors. When Masterworks sells a painting – like the 23 it's already sold – investors reap their portion of any profits.
It's easy to get started, but offerings can sell out in minutes.
Past performance not indicative of future returns. Investing Involves Risk. See Important Disclosures at masterworks.com/cd.
Interested in sponsoring these emails? See our partnership options here.
Subscribe to the Life Self Mastery podcast, which guides you on getting funding and growing your business like a rocket ship.
Previous guests include Guy Kawasaki, Brad Feld, James Clear, Nick Huber, Shu Nyatta and 350+ incredible guests.
S1 Deep Dive
CoreWeave in one minute

CoreWeave, a cloud provider focused exclusively on AI, has filed to go public. Here’s a closer look at the business, the financials, and what it means for the AI market.
CoreWeave is a specialized cloud infrastructure provider built from the ground up for artificial intelligence. Unlike general-purpose cloud platforms like Amazon Web Services (AWS), CoreWeave focuses solely on AI workloads, offering high-performance GPUs—the chips essential for running complex AI models.
The business model is straightforward but powerful: CoreWeave buys GPUs from NVIDIA and rents them out to companies like Meta, Microsoft, and AI startups at a markup. If AWS is a massive all-you-can-eat buffet for cloud computing, CoreWeave is a high-end sushi bar that serves only AI customers. Let’s keep reading!
Introduction
CoreWeave isn’t just another cloud provider—it’s building the foundation for the next generation of artificial intelligence. While Amazon Web Services (AWS) offers general-purpose cloud computing, CoreWeave is laser-focused on AI. Its cloud platform is designed from the ground up to deliver the raw GPU power that AI models need to function and scale.
CoreWeave’s customers aren’t small startups—they’re some of the biggest players in AI: Meta, Microsoft, NVIDIA, Cohere, and Mistral. The platform powers the most ambitious AI models in the world, handling complex workloads with precision and speed.
Traditional cloud infrastructure wasn’t built for AI. AWS and Google Cloud were designed for websites, SaaS apps, and data storage—not the compute-heavy demands of AI training and inference. CoreWeave solves that problem by building a cloud platform purpose-built for AI from the ground up.
CoreWeave sources GPUs from NVIDIA and rents them out to AI labs and enterprises. The model is simple but powerful: buy compute power and sell it at a premium. In June 2023, CoreWeave’s NVIDIA H100 Tensor Core GPU training cluster completed the MLPerf benchmark test in 11 minutes—29x faster than the next-best competitor. That speed translates into faster model training and faster deployment for customers.
“We generate revenue by selling access to our AI infrastructure and proprietary managed software and application services through our CoreWeave Cloud Platform.”
— S-1 Filing
CoreWeave’s Remaining Performance Obligations (RPO)—essentially the value of its signed multi-year contracts—stands at $15.1B as of December 31, 2024, up 53% from the previous year. That’s a huge backlog, providing a clear path to future revenue.
And customers are expanding their commitments. Three of CoreWeave’s top five customers signed up for additional capacity within 12 months of their original contracts—adding $7.8B in new commitments.
AI is reshaping the global economy. IDC estimates that AI will generate $20 trillion in economic impact by 2030—equivalent to 3.5% of global GDP. But AI workloads are different from traditional computing. They require massive amounts of compute power, fast processing speeds, and infrastructure optimized for complex neural networks.
That’s where CoreWeave comes in. The company isn’t just building infrastructure—it’s enabling AI labs and enterprises to build and scale the next generation of AI products. Its platform combines raw compute power with managed services, networking, and automation tools to streamline AI model training and deployment.
CoreWeave’s infrastructure runs on a network of 32 purpose-built data centers across major metro areas, powered by over 250,000 GPUs and supported by 360 MW of active power. Total contracted power stands at 1.3 GW, giving the company room to scale further.

CoreWeave’s partnerships with NVIDIA, original equipment manufacturers (OEMs), and software providers give it a strategic edge. Its ability to rapidly scale infrastructure and secure GPU supply gives it an advantage over competitors struggling to access high-end chips.
CoreWeave isn’t just building an AI cloud—it’s shaping the future of artificial intelligence.
History
CoreWeave was founded in 2017 by Michael Intrator, Brian Venturo, and Brannin McBee with a clear vision: to build a cloud infrastructure specifically designed for the compute-heavy demands of artificial intelligence. While the major cloud providers like Amazon Web Services (AWS) and Microsoft Azure were focused on general-purpose cloud computing, CoreWeave set out to create a platform optimized for AI workloads, providing the high-performance GPUs that AI models require to function at scale.

The idea came from the founders’ early experiences in cryptocurrency mining. They recognized that the same high-performance GPUs used for mining could be repurposed to power AI models—a market that was just beginning to emerge. Seeing the growing demand for AI-specific compute infrastructure, they shifted their focus to AI and machine learning.
In the early years, CoreWeave focused on building out its infrastructure and securing GPU supply from NVIDIA. The company positioned itself as a strategic partner for AI labs and enterprises, providing scalable GPU power at a time when access to high-end chips was becoming increasingly scarce.
By 2022, CoreWeave had gained traction with major AI customers, including Meta, Microsoft, and Cohere. Its ability to source GPUs from NVIDIA gave it a competitive edge, allowing it to offer superior compute power and faster model training times compared to traditional cloud providers.
In June 2023, CoreWeave’s NVIDIA H100 Tensor Core GPU training cluster set a new MLPerf benchmark record, training a model in 11 minutes—29 times faster than the next-best competitor. This milestone underscored CoreWeave’s ability to deliver industry-leading performance for AI workloads.
To meet growing demand, CoreWeave rapidly expanded its infrastructure. By the end of 2024, the company operated 32 data centers running over 250,000 GPUs with 360 MW of active power, and it has secured total contracted power of 1.3 GW to support future growth.
Despite rapid growth, profitability remains a challenge. CoreWeave reported net losses of $31M in 2022, $594M in 2023, and $863M in 2024. However, the company’s long-term contracts—making up 96% of total revenue—provide strong revenue visibility and stability.

CoreWeave’s Remaining Performance Obligations (RPO), which reflect the value of signed contracts, reached $15.1B by the end of 2024, up 53% from the previous year. This backlog reflects the confidence of CoreWeave’s customers in the platform’s ability to deliver high-performance AI infrastructure.
Risk factors
CoreWeave’s explosive growth comes with significant challenges. Revenue jumped from $16 million in 2022 to $1.9 billion in 2024—a remarkable pace that’s unlikely to continue. Sustaining momentum will depend on execution, customer retention, and market conditions—many of which are outside the company’s control.
Growth Won’t Last Forever
CoreWeave’s revenue grew 1,346% in 2023 and 737% in 2024, but growth will likely slow as the business matures. Maintaining momentum will require:
Expanding into new markets
Retaining key customers
Competing with AWS, Microsoft, and Google
Innovating to keep pace with AI model demands
If customer acquisition slows or existing clients reduce spending, growth could stall quickly.
Profitability Challenges
Despite strong revenue, CoreWeave remains unprofitable. Net losses reached $863 million in 2024. Running high-performance AI infrastructure is capital-intensive, and profitability will depend on increasing operating efficiency and revenue growth.
Vendor Risk and Dependence on NVIDIA
CoreWeave’s business hinges on securing GPUs from NVIDIA, the sole supplier of GPUs in its infrastructure. Supply issues, price increases, or technological shifts could disrupt operations. Overall supplier concentration is also high: in 2024, three suppliers accounted for 46%, 16%, and 14% of total purchases, respectively.

Geopolitical and Market Risks
Global instability—like conflicts in the Middle East and Ukraine and tensions with China—could affect supply chains and customer spending. AI adoption is tied to broader economic conditions—if budgets tighten, CoreWeave’s growth could suffer.
Competitive Pressure
CoreWeave faces growing competition from AWS, Microsoft, Google, and AI labs like OpenAI. If rivals develop better infrastructure or pricing models, CoreWeave’s market position could weaken.
Regulatory and Compliance Risk
AI regulation is evolving. New rules on data privacy, trade, and environmental impact could increase costs and slow growth.
CoreWeave is operating in a high-growth, high-risk environment. Its ability to maintain its current trajectory depends on managing customer relationships, securing supply from NVIDIA, and expanding infrastructure efficiently. While the AI opportunity is massive, CoreWeave faces significant execution, financial, and market risks that could derail its momentum.
Market Opportunity
The AI infrastructure market is massive—and growing fast. CoreWeave is positioning itself at the center of this transformation, targeting the growing demand for AI compute, storage, and workload management. According to Bloomberg Intelligence, the market for AI infrastructure—including inference, training, and workload monitoring—will grow from $79 billion in 2023 to $399 billion by 2028, representing a 38% compound annual growth rate (CAGR).

The breakdown is significant:
$330 billion for training infrastructure (servers, storage, cloud workloads, and networking)
$49 billion for inference infrastructure
$20 billion for workload monitoring
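As a quick sanity check, the three segments sum to the $399 billion total, and the implied five-year growth rate from $79 billion in 2023 to $399 billion in 2028 works out to roughly 38%. A minimal Python sketch of that arithmetic:

```python
# Back-of-the-envelope check on the Bloomberg Intelligence figures cited above.
training = 330    # $B by 2028: training infrastructure
inference = 49    # $B by 2028: inference infrastructure
monitoring = 20   # $B by 2028: workload monitoring

total_2028 = training + inference + monitoring   # 399
total_2023 = 79                                  # $B market size in 2023

years = 2028 - 2023
cagr = (total_2028 / total_2023) ** (1 / years) - 1

print(f"2028 total: ${total_2028}B")         # 2028 total: $399B
print(f"Implied 5-year CAGR: {cagr:.1%}")    # Implied 5-year CAGR: 38.2%
```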
AI isn’t just a tech upgrade—it’s an economic shift. IDC estimates that AI will add $20 trillion to global GDP by 2030, equivalent to 3.5% of global GDP. CoreWeave is positioning itself as the go-to platform for AI labs and enterprises looking to capitalize on this growth.
AI infrastructure demand is driven by two key workloads: training and inference. Training requires high-performance GPUs to process complex models, while inference involves using trained models for real-time applications like chatbots and autonomous systems.
As AI adoption accelerates, the mix of workloads will shift toward inference. CoreWeave’s ability to handle both training and inference at scale positions it to capture a broad swath of this market. Its geographically distributed data centers will allow it to offer low-latency access for inference workloads while maintaining the raw compute power needed for training.
Growth Strategies
CoreWeave’s path to expanding its market share rests on several key strategies:
Deepen penetration with AI labs – CoreWeave already works with some of the biggest names in AI—Meta, Microsoft, and NVIDIA. Expanding relationships with these customers and securing long-term contracts is a top priority.
Expand into new industries – CoreWeave plans to target regulated industries like banking, finance, and pharmaceuticals, where AI adoption is increasing rapidly.
International expansion – Demand for AI infrastructure is growing globally. CoreWeave plans to build out its data center network in key markets to reduce latency and meet local regulatory requirements.
Vertical integration – CoreWeave aims to expand both up and down the stack—offering more software-based AI solutions while enhancing data center capabilities to control costs and improve efficiency.
Maximize infrastructure lifespan – Once contracts expire, CoreWeave plans to monetize unused infrastructure through on-demand consumption or new contract renewals.
Competitive Positioning
The opportunity is enormous, but so is the competition. AWS, Microsoft Azure, and Google Cloud are all expanding their AI infrastructure capabilities. OpenAI is rumored to be developing custom chips, which could reduce reliance on CoreWeave’s infrastructure.
CoreWeave’s edge lies in its specialized focus on AI. While traditional cloud providers offer general-purpose computing, CoreWeave’s platform is purpose-built for AI and optimized for the high-performance demands of training and inference. Its partnerships with NVIDIA and other chipmakers give it access to cutting-edge technology that competitors may struggle to match.
Product
CoreWeave is building the backbone for AI infrastructure. Its platform isn’t just cloud computing—it’s a specialized, high-performance AI compute platform designed to handle the unique demands of training and inference for complex AI models. While AWS and Microsoft Azure offer general-purpose cloud solutions, CoreWeave is laser-focused on AI, giving it a strategic edge in a rapidly growing market.
The core of CoreWeave’s offering is its modular platform, which combines compute, storage, and networking, all optimized for AI workloads. Customers, including Meta, Microsoft, and NVIDIA, rely on CoreWeave’s infrastructure to develop and deploy large-scale AI models. The platform’s ability to deliver massive amounts of compute power with low latency makes it critical for the next generation of AI applications.
How It Works
CoreWeave’s platform is built on a network of over 250,000 GPUs housed across 32 data centers worldwide, supported by 360 MW of active power and a total contracted capacity of 1.3 GW. The platform operates at scale, providing the compute power required for both training and inference workloads.

CoreWeave’s infrastructure is organized into three key service layers:
Infrastructure Services – Provides GPU-based compute power, storage, and high-speed networking designed for AI.
Managed Software Services – CoreWeave’s proprietary software automates and optimizes AI workloads, improving uptime and reducing costs.
Application Services – CoreWeave delivers pre-built AI model components and inference frameworks to accelerate deployment.
CoreWeave’s infrastructure is priced on a per-GPU-per-hour basis, with storage sold separately on a per-gigabyte-per-month basis. Long-term contracts account for 96% of revenue, providing stable, predictable cash flow.
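To illustrate how that pricing translates into a customer bill, here is a minimal sketch; the GPU count, hourly rate, and storage rate are illustrative assumptions, not CoreWeave’s published prices.

```python
# Illustrative monthly bill under per-GPU-per-hour plus per-GB-per-month pricing.
# All rates and quantities below are assumptions for illustration only.
gpus = 512                  # GPUs reserved by the customer (assumption)
gpu_rate_per_hour = 4.25    # $ per GPU-hour (assumption)
hours_per_month = 730       # average hours in a month

storage_gb = 200_000        # GB of attached storage (assumption)
storage_rate = 0.03         # $ per GB-month (assumption)

compute_bill = gpus * gpu_rate_per_hour * hours_per_month
storage_bill = storage_gb * storage_rate
total_bill = compute_bill + storage_bill

print(f"Compute: ${compute_bill:,.0f}")   # Compute: $1,588,480
print(f"Storage: ${storage_bill:,.0f}")   # Storage: $6,000
print(f"Total:   ${total_bill:,.0f}")     # Total:   $1,594,480
```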
Training and Inference
CoreWeave’s platform is designed to handle both AI training (building models) and inference (running models). Training requires massive computational power to process large datasets and refine model accuracy. Inference requires low latency and fast processing to enable real-time decision-making.
Training – CoreWeave’s NVIDIA H100 Tensor Core GPU cluster set an MLPerf benchmark record in June 2023, training a model in 11 minutes—29x faster than the next-best competitor.
Inference – CoreWeave’s distributed infrastructure allows customers to run inference workloads close to end users, reducing latency and improving performance.
Vertical Integration
CoreWeave is expanding both up and down the stack to control more of the AI compute value chain:
Up the stack – CoreWeave is adding more managed software services and pre-built AI models to increase customer engagement.
Down the stack – CoreWeave is investing in data center expansion and power capacity to control infrastructure costs and improve efficiency.
Competitive Advantage
The platform’s ability to deliver low-latency, high-performance AI compute gives it a competitive edge in the AI economy. Its deep relationship with NVIDIA ensures priority access to the latest GPUs, while its long-term contracts provide revenue stability.
The company’s infrastructure is designed to be flexible. As older GPU generations roll off contracts, CoreWeave can repurpose them for lower-priority workloads, maximizing asset life and reducing costs. This structure allows CoreWeave to serve both high-performance training workloads and cost-sensitive inference tasks.

Customers
CoreWeave’s customer base includes some of the most influential names in AI:
Meta – Using CoreWeave for large-scale model training and deployment.
Microsoft – Leveraging CoreWeave’s infrastructure for AI research and product development.
NVIDIA – CoreWeave’s infrastructure is built on NVIDIA GPUs, making it a strategic partner for future AI hardware rollouts.
CoreWeave’s growth strategy centers on expanding its infrastructure footprint and deepening customer relationships. The company plans to:
Build new data centers to meet growing demand.
Expand into regulated industries like banking and healthcare.
Increase revenue from existing customers by securing additional long-term contracts.
Improve platform efficiency to increase margins and reduce operating costs.
Partners
Amplify Labs partners with founders, CEOs, and busy professionals to build authority, generate leads, and grow audiences across LinkedIn, X (formerly Twitter), and newsletters.
We specialize in crafting high-performing written content tailored to your unique voice, goals, and niche—helping you stand out and become a go-to expert in your industry. One of our clients generated 50+ qualified leads from a single post. Another landed inbound interest from a multibillion-dollar company.
Interested in sponsoring these emails? See our partnership options here.
Business Model
CoreWeave’s business model is built around selling high-performance AI infrastructure and managed software services. The company generates revenue by providing access to its CoreWeave Cloud Platform, which delivers compute, storage, and networking power optimized for AI workloads. Pricing is structured on a per-GPU-per-hour basis, with storage billed separately on a per-gigabyte-per-month basis.

How Customers Buy from CoreWeave
CoreWeave operates a dual model: customers can access the platform through long-term committed contracts or on-demand access.
Committed Contracts
The majority of CoreWeave’s revenue comes from long-term, committed contracts with large AI labs and enterprises. In 2024, 96% of revenue came from committed contracts, up from 88% in 2023 and 20% in 2022. These contracts typically last two to five years and are structured as take-or-pay agreements, meaning customers pay for the contracted capacity whether or not they use it in full.
Prepayment Structure – Most committed contracts require a prepayment, typically covering 15% to 25% of the total contract value. Prepayments are credited against future monthly billings, helping CoreWeave fund infrastructure development.
Revenue Recognition – Revenue from committed contracts is recognized ratably over the contract term, providing predictable cash flow.
Infrastructure Build-Out – Once a contract is signed, CoreWeave secures infrastructure from suppliers (primarily NVIDIA) and installs it over a typical period of three months. Once the system goes live, revenue recognition begins.
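To make those mechanics concrete, here is a minimal sketch of how a hypothetical take-or-pay contract might flow through billings. The contract value, term, and prepayment percentage below are illustrative assumptions within the ranges described above, not figures from the S-1.

```python
# Illustrative committed (take-or-pay) contract mechanics; all figures hypothetical.
contract_value = 1_000_000_000   # $1B total contract value (assumption)
term_months = 48                 # 4-year term, within the 2-5 year range described
prepay_pct = 0.20                # prepayment within the 15-25% range described

prepayment = contract_value * prepay_pct         # collected at or near signing
monthly_revenue = contract_value / term_months   # recognized ratably over the term

# The prepayment is credited against future monthly billings until it is used up.
months_covered = prepayment / monthly_revenue

print(f"Prepayment collected up front: ${prepayment:,.0f}")        # $200,000,000
print(f"Revenue recognized per month:  ${monthly_revenue:,.0f}")   # $20,833,333
print(f"Months of billings covered by the prepayment: {months_covered:.1f}")  # 9.6
```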
CoreWeave has maintained high customer retention rates and contract expansion. Three of its top five customers increased contract value by a combined $7.8 billion within 12 months of signing their original contracts—a nearly 4x increase on initial value.
On-Demand Access
CoreWeave also offers on-demand access to its platform for customers that need additional capacity or have unpredictable workloads. On-demand revenue is billed monthly, based on actual hourly usage. While on-demand revenue accounts for a smaller portion of total sales, it plays a strategic role by allowing customers to scale up during peak demand.
CoreWeave’s infrastructure is one of the most powerful AI compute platforms in the world. As of the end of 2024, the company operated:
32 data centers with over 250,000 GPUs
360 MW of active power capacity, with total contracted power at 1.3 GW
A large fleet of NVIDIA H100, H200, GH200, and the latest Blackwell GPUs
CoreWeave was the first cloud provider to deploy NVIDIA GB200 NVL72-based instances, reinforcing its status as a market leader in AI infrastructure. The company’s ability to quickly adopt cutting-edge hardware gives it a significant advantage over competitors.
Attractive Unit Economics
CoreWeave’s model is designed for rapid cash payback. The company’s average cash payback period (including prepayments) is approximately 2.5 years, meaning CoreWeave recoups its investment in GPU infrastructure well within the life of a typical contract.
Contracts generate stable cash flow due to fixed terms and high customer retention.
Older GPUs are repurposed for inference workloads or on-demand customers, maximizing the economic life of the infrastructure.
Strong supplier relationships and efficient infrastructure deployment help CoreWeave maintain favorable margins despite high capital requirements.
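The payback arithmetic can be illustrated with back-solved numbers that reproduce the roughly 2.5-year figure; the investment, prepayment, and annual cash flow below are hypothetical assumptions, not CoreWeave disclosures.

```python
# Illustrative cash payback calculation; all figures are hypothetical assumptions.
gpu_investment = 500_000_000     # up-front infrastructure cost for one contract (assumption)
prepayment = 100_000_000         # customer prepayment received at signing (assumption)
annual_cash_flow = 160_000_000   # contract cash generation per year (assumption)

# Net outlay remaining after the prepayment, divided by annual cash generation.
payback_years = (gpu_investment - prepayment) / annual_cash_flow
print(f"Cash payback period: {payback_years:.1f} years")   # 2.5 years
```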
Sales and Go-to-Market Strategy
CoreWeave’s sales model is built around direct enterprise sales:
The company targets large AI labs and enterprises that require massive compute capacity.
Long sales cycles (6–12 months) are typical, but contract expansion is rapid once customers are onboarded.
CoreWeave’s direct sales approach is supplemented by a product-led growth (PLG) motion, allowing individual developers and smaller AI companies to access the platform via on-demand services.
Once a customer is onboarded, expanding contract value is relatively quick. For example, Microsoft and Meta both expanded their initial contract size with CoreWeave within months of launch.
Maximizing Infrastructure Life
CoreWeave’s business model is designed to extract maximum value from its infrastructure:
When contracts expire, GPUs are repurposed for lower-cost inference workloads or sold through on-demand access.
The company’s ability to extend the economic life of infrastructure allows it to maintain high margins even as hardware ages.
CoreWeave’s deep relationship with NVIDIA and ability to secure next-generation chips before competitors give it a significant edge. Its focus on AI-specific infrastructure, rather than general-purpose cloud computing, allows it to deliver higher performance and lower latency for AI workloads.
Management Team
CoreWeave’s leadership team combines founding vision with deep industry experience. The executive team includes seasoned operators and technical experts with backgrounds spanning asset management, cloud computing, and AI infrastructure. The board includes leaders from finance, technology, and venture capital—providing strategic oversight as CoreWeave scales its AI cloud platform.
Michael Intrator (55) – Chief Executive Officer, President, and Chairman of the Board
Intrator co-founded CoreWeave in 2017 and has led the company since its inception. Before CoreWeave, he was the CEO of Hudson Ridge Asset Management, a natural gas hedge fund. He also served as a Principal Portfolio Manager at Natsource Asset Management, overseeing investments in global environmental markets. Intrator holds a B.A. in Political Science from Binghamton University and an M.P.A. from Columbia University’s School of International and Public Affairs.

Brian Venturo (40) – Chief Strategy Officer and Director
Venturo is a co-founder of CoreWeave and has served on the board since 2019. He was previously the company’s Chief Technology Officer, where he played a key role in building CoreWeave’s infrastructure. Venturo previously worked as a Partner at Hudson Ridge Asset Management and as a Portfolio Manager at Natsource Asset Management, focusing on energy and emissions markets. He holds a B.A. in Economics from Haverford College.
Brannin McBee (39) – Chief Development Officer
McBee, a co-founder of CoreWeave, transitioned to Chief Development Officer in 2024 after serving as Chief Strategy Officer. He previously worked as a proprietary trader at Active Power Investments and Fourth Floor Coastal, focusing on energy and commodities markets. McBee holds a B.S. in Finance from the University of Colorado Boulder.
Nitin Agrawal (45) – Chief Financial Officer
Agrawal joined CoreWeave in 2024 from Google Cloud, where he served as Vice President of Finance. Before that, he was CFO at Mapbox and Finance Director at Amazon Web Services’ Compute Services division. Agrawal holds a Bachelor of Technology from the National Institute of Technology in Kurukshetra, India, and an M.B.A. from Duke University’s Fuqua School of Business.
Investment
CoreWeave has raised over $12 billion in equity and debt financing over the past 12 months, securing backing from some of the most influential names in private equity and venture capital. The company’s investor list includes heavyweights like Blackstone, Magnetar, Coatue, Carlyle, CDPQ, DigitalBridge Credit, BlackRock, Eldridge Industries, and Great Elm Capital Corp.
The most recent raise was a $7.5 billion debt financing in February 2025, led by Blackstone with co-lead participation from Magnetar and Coatue. The financing marks one of the largest private credit deals in history and will be used to expand CoreWeave’s fleet of high-performance GPUs and fulfill existing contracts with major AI labs and enterprises.
This latest raise follows a series of strategic funding rounds that have fueled CoreWeave’s rapid growth:
$1.1 billion Series C in May 2024 led by Coatue
$2.3 billion debt facility in August 2023 led by Blackstone and Magnetar
$1.25 billion committed to CoreWeave’s new European headquarters in London
The caliber of investors reflects CoreWeave’s dominant position in AI infrastructure. Blackstone cited AI and digital infrastructure as among its “highest conviction themes,” underscoring the strategic value of CoreWeave’s compute capacity. Magnetar was the first institutional investor in CoreWeave and continues to back the company’s aggressive growth strategy.
CoreWeave is betting big on AI workloads driving the next era of computing. The company’s cloud infrastructure is built to support the most demanding AI products—offering high-performance GPUs at scale, with software automation that optimizes compute efficiency. The latest financing positions CoreWeave to further expand its data center footprint and meet growing demand from leading AI labs and enterprises.
Competition
The AI infrastructure market is getting crowded, but CoreWeave’s positioning is distinct. While CoreWeave doesn’t have a direct rival that mirrors its specialized focus on AI-specific compute at scale, it faces competition from both general-purpose cloud providers and AI-focused platforms. The competitive landscape breaks down into three main categories:
General-Purpose Cloud Providers
CoreWeave competes with the largest players in cloud computing—Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Oracle Cloud Infrastructure, and IBM Cloud. These platforms offer a broad range of cloud services, including compute, storage, and networking. While their scale and customer base are unmatched, these providers are built for general-purpose workloads, not the specialized demands of AI training and inference. CoreWeave’s edge lies in its laser focus on AI infrastructure, delivering the highest-performance GPUs and optimized compute environments tailored to AI models.
AWS, Google Cloud, and Microsoft Azure have all invested heavily in AI, rolling out their own GPU-based instances and AI training services. However, CoreWeave’s infrastructure is designed from the ground up to handle AI workloads, providing better performance for high-intensity models and large-scale inference.
AI-Specific Cloud Providers
CoreWeave’s most direct competitors are AI-specific cloud platforms like Lambda Labs and Paperspace. Lambda Labs specializes in training large language models, offering GPU clusters on a smaller scale. Paperspace, acquired by DigitalOcean, focuses on machine learning and AI development, with an emphasis on developer-friendly tools.
CoreWeave’s advantage lies in scale and infrastructure depth. Its ability to deliver massive clusters of cutting-edge GPUs—combined with proprietary cloud orchestration and managed services—sets it apart from smaller, AI-focused players.
GPU Hardware and Cloud Providers
NVIDIA is both a key partner and a potential competitor. CoreWeave relies on NVIDIA GPUs for its infrastructure, and NVIDIA has invested in CoreWeave, aligning their strategic interests. However, NVIDIA also provides GPUs directly to other cloud providers and could, over time, prioritize its own direct-to-market offerings.
Other competitors in this category include Linode and IBM Cloud, which offer general-purpose cloud computing with some AI-specific capabilities. While these platforms lack CoreWeave’s scale in AI compute, they provide flexible infrastructure options that could appeal to customers with less specialized needs.
CoreWeave’s competitive advantage comes down to its singular focus on AI infrastructure. Unlike general cloud providers that serve a broad range of workloads, CoreWeave’s entire business is built around AI. Its ability to deliver low-latency, high-performance GPU clusters at scale gives it a technical and operational edge.
CoreWeave’s growing customer base, which includes leading AI labs and enterprises such as Cohere, Mistral, Meta, Microsoft, and IBM, reinforces its competitive positioning. Its long-term contracts provide predictable revenue and lock in customers, making it difficult for competitors to displace CoreWeave once embedded.
Financials
CoreWeave’s growth has been explosive, with revenue soaring 737% year-over-year to $1.9 billion in 2024 from $229 million in 2023. This surge was fueled by increased demand from existing customers, new long-term contracts, and the rapid scaling of infrastructure. The company's revenue model is built on multi-year committed contracts, which provide predictable cash flow and operational visibility. Despite this massive growth, CoreWeave remains unprofitable, reflecting the capital-intensive nature of the business and the significant investments required to scale AI infrastructure.
Revenue Breakdown
CoreWeave’s revenue growth is primarily driven by long-term contracts with leading AI labs and enterprises. Over 96% of 2024 revenue came from committed contracts, up from 88% in 2023. This high contract penetration reflects the strategic value of CoreWeave’s AI infrastructure and the stickiness of its customer base.
2024 Revenue: $1.9 billion (+737% YoY)
2023 Revenue: $229 million (+1,346% YoY)
2022 Revenue: $16 million
The growth in revenue demonstrates CoreWeave’s ability to rapidly scale its infrastructure and secure high-value, long-term contracts with major AI players like Meta, Cohere, and Microsoft. More than 95% of the increase in 2024 revenue came from existing customers expanding their capacity needs, indicating strong customer retention and deeper product penetration.
Sales and Marketing
Sales and marketing expenses rose modestly by 42% to $18 million in 2024 from $13 million in 2023, representing just 1% of total revenue. CoreWeave operates a direct named account strategy focused on top AI labs and enterprises, supplemented by a product-led growth (PLG) motion targeting developers.
The minimal increase in sales and marketing spend, despite 737% revenue growth, highlights the strength of CoreWeave’s market positioning and organic demand from high-value customers.

Net Loss and Margins
Despite significant investments, CoreWeave’s net loss margin improved to (45%) of revenue in 2024 from (259%) in 2023. This demonstrates that while losses remain large in absolute terms, CoreWeave is achieving better financial efficiency as it scales.
2024 Net Loss: $(863) million
2023 Net Loss: $(594) million
2022 Net Loss: $(31) million
The improvement in net loss margin reflects CoreWeave’s ability to generate operating leverage from higher contract volumes and improved unit economics.
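The margin trajectory follows directly from the reported revenue and net loss figures; a quick check using the rounded numbers cited above:

```python
# Net loss margin = net loss / revenue, using the rounded figures cited above ($M).
financials = {
    2022: {"revenue": 16, "net_loss": 31},
    2023: {"revenue": 229, "net_loss": 594},
    2024: {"revenue": 1_900, "net_loss": 863},
}

for year, f in financials.items():
    margin = f["net_loss"] / f["revenue"]
    print(f"{year}: net loss margin of ({margin:.0%})")
# 2022: (194%), 2023: (259%), 2024: (45%)
```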

Payback Period
CoreWeave’s payback period on infrastructure investments is approximately 2.5 years based on adjusted EBITDA generation from committed contracts. The long-term nature of these contracts (averaging 4 years) allows CoreWeave to recover capital costs and generate positive cash flow relatively quickly.
The capital efficiency of CoreWeave’s infrastructure investments positions the company to sustain high growth while improving operating margins over time.
The key challenge remains CoreWeave’s high interest burden and capital requirements. However, the company’s strong customer contracts and improving margins provide a clear path toward positive cash flow and eventual profitability.
Closing thoughts
CoreWeave’s 737% YoY revenue growth reflects soaring demand for AI compute, positioning CoreWeave as a critical partner for leading AI labs and enterprises. Long-term contracts with high-credit customers like Microsoft and Meta provide cash flow stability and operational visibility—rare strengths in such a fast-moving market.
However, CoreWeave’s financial model relies on capital-intensive growth and significant leverage. The company raised $7.5 billion in debt to expand its data center footprint and GPU fleet, driving interest expense to 19% of revenue in 2024. This aggressive strategy could strain margins if AI demand slows or hyperscalers internalize more compute.
The bull case hinges on the continuation of the AI compute boom. If model sizes keep growing and GPU scarcity persists, CoreWeave could reach a $40B+ valuation as the leading independent AI infrastructure provider. Margins would improve with scale and potential in-house data center builds.
The most likely outcome is that CoreWeave grows into a $20B–$30B business but faces ongoing pressure on margins and capital efficiency. The company could remain independent or become an acquisition target for a hyperscaler seeking to absorb its customer base and infrastructure. CoreWeave’s future depends on balancing growth with capital efficiency, and on whether AI demand stays as strong as it is today.
Here is my interview with Anthony Danon, GP of Rerail, a pre-seed/seed angel fund investing in founders leveraging fintech. He was previously a partner at Speedinvest and an investor at Anthemis.
In this conversation, Anthony and I discuss:
What unique value proposition does Rerail offer to founders compared to larger, established fintech VCs?
What’s Anthony’s perspective on the role of AI in shaping the future of fintech?
What are Anthony’s biggest lessons on knowing the right time to sell?
If you enjoyed our analysis, we’d very much appreciate you sharing with a friend.
Tweets of the week
10 bits of wisdom from Charlie Munger:
— Shaan Puri (@ShaanVP)
5:25 PM • Mar 14, 2025
how to be unstoppable at sales:
1) speak less, mine for info
2) sell the sizzle, not the steak
3) believe deeply in what you're selling
4) be the most prepared in a meeting
5) competitor jealousy is one helluva drug
6) get them to say "yes" fast & frequently
7) quiet sales is… x.com/i/web/status/1…
— Alex Lieberman (@businessbarista)
2:17 PM • Mar 13, 2025
That extra hour of sleep will 2x your productivity.
Invest in your health = Invest in your startup
— Marc Lou (@marc_louvion)
12:45 PM • Mar 8, 2025
Sage advice from Sam Zell:
— David Senra (@FoundersPodcast)
7:13 PM • Mar 9, 2025
Here are the options I have for us to work together. If any of them are interesting to you - hit me up!
Sponsor this newsletter: Reach thousands of tech leaders
Upgrade your subscription: Read subscriber-only posts and get access to our community
Amplify Labs: We help you grow your audience on LinkedIn, X (formerly Twitter), and newsletters.
Coaching: I offer 1:1 live video consulting calls to give you personalized advice
Subscribe to my YouTube channel, Your Learning Playground, with 350+ podcast episodes. Previous guests include Guy Kawasaki, Brad Feld, James Clear, and Shu Nyatta.
And that’s it from me. See you next week.