By Krystal Hu and Stephen Nellis
(Reuters) – Specialized cloud computing provider CoreWeave has raised $200 million in funding from its existing investor Magnetar Capital, highlighting investor interest in backing infrastructure powering the generative AI boom.
The funding, which valued the company at more than $2 billion, comes weeks after CoreWeave raised $221 million from investors including Magnetar Capital and Nvidia.
CoreWeave specializes in providing cloud computing services based on graphics processing units (GPUs), the category of chip pioneered by Nvidia that has become central to artificial intelligence (AI) services like OpenAI’s ChatGPT.
The capital will be used to fund the company's expansion, including building out six data centers across the U.S., and for potential acquisitions, Brannin McBee, CoreWeave co-founder and chief strategy officer, said in an interview.
CoreWeave is in talks with lenders to raise debt for GPU purchases, he added, without giving details.
CoreWeave previously focused on serving the cryptocurrency mining market but has seen sharp growth since pivoting to AI, where startups raised more than $12 billion in the first quarter of 2023 in a gold rush kicked off by ChatGPT.
CoreWeave sells computing power to those AI companies, competing with cloud computing service providers such as Microsoft Azure and Amazon’s AWS.
CoreWeave aims to stand out by building its data centers differently for AI work, using a networking technology called InfiniBand to link computers together instead of the Ethernet cabling that is the current standard in most data centers, McBee said.
InfiniBand, widely used in supercomputers, is known for delivering high performance with low latency.
McBee also pointed to his firm's close partnership with Nvidia. Access to chip supply is key in a market where Nvidia has strained to meet demand for its AI chips.
CoreWeave has put Nvidia’s latest H100 chips into use before many of its larger rivals.
“We’re the only provider of that infrastructure at scale and an InfiniBand environment,” McBee said. “Nvidia wants their products to be as good as possible and to be accessible. We hit both those check boxes for them the way that no one else does.”
(Reporting by Krystal Hu in New York and Stephen Nellis in San Francisco; Editing by Himani Sarkar)