Vibe coding startups: valuations grew by 350% in one year, huge revenue multiples

Cognition, Lovable, Replit, Cursor, and Vercel: these are the startups currently being showered with money by investors. What they have in common: built on LLMs, they automate coding to the point where non-programmers can create websites or web apps from simple text prompts. Interest in these young companies is so intense that hardly a week goes by without a new unicorn, or even decacorn (valued at $10 billion or more), emerging.
While vibe coding startups had a combined valuation of roughly seven to eight billion dollars in August 2024, one year later it had already passed 36 billion dollars: growth of about 350%, or 4.5x, within a single year. Taken together, these startups, most of which are barely two or three years old, report a combined annualized recurring revenue (ARR) of at least 800 million dollars, and that figure is still growing rapidly.
Investors pay enormous revenue multiples
The rapid growth of these companies has led investors to pay very high valuations in some cases, in the expectation that the startups will grow into them over the coming years. Based on announced ARR figures, investors bought shares at ARR multiples ranging from roughly 14x to 140x. Here are the revenue multiples based on publicly available figures:
- Devin (Cognition): valuation $10.2B, ARR $73M → multiple ≈ 139.7x
- Cursor (Anysphere): valuation $9B, ARR $200M → multiple ≈ 45.0x
- V0 (Vercel): valuation $9B, ARR $200M → multiple ≈ 45.0x
- Replit: valuation $3B, ARR $150M → multiple ≈ 20.0x
- Lovable: valuation $1.8B, ARR $130M → multiple ≈ 13.8x
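These multiples are simply the announced valuation divided by the reported ARR. A minimal sketch in Python, using only the publicly cited figures above (the article's numbers, not official company disclosures), reproduces them:

```python
# ARR multiple = valuation / ARR, using the publicly reported figures above
# (valuations in $B, ARR in $M).
companies = {
    "Devin (Cognition)": (10.2, 73),
    "Cursor (Anysphere)": (9.0, 200),
    "V0 (Vercel)": (9.0, 200),
    "Replit": (3.0, 150),
    "Lovable": (1.8, 130),
}

for name, (valuation_b, arr_m) in companies.items():
    multiple = valuation_b * 1_000 / arr_m  # convert $B to $M before dividing
    print(f"{name}: {multiple:.1f}x")       # 139.7x, 45.0x, 45.0x, 20.0x, 13.8x
```

The table below collects these figures alongside total funding, LLM providers, investors, and headquarters.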
| Company | ARR ($M) | Valuation ($B) | Total Funding ($M) | LLM Provider | Investors | HQ |
|---|---|---|---|---|---|---|
| Devin (Cognition), owner of Windsurf (ex-Codeium) | 73 | 10.2 | 896 | OpenAI, Anthropic | Founders Fund, Lux Capital, 8VC, Elad Gil, Definition Capital, Swish Ventures | San Francisco, CA |
| Cursor (Anysphere) | 200 | 9 | 1,100 | OpenAI, Anthropic, Google | Thrive Capital, a16z, OpenAI | San Francisco, CA |
| V0 (Vercel) | 200 | 9 | 563 | v0 (own) | Accel, 8VC, Flex Capital, SV Angel, Salesforce Ventures | Covina, CA |
| Replit | 150 | 3 | 472 | Anthropic, Google | Prysm Capital, Amex Ventures, Google, YC, Craft, a16z, Coatue, Paul Graham | Foster City, CA |
| Lovable | 130 | 1.8 | 222 | Anthropic, OpenAI, Google | Accel, Creandum, byFounders, Hummingbird Ventures | Stockholm, SWE |
| Magic | – | 1+ | 466 | LTM-1 (own) | Eric Schmidt, Atlassian, CapitalG, Jane Street, Sequoia, Nat Friedman, Daniel Gross, Elad Gil | San Francisco, CA |
| Augment | – | 1 | 252 | Anthropic | Lightspeed Venture Partners, Index Ventures, Innovation Endeavors, Meritech Capital Partners, Sutter Hill Ventures | Palo Alto, CA |
| Bolt.new | 40+ | – | 105 | Anthropic | Emergence, GV, Madrona, The Chainsmokers (Mantis), Conviction | San Francisco, CA |
Of course, the major LLM providers are also in the vibe coding race themselves. OpenAI has integrated Codex into ChatGPT, Anthropic offers Claude Code, and Google has Gemini Code Assist. At the same time, the big providers' LLMs power many vibe coding platforms, which raises the question of whether the startups will ever break even, since a large share of their revenue is passed straight on to Anthropic, OpenAI, and others.
Sweden’s Lovable is remarkable in this regard: its team of just 60 people generates an ARR of $130 million, more than $2 million per employee. The multiple investors paid in the latest financing round is “only” around 14x, so Lovable’s valuation, currently at $1.8 billion, could well climb to $4 to $5 billion in short order.
Meanwhile, new vibe coding platforms keep coming onto the market – most recently Instance from the Viennese startup Mimo and Floot from the current Y Combinator batch.
Vercel and Magic build their own coding models instead of relying on Anthropic and co.
This is precisely why startups that have trained their own AI models are so interesting. These include:
- Vercel developed its own composite LLM design for v0 because pure frontier models quickly become outdated as web frameworks evolve and are not optimized for web-app-specific goals such as fast edits or automatic error correction, while fine-tuned open-source models lag significantly behind proprietary base models in multimodal code generation. The v0-1.0 and v0-1.5 models therefore follow this composite approach.
- Magic is building its own coding LLM designed for ultra-long contexts, arguing that code quality and understanding increase dramatically if a model can draw on up to 100M tokens of project-specific context (the entire repo, documentation, libraries) at inference time rather than primarily on “learned” knowledge. To this end, Magic, founded by Austrians Eric Steinberger and Sebastian De Ro, is developing an architecture (LTM-2) that it says achieves massive cost and memory savings for very long contexts compared to classic attention (a rough estimate of why that matters follows below).
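To see why classic attention struggles at this scale, here is a rough, hypothetical back-of-envelope estimate (not Magic's numbers): with standard transformer inference, the keys and values of every past token are kept in a KV cache, which becomes enormous at 100M tokens. The model parameters below (layer count, KV heads, head dimension) are assumed purely for illustration, loosely in the range of today's large open models.

```python
# Back-of-envelope estimate with illustrative assumptions (not Magic's or any
# vendor's official numbers): size of a classic-attention KV cache at 100M tokens.

context_tokens = 100_000_000   # 100M tokens of project context
num_layers     = 126           # assumed layer count of a large model
num_kv_heads   = 8             # assumed KV heads (grouped-query attention)
head_dim       = 128           # assumed head dimension
bytes_per_val  = 2             # fp16/bf16

# Keys + values for every token, in every layer
kv_cache_bytes = 2 * num_layers * num_kv_heads * head_dim * bytes_per_val * context_tokens
print(f"KV cache: {kv_cache_bytes / 1e12:.1f} TB")           # ≈ 51.6 TB
print(f"≈ {kv_cache_bytes / (80 * 1e9):.0f} x 80 GB GPUs")   # ≈ 645 GPUs for the cache alone
```

Under these assumptions, a single 100M-token context would need hundreds of 80 GB accelerators just to hold the cache, which is why architectures that compress or replace the per-token attention memory are attractive for this use case.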
These proprietary coding models are worth watching. While many well-known providers today, such as Lovable and Cursor, depend heavily on Anthropic's LLMs and are therefore bound to Anthropic's pricing and capabilities, Vercel and Magic are taking a different path. Vercel offers the v0 models as an alternative to other LLMs, while Magic's direction is not yet clear. Steinberger and De Ro could try to win Lovable, Cursor, and others as customers, provided they manage to build a coding LLM that is superior to Claude Sonnet 4 and the like, especially on price.