Empirical Ventures’ Post

Partner Johnathan Matlock was featured in Sifted, commenting on the implications of DeepSeek for the AI sector and its impact on public markets around the world. Read more below.

DeepSeek's claim of developing R1, a large language model, in under two months for less than $6 million starkly contrasts with OpenAI's reported annual expenditure of over $5 billion. This has disrupted market sentiment because it directly contradicts the notion that AI investors must be prepared to underwrite large capex and opex business models in order to deliver the state-of-the-art performance associated with large language models (LLMs). It directly challenges the current valuations of incumbent AI companies such as NVIDIA, which provides key infrastructure components like GPUs, and of the companies developing the latest models, such as OpenAI. Additionally, R1's efficient performance and low hardware requirements - including successful demonstrations on consumer hardware - further strain the "AI capex narrative". The worry from public markets is that this efficiency could put downward pressure on capex budgets, ultimately impacting data centre revenue and profit growth, as well as the future profitability of software revenues at the application layer (e.g. ChatGPT).

While DeepSeek's R1 performance is undeniably impressive, there is commentary suggesting that the medium-term AI winners may still be US-based. This is due to the chip import restrictions that only came into effect in October 2023 and have yet to affect the importation of the latest GPUs into China. Many commentators believe that market sentiment has responded to the $6m figure, which in reality is unlikely to reflect the total cost of developing R1 and probably covers only the cost of training the model. Additionally, the long-term inference requirements of LLMs will still demand the latest GPUs (assuming they remain the hardware of choice).
DeepSeek's R1 has prompted a re-evaluation of the LLM market along several lines:

Shift towards efficiency: The market sentiment that bigger models are inherently better is being tested. This release reinforces the idea that smaller, more specialised models can compete effectively when optimised for particular use cases. Startups and investors may begin prioritising "efficient AI" over the "scale-at-all-costs" paradigm.

Geopolitical dynamics: We have to be aware that the PR announcement of DeepSeek R1 was timed less than a week after the announcement of Stargate and its $500bn figure. This speaks heavily to the role that the geopolitical environment will play in the latest AI advancements.

https://lnkd.in/e5mvgxsn

Alexander Fink

Principal at Empirical Ventures & The Fink Family Office.


Very helpful

Sally Chalk

CEO and Founder at Signapse | AI Powered Sign Language


Very interesting - thank you. Richard Newman

