Chinese AI company DeepSeek has shared cost and revenue estimates for its popular AI models, V3 and R1, implying a theoretical daily cost-profit ratio of up to 545%.
However, the company cautioned that its actual revenue is significantly lower than the theoretical estimate.
This is the first time DeepSeek has provided insights into its profit margins for “inference” tasks, where trained AI models perform actions like generating text in chatbots. The company’s claims could further impact global AI stocks, which dropped in January after DeepSeek’s models gained international attention.
Investors were surprised when DeepSeek disclosed that it spent less than $6 million on Nvidia H800 chips to train its models—far less than U.S. competitors like OpenAI, which have invested billions in more powerful hardware. This has raised questions about the necessity of such high spending in the AI industry.
In a GitHub post, DeepSeek estimated that renting H800 chips costs $2 per GPU per hour, putting total daily inference costs at $87,072. The models' theoretical daily revenue, meanwhile, could reach $562,027, which would amount to more than $200 million per year.
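The reported figures are internally consistent; a quick back-of-the-envelope check (using only the daily cost and revenue numbers from DeepSeek's post) recovers both the 545% ratio and the annualized total:

```python
# Sanity-check of DeepSeek's published figures.
# Inputs are the two numbers from the GitHub post; everything else is derived.
daily_cost = 87_072        # USD per day, H800 rental at $2 per GPU per hour
daily_revenue = 562_027    # USD per day, theoretical peak

# Cost-profit ratio: profit expressed relative to cost.
profit_ratio = (daily_revenue - daily_cost) / daily_cost
print(f"Cost-profit ratio: {profit_ratio:.0%}")        # -> 545%

# Annualized theoretical revenue.
annual_revenue = daily_revenue * 365
print(f"Annual revenue: ${annual_revenue:,.0f}")       # -> $205,139,855
```

Note that $87,072 at $48 per GPU per day ($2/hour) implies roughly 1,814 H800s serving inference on an average day, an assumption inferred from the stated rental rate rather than a figure DeepSeek reported directly.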
However, DeepSeek clarified that actual revenue is lower because some services remain free, and pricing varies based on usage and demand. Despite this, the company’s efficient approach to AI model deployment challenges the spending strategies of major U.S. AI firms.
Bijay Pokharel