>Inference cost - DeepSeek is charging less than OpenAI to use its public API, but that isn't an indicator of anything since it doesn't reflect the actual cost of operation.
Again, the weights are public. You can run the full-fat version of R1 on your own hardware or on a cloud provider of your choice, and the measured inference costs match what DeepSeek are claiming. The reason is plain from the architecture: R1 is a sparse mixture-of-experts model, so of its ~671B total parameters only ~37B are active per token, which puts its per-token compute in the range of a much smaller dense model. Either the incumbents are secretly making enormous margins on inference, or they're vastly less efficient; in the first case they're in trouble, in the second case they're in real trouble.
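To make "plain from the architecture" concrete, here's a back-of-the-envelope sketch of why per-token cost tracks *active* parameters rather than total parameters. It uses the standard ~2 FLOPs per active parameter per generated token approximation; the GPU throughput and rental price are illustrative assumptions I've picked for the example, so treat the ratio between the two models, not the absolute dollar figures, as the takeaway.

```python
# Back-of-the-envelope inference cost: dense model vs. MoE.
# Assumed (illustrative, not measured): effective FLOP/s per GPU
# after batching/utilization losses, and the hourly rental price.
GPU_FLOPS = 1.0e15      # effective FLOP/s per GPU (assumption)
GPU_COST_PER_HR = 2.0   # $/hour rental price (assumption)

def cost_per_million_tokens(active_params: float) -> float:
    """Estimate $ per 1M generated tokens from the active parameter count."""
    flops_per_token = 2 * active_params           # ~2*N rule of thumb
    tokens_per_sec = GPU_FLOPS / flops_per_token  # throughput on one GPU
    cost_per_sec = GPU_COST_PER_HR / 3600
    return cost_per_sec / tokens_per_sec * 1e6

# Llama-3-70B is dense: all 70B parameters touch every token.
# R1 is MoE: ~671B total parameters, but only ~37B active per token.
llama = cost_per_million_tokens(70e9)
r1 = cost_per_million_tokens(37e9)
print(f"Llama-3-70B (dense, 70B active):  ~${llama:.3f} / 1M tokens")
print(f"R1 (MoE, 671B total, 37B active): ~${r1:.3f} / 1M tokens")
print(f"ratio: {llama / r1:.2f}x  # cost tracks active params, not total")
```

The absolute numbers are only as good as the assumed throughput and price, and real serving adds memory and batching overheads, but the structure of the estimate is the point: compute per token scales with active parameters, so a 37B-active MoE sits in the same cost class as a mid-size dense model.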
>R1's inference costs are in the same ballpark as Llama 3 and every other similar model in its class. People are just reading and repeating "it is cheap!!" ad nauseam without any actual data to back it up.
As above: the data is easy to get, because the weights are public. Run full R1 yourself or on a cloud provider and measure; the per-token cost comes out where DeepSeek say it does. And "same ballpark as Llama 3" is the point, not a rebuttal: if an R1-class model serves at Llama-class cost, then the incumbents charging far more are either making enormous margins on inference or are vastly less efficient, and neither is a comfortable position.