LLMs are deterministic at temperature 0 on the same hardware with the same seed, though, as long as the implementation itself is deterministic. You can use the OpenAI API with temperature=0 and a fixed seed and you'll get largely reproducible results.
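For example, with the official Python SDK it looks roughly like this (a minimal sketch; the model name and prompt are just placeholders, and OpenAI documents `seed` as best-effort reproducibility rather than a hard guarantee):

```python
# Sketch of a reproducible chat completion call (assumes the openai
# Python SDK >= 1.x; model name and prompt are placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Name three prime numbers."}],
    temperature=0,  # greedy decoding: always pick the most probable token
    seed=42,        # best-effort reproducibility across identical requests
)
print(resp.choices[0].message.content)
```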
Temperature 0 means no randomness is injected into the response: for any given input you get the exact same output, assuming the context window is also the same. Part of what makes an LLM feel more like a "thinking machine" than purely a "calculation machine" is that, at higher temperatures, it will occasionally choose a less probable next token instead of the statistically most likely one, which makes the response more "flavorful" (or at least that's my understanding of why it's done), and the temperature controls how likely the response is to diverge from its most probable outcome.
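Here's a toy sketch of what temperature does at sampling time (the vocabulary and logits below are made up purely for illustration; the mechanics are the standard softmax-with-temperature trick):

```python
# Toy illustration of temperature sampling over next-token logits.
import numpy as np

rng = np.random.default_rng(seed=0)
vocab = ["the", "a", "flavorful", "cat"]
logits = np.array([2.0, 1.5, 0.3, -1.0])  # model's raw scores for each token

def next_token(logits, temperature):
    if temperature == 0:
        return vocab[int(np.argmax(logits))]  # greedy: always the top token
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                      # softmax over scaled logits
    return rng.choice(vocab, p=probs)         # sample; higher temp = flatter distribution

print(next_token(logits, 0))    # deterministic every time
print(next_token(logits, 1.0))  # usually "the", sometimes a less likely token
```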