We currently use vLLM under the hood, and vLLM doesn't support Codestral (yet). We're working on expanding our model support; hence the "(almost) any model" caveat.
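If you want to check for yourself, here's a minimal sketch (the Hugging Face model ID is an assumption): trying to load a model whose architecture isn't registered in vLLM fails with an error during engine initialization.

```python
from vllm import LLM

try:
    # vLLM resolves the model's architecture at init; an unsupported
    # architecture raises an error here rather than at generation time.
    llm = LLM(model="mistralai/Codestral-22B-v0.1")
except Exception as e:
    print(f"vLLM could not load this model: {e}")
```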
Thanks for testing! :)
https://github.com/vllm-project/vllm/issues/6479
- Billy :)