I don't know how to test the model yet, but loading seems to have worked: when I run `nvidia-smi` in another terminal, the memory-usage column shows `5188MiB / 8192MiB`.
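For a quick sanity check without the demo UI, a minimal generation script along the lines of the model card should work. This is a sketch, assuming the `transformers` API with `trust_remote_code=True` (which the Replit model requires); the prompt and sampling parameters are just illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "replit/replit-code-v1-3b"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load replit-code-v1-3b and complete a code prompt."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        trust_remote_code=True,
        # bfloat16 keeps the 3B model within 8 GB of VRAM on GPU
        torch_dtype=torch.bfloat16 if device == "cuda" else torch.float32,
    ).to(device)
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    out = model.generate(
        inputs.input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.2,
        top_p=0.95,
        eos_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Calling e.g. `generate("def fibonacci(n):")` should print a plausible completion; the first call also downloads the weights (~10 GB) to the HF cache.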
I managed to run inference locally by installing the requirements and running `app.py` from the demo: https://huggingface.co/spaces/replit/replit-code-v1-3b-demo/...
Inference is very fast on my RTX 3070; VRAM usage peaks at roughly 6.3 GB.