Why not install an older version of CUDA and use PyTorch with it for machine learning anyway? It's still possible to learn the principles with an older GPU like this, although you'll get less than half the performance of a bottom-of-the-range Ampere card.
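If you go that route, a quick sanity check like the sketch below helps confirm the old card is actually usable. It assumes you've installed a PyTorch wheel built against the same CUDA version as your toolkit; the exact versions are up to your setup.

```python
# Minimal sanity check (assumes a PyTorch build matching the installed,
# older CUDA toolkit): verify the legacy GPU is visible and run a small
# matmul on it to confirm the stack works end to end.
import torch

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Using {name} (compute capability {major}.{minor})")
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x  # exercise the CUDA kernels with a small matmul
    torch.cuda.synchronize()
    print("CUDA matmul OK:", y.shape)
else:
    print("No CUDA device visible; falling back to CPU")
```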
Every time I tried that route (for lack of budget), I ended up with obsolete APIs and libraries, including an old glibc and old Linux versions just to match the GPU's era, plus hours of compiling from source to get it running on new hardware and a new kernel. As a reward you gain a lot of internal knowledge, but you rarely get to see CUDA and PyTorch work as expected.
I could, yes. Yet it makes more sense to run what I need on my work laptop or, if I ever really need it, rent a cloud GPU on demand, since I'm not fiddling with the lower layers of the ML tooling but learning the basics.