
I’ll try to check, thanks!

But I usually leave things as they are, and I've never touched pip's settings. If my intervention is required, that's already different from "works out of the box". In my AI projects I've occasionally seen pip resume from where it stopped (the rare cases where you can re-run setup), but that never carries across projects.

The fact that I'm on Windows and have four different Python installations may add to the issue, but I'm well aware of which `which python` I'm using at any given time. AI projects tend to download their own Minicondas, but it's unclear why they don't share a cache (if that's even the issue). Anyway, Windows or not, the out-of-the-box user experience is barely acceptable.
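
For what it's worth, pip does respect the `PIP_CACHE_DIR` environment variable, so a single shared cache can in principle be forced across installations. A minimal sketch (the shared path below is just an assumption, substitute your own):

  # Sketch: force one shared pip cache via PIP_CACHE_DIR and verify that
  # the current interpreter's pip actually reports it.
  import os
  import subprocess
  import sys

  SHARED_CACHE = r"c:\users\user\appdata\local\pip\cache"  # assumed path

  env = dict(os.environ, PIP_CACHE_DIR=SHARED_CACHE)
  reported = subprocess.run(
      [sys.executable, "-m", "pip", "cache", "dir"],
      env=env, capture_output=True, text=True, check=True,
  ).stdout.strip()
  print(reported)  # should match SHARED_CACHE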




Update:

All my installations, including all the "condas" known to me, return

  c:\users\user\appdata\local\pip\cache
I just tried to unpack and install the same `text-generation-webui-main` that I installed yesterday, and it started downloading the 2.5 GB torch wheel again at 12 MB/s, ETA 0:03:07. I took screenshots for those who will claim it isn't true. There was also a complete, ready-to-use 2.5 GB file already sitting in pip\cache\http\...\...\...\{hashfilename}, which is very fun to navigate to, by the way. I interrupted the download and activated the new environment:

  [prompt] cd text-generation-webui-main
  [prompt] installer_files\conda\condabin\conda activate installer_files\env
  [prompt] pip cache dir
  c:\users\user\appdata\local\pip\cache
I have no idea how they do that.
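
One way to sanity-check this (not pip's own tooling, just a rough sketch): walk whatever `pip cache dir` reports from inside the activated environment and list anything over ~2 GiB, to confirm the big torch file really is sitting in the cache pip says it uses.

  # Rough check: list any cached file over ~2 GiB under the directory
  # that pip itself reports as its cache.
  import os
  import subprocess
  import sys

  cache_dir = subprocess.run(
      [sys.executable, "-m", "pip", "cache", "dir"],
      capture_output=True, text=True, check=True,
  ).stdout.strip()

  THRESHOLD = 2 * 1024**3  # ~2 GiB
  for root, _dirs, files in os.walk(cache_dir):
      for name in files:
          path = os.path.join(root, name)
          size = os.path.getsize(path)
          if size >= THRESHOLD:
              print(f"{size / 1024**3:.2f} GiB  {path}")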


Honestly, I don't know what's happening there. My only other guess is that the wheel you're downloading is so big that it somehow busts `pip`'s cache, meaning that you end up always re-downloading it. But I'm not sure there's even a cache limit like that.

It sounds like this might be a bug, either in `pip` or in the caching middleware it uses. You should consider filing a bug upstream with your observed behavior.

Edit: 2.5GB is coincidentally just over the maximum i32 value, so it's possible this is even something silly like a `seek(3)` or other file offset overflowing and resulting in a bad cache.
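
To make the arithmetic explicit (a quick sketch, assuming "2.5gb" means GiB; the conclusion also holds for decimal GB):

  # 2.5 GiB comfortably exceeds the largest signed 32-bit value, so any
  # 32-bit file size or seek offset would overflow on this wheel.
  download = int(2.5 * 1024**3)  # 2_684_354_560 bytes
  i32_max = 2**31 - 1            # 2_147_483_647
  print(download > i32_max)      # True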

Edit 2: `pip` is using `CacheControl`, which I maintain. I suspect what's happening is that `CacheControl`'s underlying use of `msgpack` fails for stores above `2^32 - 1` bytes in a single binary object, since that's the `msgpack` limit.

https://github.com/msgpack/msgpack/blob/master/spec.md

I'll try and fix that, if that's what it actually is.
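
For concreteness: the `bin 32` family in that spec stores the payload length in an unsigned 32-bit field, so a single binary object tops out at `2^32 - 1` bytes. A minimal sketch of that limit (a hypothetical guard, not CacheControl's actual code):

  # Hypothetical guard, not CacheControl's real code: msgpack's bin 32
  # header uses a 32-bit unsigned length, so one binary object can hold
  # at most 2**32 - 1 bytes.
  import msgpack

  MSGPACK_BIN_MAX = 2**32 - 1

  def pack_cached_body(body: bytes) -> bytes:
      if len(body) > MSGPACK_BIN_MAX:
          raise ValueError(f"{len(body)} bytes exceeds msgpack's bin limit")
      return msgpack.packb({"body": body}, use_bin_type=True)

  print(len(pack_cached_body(b"small payload")))  # fine; oversized bodies would be rejected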



