
> For many reasons, that's just too big, we have folks in Europe that can't even clone the repo due to it's size.

What's up with folks in Europe that they can't clone a big repo, but others can? Also, it sounds like they still won't be able to clone until the change is implemented on the server side?

> This meant we were in many occasions just pushing the entire file again and again, which could be 10s of MBs per file in some cases, and you can imagine in a repo

The sentence seems to be cut off.

Also, the gifs are incredibly distracting while trying to read the article, and they are there even in reader mode.




> For many reasons, that's just too big, we have folks in Europe that can't even clone the repo due to it's size.

I read that as an anecdote; a more complete sentence would be "We had a story where someone from Europe couldn't clone the whole repo onto his laptop to use on a journey across Europe because his disk was full at the time. He has since cleared up the disk and is able to clone the repo".

I don't think it points to a larger issue with Europe not being able to handle a 180GB repo... at least I hope not.


The European Union doesn't like it when a file gets too big and powerful. It needs to be broken apart in order to give smaller files a chance of success.


Ever since they enshrined the Unix Philosophy into law, it's been touch-and-go for monorepotic corporations.


People foolishly thought the G in GDPR stood for "general" when it's actually GIANT.


My guess is that “Europe” is being used as a proxy for “high latency, low bandwidth”, especially if the person in question uses a VPN (particularly one of those terrible “SSL VPN” kludges). It’s still surprisingly common to encounter software with poor latency handling, or servers with broken window scaling, because most of the people who work on them are relatively close by and have high-bandwidth connections.
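
As a rough illustration (made-up numbers, not from the article): with a fixed, un-scaled TCP receive window you get at most one window's worth of data per round trip, so latency alone caps throughput no matter how fast the link is.

    # Throughput ceiling with a fixed (un-scaled) TCP window; values are illustrative.
    WINDOW_BYTES = 64 * 1024   # 64 KiB, the classic maximum without window scaling
    RTT_SECONDS = 0.150        # ~150 ms, roughly US West Coast <-> Europe

    max_bytes_per_sec = WINDOW_BYTES / RTT_SECONDS
    print(f"~{max_bytes_per_sec * 8 / 1e6:.1f} Mbit/s")  # ~3.5 Mbit/s, regardless of link speed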


And given the way of internal corporate networks, probably also "high failure rate": not because of "the internet", but because of the pile of corporate infrastructure needed for auditability, logging, security access control, intrusion detection, maxed-out internal links... it's amazing any of this ever functions.


Or simply how those multiply latency - I’ve seen enterprise IT dudes try to say 300ms LAN latency is good because nobody wants to troubleshoot their twisted mess of network appliances and it’s not technically down if you’re not getting an error…

(Bonus game: count the number of annual zero days they’re exposed to because each of those vendors still ships 90s-style C code)


Or high packet loss.

Every once in a while, my router used to go crazy with what seemed like packet loss (I think a memory issue).

Normal websites would become super slow for any PC or phone in the house.

But git… git would fail to clone anything not really small.

My fix was to unplug the modem and router and plug back in. :)

It took a long time to discover that the router was reporting packet loss, that the slowness the browsers were experiencing had to do with retries, and that git just crapped out.

Eventually, when git started misbehaving, I'd restart the router to fix it.

And now I have a new router. :)


Based on other responders, it sounds like high latency plus high bandwidth, which is a combination many of us have trouble wrapping our heads around. Maybe complicated by packet loss.

After COVID I had to set up a compressing proxy for Artifactory and file a bug with JFrog about it, because some of my coworkers with packet loss were getting request timeouts that npm didn't handle well at all. npm of that era didn't bother to check bytes received against Content-Length, and would then cache the truncated response as if it were correct. One of my many, many complaints about what total garbage npm was prior to ~8, when the refactoring work first started paying dividends.
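
A minimal sketch of the kind of check that was missing (illustrative Python using the requests library; this is not npm's actual code):

    import requests  # stand-in HTTP client, purely for illustration

    def fetch_checked(url: str) -> bytes:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        body = resp.content
        expected = resp.headers.get("Content-Length")
        if expected is not None and len(body) != int(expected):
            # A body truncated by packet loss must not be cached as the real artifact.
            raise IOError(f"short read: got {len(body)} of {expected} bytes")
        return body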


I can actually weigh in here. Working from Australia for another team inside Microsoft with a large monorepo on Azure DevOps. I pretty much cannot do a full (unshallow) clone of our repo because Azure DevOps cloning gets nowhere close to saturating my gigabit wired connection, and eventually, due to the sheer time it takes, something will hang up on either my end or the Azure DevOps end, to the point where I just give up.

Thankfully, we do our work almost entirely in shallow clones inside codespaces, so it's not a big deal. I hope the problems presented for the 1JS repo in this blog post are what's causing a similar size blowup in our repo, and that they can be fixed.
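
For anyone who hasn't tried that workflow, a shallow, blobless clone is roughly this (the URL is a placeholder and the wrapper is just a sketch; --depth and --filter are real git flags):

    import subprocess

    REPO_URL = "https://example.com/org/big-monorepo.git"  # placeholder, not the real repo

    # --depth 1 skips old history, and --filter=blob:none defers file contents until
    # checkout, so a high-latency or flaky link has far less to pull in one go.
    subprocess.run(
        ["git", "clone", "--depth", "1", "--filter=blob:none", REPO_URL],
        check=True,  # fail loudly if the clone dies partway
    )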


The repo is probably hosted on the west coast, meaning it has to cross the Atlantic whenever you clone it from Europe?


> What's up with folks in Europe that they can't clone a big repo, but others can?

They might be in a country with underdeveloped internet infrastructure, e.g. Germany))


I don't think there's any country in Europe with internet infrastructure as underdeveloped as the US. Most of Europe has fibre-to-the-premises, and all of Europe has consumer internet packages that are faster and cheaper than you're gonna find anywhere in the U.S.


There's (almost) no FTTH in Germany. The US used to be as bad as Germany, but it has improved significantly and is actually pretty decent these days (though connection speed is unevenly distributed).

Both countries are behind e.g. Sweden or Russia, but Germany by a much larger margin.

There's some trickery done in official statistics (e.g. by factoring in private connections that are unavailable to consumers) to make this seem better than it is, but ask anyone who lives there and you'll be surprised.


The east has fibre everywhere, but the west is still a developing country(side). Shipping code on a truck would be faster if you are not on some academic fibre net.



