Hacker News
Dead googlecode URLs on GitHub (github.com/search)
79 points by ivank on July 22, 2016 | 35 comments



As a reminder to Go package maintainers: Please tag your releases. Many companies* use http://glide.sh for package management. By default, it grabs the last tagged build, not master. We've encountered broken code.google.com links on the last tagged release, and it's not a great experience to override to a hash.

* e.g. Uber uses it https://github.com/uber/tchannel-go
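For maintainers who haven't done it before, cutting a release is just an annotated git tag pushed to the remote. A minimal sketch, assuming git is on your PATH (the throwaway sandbox repo exists only so the example is self-contained; in your own project, skip straight to the tag and push steps):

```shell
# Sandbox repo so the example runs anywhere; in a real project,
# skip straight to the tag and push steps below.
cd "$(mktemp -d)"
git init -q .
git config user.name "demo" && git config user.email "demo@example.com"
git commit -q --allow-empty -m "initial commit"

# Cut a release: an annotated tag records an author, date, and message.
git tag -a v1.0.0 -m "v1.0.0: first tagged release"
git tag --list        # prints: v1.0.0

# In a real repository, publish the tag with: git push origin v1.0.0
```

Once the tag is pushed, tools that resolve "the last tagged build" will pick up v1.0.0 instead of whatever happens to be on master.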


I don't like glide. It manages to recreate the pain of package installation in every other language.

The thing I really love about the golang vendor/ approach is that the dumbest possible way just works - the code is all there.

Glide gets pretty far away from that. The perfect model (or at least the one that would produce the cleanest git diffs) would probably be a tool that kept the actual vendored packages on their own branches in the same git repo and checked them all out when you need them, so vendoring changes wouldn't wind up as huge source-tree diffs.


I've seen the complaint about huge diffs a few times. Why not just exclude vendor directories from the diff?


Can GitHub (and GitHub Enterprise) do this automatically? Basically, like most problems, the issue is that it interferes with a very common interface rather than being a real problem.

If I could have PRs by default fold vendor/ folders for me, then yeah - problem solved more or less.


This looks like a failure in the go package management methodology.


Go has no package management philosophy. It works well if you're using Bazel/Blaze. Not so well if you're grabbing a library off of Github.


As a note: "Uber" is not a good citation for "Many companies". ;)

But I totally lament the lack of use of tagged releases on GitHub. Sometimes I am looking for versions before some things but after others, for some reason, and it can be very hard to find in a commit list where the code was stable/functional. Depending on the repository, you often can't trust current master to be stable code either.


Totally fair - my point was that users are beyond just early adopters. Glide doesn't keep a good record of public users.


Years ago, a guy I know insisted that it is far safer to put your content in the cloud than on your personal website, because big corporations will not go down. Something like this will happen to content on Facebook and GitHub as well, sooner or later.


Geocities is gone. Your old MySpace profile, photos and wall. All your digg comments and submissions if you ever used that.

I host all my own stuff, but unless I get my shit together and create a set of docs for someone I trust, once I die, my domain, e-mail and all my blogs will disappear and accounts expire and credit card payments fail -- except for whatever wayback might tag important enough to index.

The great libraries of Greece, the technology used to build the Pyramids of Giza, the process the Romans used to make concrete ... all lost. Sure we can reconstruct and examine, but so much is filling in the gaps with guesswork. If some alien race found our hard drives and CDs long after we had gone extinct, where would you even begin to decode the data without a Rosetta Stone?


To save a web page with the wayback machine, append its URL to http://web.archive.org/save/ and GET the resulting URL.

I have two archive.org webjumps for the conkeror browser:

  define_webjump("wayback", "http://web.archive.org/*/%s");
  define_webjump("wbsave", function(url) { return "http://web.archive.org/save/"+url; } );
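The same URL construction works from any scripting environment, not just conkeror. A minimal shell sketch (the page URL is a placeholder; the actual network request is left commented out, since the save endpoint is rate-limited):

```shell
# Build a Wayback Machine save URL by appending the page URL,
# exactly as the wbsave webjump above does.
page="https://example.com/some/page"
save_url="http://web.archive.org/save/${page}"
echo "$save_url"
# To actually trigger the snapshot (requires network access):
# curl -s -o /dev/null "$save_url"
```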


Your old digg data: http://digg.com/archive


...but when I entered my email address there, the page told me they had sent me an email with further instructions. Three hours later - no email. :/

Rant from a person who is not in Silicon Valley:

This is so fucking typical of SV. No commitments, no stability, no personal pride in keeping things working for decades if needed. Just the hunt for the next thing, and fuck whatever you worked on last month.


Man that'd be awesome, if I didn't delete the e-mail address used with that username :(


I've been considering the historical aspect of social networks and "personal pages" as an attack vector for a while (with respect to bad practices around account removal). What happens when person "xyz" registers a domain, builds a personal site, signs up for "whatever.com" under that email, and then lets the domain lapse?

Re-registering the domain would be easy, and capturing/selling the credentials would be easy. Once you start receiving email, running a "forgot password" on the "to" address across the top 500 sites might yield something fun. Also, catching these specific types of domains on drop would be easy with a firstname/lastname scan. Cheap as well. Basically fraud-based domain squatting.


Similar concerns were raised a couple of years ago when Yahoo thought (foolishly) that it would be a good idea to "recycle" email addresses:

http://www.cnn.com/2013/06/20/tech/web/yahoo-recycled-email/


Do they still do it? That's horrific


Hotmail/Outlook used to(?) do this as well.

I had an account expire ~10 years ago. Somebody else registered the address and used it for several years, then eventually let it expire as well. Last year, I re-registered it, and used it to recover access to an account I created a long long time ago.


I'm still trying to recover my Neopets account from 15 years ago; I used a bad birthdate.


Actually, this is not a new idea. Entire blocks of IP addresses are known to have been hijacked this way and are now used for spam purposes. Squatters reclaim the blocks with various ploys, including registering an abandoned domain name to accept email for the point-of-contact address. [0]

[0] https://www.spamhaus.org/faq/section/DROP%20FAQ#258


I know it's kinda a life commitment, but if I ever used a domain as an identity representation of me... I'm going to keep it forever.


Interesting, but does it matter? It's commented out anyway.

  <!--[if lt IE 9]>
  <script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
  <![endif]-->
Edit:

I know it's only commented out for > IE 8. But right now, that's nearly everyone. And I personally still write IE 8-compatible code, for what it's worth. Stuck on old jQuery, old Angular, all sorts of shims, etc.

Not saying it's the definitive truth, but w3schools puts < IE 9 usage at 0.1%.

http://www.w3schools.com/browsers/browsers_explorer.asp


W3Schools' browser usage stats only reflect traffic on their own site, which is a heavily biased sample. The best source I've found for browser stats is gs.statcounter.com, which puts IE 8 usage at a bit over 1%.


It's only commented out for non-IE browsers and IE versions >= IE 9. See http://www.quirksmode.org/css/condcom.html

Edit: Fixing logic


It's the other way around: it's only enabled for < IE 9.
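Concretely (downlevel-hidden syntax, per the quirksmode link above; the script filename here is a hypothetical placeholder):

```
<!-- IE 8 and below evaluate the [if lt IE 9] condition and load the
     script; every other browser sees the whole block as one comment. -->
<!--[if lt IE 9]>
  <script src="legacy-shim.js"></script>
<![endif]-->
```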


That's what I meant to say haha, I got distracted mid-comment and forgot I was saying it was commented out.


The w3schools data is most likely skewed, as the majority of people accessing it are web devs, who would not be using IE < 9.


You write IE8 code and don't recognize <!--[if lt IE 9]> ?


Of course I recognize it. What I'm implying is that IE 8 is near extinction (I hope) and it's just a comment now.

I've been writing software used in the medical field, and apparently they have a hard time letting go of IE 8; my team hasn't been given the thumbs-up to forget about IE 8 yet.


Isn't this part of Bootstrap, HTML5Reset, etc.?


I think it used to be part of HTML5 Boilerplate.



Some job site should really build a pull request bot.


Or why not some enterprising job seeker? That'd be one thing to put on the resume. "Contributed fixes to thousands of Github repositories."




