Git-scm.com status report (marc.info)
157 points by cnst on Feb 2, 2017 | 106 comments



    > We (the Git project) got control of the git-scm.com domain this year. We
    > have never really had an "official" website, but I think a lot of people
    > consider this to be one.
So, uh, git-scm.com wasn't an official website all these years?


git history says it's been (kinda?) official since 2009:

https://github.com/git/git/commit/69fb8283937a18a031aeef12ea...


It was set up and run by GitHubbers for advocacy/evangelism.


Maybe so, but since git-scm.com's 2012-05-05 redesign [1], the professional visual design, the new footer, the internally-hosted download pages, and the verbiage have no longer given the impression of an "unofficial" site. Compare with the old design used until 2012-05-04 [2].

(edit: static screenshots at http://imgur.com/a/OCjxY)

[1] http://web.archive.org/web/20120505190309/http://git-scm.com... [2] http://web.archive.org/web/20120504151545/http://www.git-scm...

----

more goodies:

first commit of new design: https://github.com/schacon/git-scm/commit/3bcc818433c6ae94dc...

some design work for git-scm.com: https://dribbble.com/jasonlong/projects/40112-Git-Site-Redes...


I was going to bring it up in my original comment, but felt that it might come off as too negative: GitHub has made out handsomely from unsuspecting folks conflating Git and GitHub. So it's not really surprising. More expected.


If you didn't see it, there was another email sent out that discusses this and other issues with the Git mark [0].

The trademark policy can be found at [1].

A few quotes to summarise:

----

We approached Conservancy in Feb 2013 about getting a trademark on Git to ensure that anything calling itself "Git" remained interoperable with Git.

While the original idea was to prevent people from forking the software, breaking compatibility, and still calling it Git, the policy covers several other cases.

One is that you can't imply successorship. So you also can't fork the software, call it "Git++", and then tell everybody your implementation is the next big thing.

Another is that you can't use the mark in a way that implies association with or endorsement by the Git project. To some degree this is necessary to prevent dilution of the mark for other uses, but there are also cases we directly want to prevent.

The USPTO initially rejected our application as confusingly similar to the existing trademark on GitHub, which was filed in 2008. While one might imagine where the "Git" in GitHub comes from, by the time we applied to the USPTO, both marks had been widely used in parallel for years. So we worked out an agreement with GitHub which basically says "we are mutually OK with the other trademark existing".

So GitHub is essentially outside the scope of the trademark policy, due to the history. We also decided to explicitly grandfather some major projects that were using similar portmanteaus, but which had generally been good citizens of the Git ecosystem (building on Git in a useful way, not breaking compatibility). Those include GitLab, JGit, libgit2, and some others. The reasoning was generally that it would be a big pain for those projects, which have established their own brands, to have to switch names. It's hard to hold them responsible for picking a name that violated a policy that didn't yet exist.

----

I too have come across the Git/GitHub confusion far too many times, and it is extremely unfortunate.

The worst aspect in my opinion is that because of this confusion a lot of the beauty and utility of Git, as a truly distributed version control system, is missed or not understood; it's assumed that using Git is the same thing as storing your code on a specific hosted service.

That said, I think the Git Project Leadership Committee is doing a fantastic job, and I have never had cause to question any of the decisions they've made nor the direction they seem to be taking the project, on this issue and others.

[0] http://public-inbox.org/git/20170202022655.2jwvudhvo4hmueaw@...

[1] https://git-scm.com/trademark


Indeed, the other day I mentioned we'd moved our code into git (from, yuck, perforce) and my girlfriend said, "oh, GitHub."


It's like "hash" and "hashtag" all over again. sobs


Fight back: call them "Sharp designators" (or if you're a splittist, "pound marker").


No, please.

the pound is £

# this really is hash ;)


Splittist! The People's Liberation Front of Linguistics fundamentally condemns you members of the Linguistic People's Liberation Front!


That's pound sterling. The other is pound marker. Why should pound sterling be the 'pound'?

Maybe pound is lb


"Octothorpe" is the One True label.


This is very interesting. I run https://www.gitignore.io and this post highlights a lot of interesting things about git-scm.

1. GitHub is footing the bill — I'm paying for gitignore.io myself (although it only costs me the annual domain registration)

2. The site uses 3 Dynos — Currently gitignore.io uses 1 Dyno on the free tier and I've recently moved the backend from Node to Swift to double / triple network performance based on my preliminary testing. I don't know why the site needs 3 Dynos because like the OP mentioned, it's a static site. I also use Cloudflare as a CDN which could dramatically improve git-scm's caching layer. It's not that helpful for me as most of my requests are dynamically created, but for a static site, it would drastically reduce Dyno traffic.

3. Access to Heroku seems to be an issue — I ran into the same problem and I'm finishing up a full continuous integration process to build and test my site on Travis. I basically want to approve a pull request, have the site fully tested through my Heroku pipeline, and then have the PR landed in production.

4. Traffic - I don't know how many users he's got, but I'm seeing about 60,000 MAUs and about 750,000 requests a month.

* Jason Long helped design my site and logo as well.


Your site doesn't work for me. It doesn't matter what I enter (I tried Delphi, C#, Visual Basic, Windows); I only get "No result found". Also, your tutorial video doesn't match your current design: the colour scheme is off, and instead of a "Generate" dropdown button there is a plain "Create" button.

I'm on Chrome Version 55.0.2883.87 m on Windows. I turned off uBlock Origin for your site.


You may be experiencing some caching problems. Also, around the time you checked I was in the middle of pushing the Swift site to Heroku, so that may have introduced some issues as well. The new site has a "Create" button with no dropdown — can you please retry https://www.gitignore.io/?


OK, I tried it at home. Another PC, another provider, Chrome Version 56.0.2924.87, no uBlock, and I even tried it in Incognito mode. The result is the same. https://imgur.com/a/Bo4Kq


> like the OP mentioned, it's a static site.

It's a RoR app. Content might not change but it's still an app with a database etc.


Although it is a RoR app, you can still cache a lot of things out of the box. As others have pointed out, just using Cloudflare as a CDN would already help somewhat.

Recently I've seen a blog post about a company using RoR for generating a completely static website. Unfortunately I can't find it anymore.

The naive approach would be to run "wget -r" against a locally running instance and "rsync" the generated HTML to the server, but there might be some gotchas with that.
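
A rough sketch of that approach, assuming the app is running locally on port 3000 and the destination host and path are just placeholders:

  # crawl the locally running app into a static mirror
  wget --mirror --page-requisites --convert-links --no-host-directories \
       --directory-prefix=site-snapshot http://localhost:3000/
  # push the generated HTML and assets to the web server
  rsync -avz --delete site-snapshot/ deploy@example.com:/var/www/site/

One of the gotchas: anything dynamic, like a search box, won't work in a static mirror like this.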


Can you be more specific on how Swift doubles/triples your performance? Are you using a Swift backend framework for web serving?


Ah yeah. I'm going to write a blog post on this in a few weeks, but basically I'm running tests using `wrk`. Here are the results I just got with a test (this is actually a bit abnormal), but note the requests per second, latency, and the transfer speed per second.

  wrk -t12 -c400 -d10s https://gitignoreio-stage-swift.herokuapp.com
  Running 10s test @ https://gitignoreio-stage-swift.herokuapp.com
    12 threads and 400 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency   265.88ms   78.37ms   1.29s    84.87%
      Req/Sec   119.55     39.81   254.00     67.32%
    14230 requests in 10.09s, 79.67MB read
  Requests/sec:   1410.00
  Transfer/sec:      7.89MB


  wrk -t12 -c400 -d10s https://gitignoreio-stage-node.herokuapp.com
  Running 10s test @ https://gitignoreio-stage-node.herokuapp.com
    12 threads and 400 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency   945.65ms  136.05ms   1.29s    73.12%
      Req/Sec    35.57     21.97   140.00     65.52%
    3783 requests in 10.09s, 19.31MB read
  Requests/sec:    374.81
  Transfer/sec:      1.91MB

And yes, for my Swift backend, I'm using Vapor[1] and the source code is here: https://github.com/joeblau/gitignore.io.

[1] - https://vapor.codes


I am always amazed at how quickly Heroku gets prohibitively expensive when you start scaling.

When I ran https://jscompress.com/ on Heroku, I was up to $100 per month for 2 2x Dynos. Completely absurd for a simple one-page Node.js app. I put in a little work moving it to DigitalOcean, and had it running great (and faster) on a $10 VPS.

I get the appeal of Heroku (I have used it several times), but man sometimes it feels like gouging when you can least afford it.


Heroku is like a cradle: it gives you instant comfort and feeds you all you need, without effort, when you're newborn. But when you're ready to start walking by yourself, it will definitely leave you stranded.


Conservatively, the people in this conversation make $5 million per year total, the software (git) contributes billions, and we are discussing how to better allocate $230 per month. Open source economics is fascinating.


If you think about it in terms of hourly contracting rate, the opportunity cost of participating in this conversation could easily cost more than $230.


Speaking of popular software that doesn't have its own website, the PuTTY developers have never bothered with getting a domain name specifically for PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/). I'm not actually entirely sure what the rest of the site/domain is meant to be for either.


Pretty sure PuTTY is a one-man job and the page is just a personal one hosted on chiark.greenend.org.uk, which seems to be a server run by a friend of his.


FWIW: PuTTY does have a webpage (http://www.putty.org/), but it just redirects the download link to the site you mentioned.


putty.org is not affiliated with the PuTTY project. The domain is registered by one of the founders of Bitvise, which explains the advertising for their products on the site.


I wonder how hard it would be to convince folks to drop their expensive setups in favor of nearly $0 static sites, as well as how much up-front cost they'd be willing to shell out for the transition. S3 + CDN (+ Lambdas, optionally) feels really ready to me for almost any straightforward "website." For most things, GitHub/GitLab Pages is an easy path to that.


A lot of folks who have static websites aren't technical and invested thousands of dollars in a WP website. If their site already runs, you wouldn't be providing anything.


...unless they have to pay noticeable money for WP hosting due to serious load, or were bitten by security problems.

A free business idea: write a converter from WP to Jekyll (or Hugo) that converts 95% of a typical WP site correctly, sell full conversion services, and maybe resell hosting, too.


https://github.com/davidbanham/wp-to-wintersmith

Here's one I wrote a few years ago to go from Wordpress to Wintersmith.

The Wordpress export format is pretty gross, though.


I have several WordPress sites (mostly "abandoned"/no longer updated) that I would love to convert to static sites. I would happily pay a decent sum for a tool that could completely handle that conversion/migration.


I used https://de.wordpress.org/plugins/simply-static/ to convert a WP site with 10,000 pages. It took about 60 minutes, so I had to increase PHP's max_execution_time. I also tested some other plugins before, but most didn't work for me.

I had to turn off HTTPS during the generation and reported it as a bug to the developers.

I also converted another WP blog to Markdown and used it with a static site generator. My new site is not online yet, but here is the source of an article about it: https://github.com/davidak/davidak.de/blob/master/pages/word...


Can't you just dump the website using wget?


You can also use this, but I haven't tested it:

  wget -N --recursive --page-requisites --html-extension --convert-links https://example.com/


The idea is to keep the site inside a CMS, but a different CMS, one that generates (most) pages statically rather than on-the-fly.


If it were my site, I think I wouldn't even bother with the search: just stick it all on S3, and have one of those 'Google custom search' or similar boxes, so it's static as far as your site's concerned, and just redirects to Google with the `site:foo` filter.
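
For example (just to illustrate the idea), the search box could simply send the user to a URL like https://www.google.com/search?q=site%3Agit-scm.com+rebase, so nothing dynamic needs to run on your side.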

I don't really have a handle on what S3 costs 'at scale', but I think I'm willing to bet it would knock at least a zero off the end.


Search could easily be done if the content is indexed in JSON files.

CDN hosting of a static site is nearly $0, so it's definitely the best option in this case. Plenty of providers give free pro-tier service to OSS projects as well (e.g., Netlify).


Lunr.js is designed for exactly that use case, and its example shows that workflow: pumping a bunch of documents into an index and saving it as JSON, which is then loaded client-side:

https://github.com/olivernn/lunr.js


Another JS search for static websites is http://www.tipue.com/search/

It works very well for me on a site with about 300 pages.


Probably the best/cheapest solution is:

A) Get a Linode VM, put Elasticsearch on it, and have it index the text. Probably $20/month tops with that little text.

B) Use something like KeyCDN to cache everything for long periods of time.

I doubt it'll cost $50/month.


A linode vm's cost is not measured in its price tag but in how much effort it takes to maintain. Updates, security advisories, hypervisor crashes, crashes under sudden unexpected load, crashes because you're on vacation and you are having too much uninterrupted fun, ...

As a famous writer once said: "ain't nobody got time for dat."


How much money is git-scm going to lose if the search function is temporarily down?

Maybe not enough to afford a solution with more nines.


The money it costs to get someone to fix it. That person isn't free, unless this is their hobby.

It's not about uptime, it's about the service existing at all. Even if you are fine with an hour downtime per day; at the end of that hour, you'll still need to bring it back online. The "aaS" part is about taking that load off. The nines are a corollary.

Concretely: a dedicated server leaves you with a lot of extra work. A packaged, managed service doesn't. E.g., S3 for static files vs. an nginx server on a VM. Even if they cost the same, S3 would still be a better option for git-scm.com.


> A linode vm's cost is not measured in its price tag but in how much effort it takes to maintain. Updates, security advisories, hypervisor crashes, crashes under sudden unexpected load, crashes because you're on vacation and you are having too much uninterrupted fun, ...

If I had a choice between that and ~$1800/year?

If you don't want to do sysadmin work at the rate of ~$300/hr, that is your business.


The generated Rust API docs, which are uploaded to https://doc.rust-lang.org/std/ (but also the same ones you can download), do something where they generate an index of the entire site as a JavaScript object, so searches can happen client-side. So it's a static website, but search functionality works.

See https://doc.rust-lang.org/search-index.js for the messy back-end.


Sounds nasty for blind people and/or people using Lynx.


Blind people don't have javascript?


Why even bother paying for S3? Just use a static site generator and put it on GitHub Pages. GitHub is already paying for the Heroku VMs, so I bet they'd be happy to pay less by using their own infrastructure, even if the git-scm.com/.org site exceeds the usual Pages bandwidth limits.


GitHub Pages just seems a step too far: sponsoring the hosting, fine, but I don't think anyone should go out of their way to make Git depend on GH.


Well, I was thinking the same: if open source projects from Facebook have all their documentation hosted on GitHub Pages, why not Git? And for search they use Algolia, which I think is free for documentation projects.


I've used Algolia [1] and Swiftype [2] in the past for search on static websites, but recently experimented with lunr.js [3] for our company playbook (built with GitHub Pages): http://playbook.wiredcraft.com/

I still want to separate the index into its own JSON file, but so far, so good. Search is fast and the index is rebuilt automatically.

[1]: https://www.algolia.com

[2]: https://swiftype.com/

[3]: http://lunrjs.com


Why bother with S3? I'd buy a Raspberry Pi, plug it in at home and call it a day.


Pedantic analysis:

At 5 Watts and $0.30 per kilowatt-hour, a Raspberry Pi would cost $1.08/month to run.

With 1 free GB and $0.09/GB of egress, S3 would be able to deliver 13 GB/month at a cost of $1.08/month.

So, RasPi at home gives you "unlimited" egress and a fixed cost, but you have increased latency, a rather small outgoing bandwidth (most likely), and all the downsides of running your own server.

S3 gives you unlimited bandwidth, low latency, and no server maintenance, but it's only competitive on price if you don't exceed about 13 GB of egress.

Overall I prefer S3, even if I think their egress prices are ridiculous. RasPi at home has some geeky cool factor, though...

NOTE: $0.30/kWh is basically what I pay (California) for any additional usage. These equations will favor home hosting if your electricity is cheaper.
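
Spelling out the arithmetic behind those two $1.08 figures, using the numbers above:

  RasPi:  5 W x 24 h x 30 days = 3.6 kWh/month
          3.6 kWh x $0.30/kWh  = $1.08/month
  S3:     1 free GB + 12 GB x $0.09/GB = 13 GB for $1.08/month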


I'd lower that 5 watts estimate. RPi3 idles at around 1.25W and goes to 3.75W at full load.


A raspberry pi is also not a CDN. Why even manage servers when all you need is a static site? Deploy to a CDN and if properly configured, everything just works.

Especially with the site being considered in 'maintenance mode', I doubt they want to manage a server aside from the other things they need to do.


> I'd buy a Raspberry Pi, plug it in at home

I don't know how I'd feel about a public resource being at the mercy of some user's comcast connection.


It's usually against the EULA of internet providers.

You can buy a hosted moral equivalent of an RPi for peanuts, e.g. https://www.scaleway.com/pricing/.


As much as I enjoy the Raspberry Pi, I doubt it could handle the traffic or last long given the limited write cycles of an SD card.


I know someone who optimized searching the leaked Adobe database (of a few hundred gigabytes if I remember correctly) on a Pi to a sub-second search. That was super impressive and the method used wasn't even obscure (binary search). The same doesn't apply here, but I'm trying to say a Pi isn't entirely worthless.

For example, what pages get accessed the most anyway? I'm guessing the latest source code and maybe the latest release, though most people probably just apt-get git instead so it's probably mostly the source code. Then there are man pages and some other info pages, if I remember correctly. Sounds like the latest release + 90% of those text pages can easily fit in RAM. So memcached? Nah, the Linux kernel happily caches the files that you read from disk.

I don't know the actual numbers but it doesn't sound infeasible to me. A $230 hosting bill is very heavy though, I guess you'd need some serious fiber as well to provide the uplink. But again, without numbers it's all "maybe" and "probably".


Yeah, that was my project; it worked quite nicely. However, the biggest reason was that my Raspberry Pi was my only server at that point, and besides, transferring 10GB out of my home connection to a VPS would've been too slow for my impatient self. I think the file was stripped of all other data (which brought it down to about 4GB) and I put it on a thumb drive instead.

Nowadays I would have used a VPS for that. The point with S3 or any other cloud solution is that they make sure you're up and running. Even though it might still be useful, the need for good service monitoring is as good as gone when moving to one of these cloud-based platforms. And that's before taking into account the time you have to invest in setting up and configuring an RPi properly vs. just pushing a repo to GitHub Pages or uploading a zip file to S3.

Heroku or other cloud platforms can be crazy expensive, but for static file hosting, S3 or GitHub Pages is more than enough and quite affordable.


You could put the site itself on an external drive via USB, and ideally most access would be read-only for a static site, so the life of the SD card shouldn't be as big of a concern.

That said, I'd agree: Raspberry Pis are great but not quite fast enough for serving a high-traffic website.


Raspberrypi.org hosted the entire site on Raspberry Pis for a while.

I've just checked and it looks like they no longer do so, though...

    $ curl -I https://www.raspberrypi.org 2>/dev/null  | grep X-Served-By
    X-Served-By: Blog VM 2
Fun while it lasted!


Not sure if it's sarcasm, but upvoted just in case it is.


That's not sarcasm. It's not like anyone here has numbers to say I'm wrong, and I'm not hearing any good arguments as to why it's a bad idea. I've had my blog (which does 3 MySQL queries per pageload) hit the HN front page with a 105 KB/s uplink and either a Pentium 3 or an Intel Atom as the CPU (I forget when I switched servers). In any case, something much like a Pi can handle HN front-page traffic with a database. And people are downvoting hosting an even lighter website on a Pi? Unless it's 500 hits per second, I don't get it.


Well, connection reliability is an issue, for a start.


High availability is the standard these days and you don't get that with a single Raspberry Pi.


> It uses three 1GB Heroku dynos for scaling, which is $150/mo. It also uses some Heroku addons which add up to another $80/mo.

Wow, why? You can get a VPS with 2 GB of RAM + a 10 GB SSD for €3/month these days (https://www.ovh.com/fr/vps/).

That seems very expensive.


Probably the tooling around Heroku and the scaling. If the site suddenly gets more popular, you tweak a slider and suddenly you have more compute.


For a static website you can easily serve 10,000 users with this machine. But let's say you need more: for 40 dollars you get 30 GB of RAM and a 250 Mbps pipe (https://www.ovh.com/fr/vps/vps-cloud-ram.xml). To serve HTML pages plus a bunch of CSS files, that should be fine.


Alternatively, they could use a Scaleway[1] VPS. They start at €3/month and, if necessary, scale up to 6 cores, 8 GB of memory, a 200 GB SSD, and unmetered bandwidth (200 Mbit/s) for €10/month. That should be enough for a static website.

[1] https://www.scaleway.com/pricing/


Some armchair speculation: the price GitLab and Atlassian would pay to have one link each up there would probably dwarf the current monthly hosting costs.

Not sure if the "try.github.io" link should count as a link to GitHub, but most of the others do (e.g., github.com/google).


Further armchair speculation: you could extract more money from one of them for having an "exclusive" link than you could from both of them having one link each.


Indeed. I'm having a hard time understanding why GitHub, GitLab, or Atlassian haven't already jumped in to request full ownership of the project. This is clearly the most important open source project for their core business.


If you're not aware, many of the core contributors are employed by these companies, or extensions are built internally at these companies and then released open source.

For example, the long-time maintainer Junio works at Google, and peff at GitHub.

I think the current management of the project, by the Project Leadership Committee, is working well and the project would gain little by coming under the direct management of any single company.


The project would lose a lot by coming under the "management" of any one company. I imagine several forks would quickly ensue.


I see your points.


In a different thread, peff (author of the original post we're discussing here) requested input from the community about adding links to paid content to the site.

You can see the end of the thread here [0] and the pull request it was discussing here [1].

The thinking in that thread, which I think would apply similarly to this case, was:

I think I'm inclined to try it, as long as it's kept in a separate "paid training" section, and is open to everybody. If it becomes a burden, we can cut it out later. I think the important thing is that it be equally open to everybody or nobody, but not just "some". So we'll go with everybody for now, and switch to nobody later if need be.

I agree that they would probably pay well for the chance to have a link on the homepage of git, but if it is done it should be done fairly for all involved, or not at all.

[0] http://public-inbox.org/git/20170125184258.v5sy6hwwpdsxz2u6@...

[1] https://github.com/git/git-scm.com/pull/924


Just a note:

> The deployed site is hosted on Heroku. It's part of GitHub's meta-account, and they pay the bills.

So why aren't they just using a GitHub page for this?


GitHub Pages still doesn't support HTTPS on custom domains.


Can't you use Cloudflare for HTTPS? I think it ends up being something like:

    User <-free SSL cert-> Cloudflare <-self-signed GH cert-> GitHub Pages
Obviously not ideal, but still possible.


I just enabled it by using CloudFlare on the domain.


IMHO, since it's a static website, they can use a static website generator and simply use something like GitLab Pages to deploy it (for free).

There is a bit of work to be done, but it shouldn't be too terrible if the templates and stuff are okay.


Or just throw a CDN with a decent cache lifetime in front of the Rails app and scale the Heroku side way down if you don't want to go through the hassle of changing anything. It's pretty much static, after all.


I was thinking the same thing after reviewing the repo and the output HTML, but I wonder whether that would really lower the monthly hosting costs for the Heroku instance and the various add-ons. It would be simple to modify the RoR app to output the proper caching headers so that any CDN could cache the HTML output and obey the various cache limits, but on-demand rendering of the output from time to time is still required once the cache expires.

I think moving the site to a normal static site generator (like Jekyll) would deliver the most bang for the buck, but it would be quite the transition. The site would only need to be built upon a new commit, and with the proper site generator it would only update the underlying HTML files that require a change, then sync the updated HTML to whatever CDN is chosen.
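
A sketch of what that per-commit build-and-sync step could look like, assuming Jekyll and an S3 bucket whose name here is only a placeholder:

  # rebuild the static site after each new commit (e.g. from CI)
  jekyll build
  # upload only changed files and remove deleted ones from the bucket acting as the CDN origin
  aws s3 sync _site/ s3://git-scm-site/ --delete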


It's all armchair engineering, but it may be feasible to keep the current setup and just replace the deployment process: start up the application, then snapshot it with a crawl from wget or similar.


What's the best way to optimize cost here? Complete site cached and served from memory (no disc access -> faster response times -> scales better)?


Quick win would be to put CloudFlare in front of it


Why would that do anything? They aren't paying $230/mo for bandwidth, but for shitty VMs.


It's mostly cacheable, so they should theoretically be able to scale down the cluster with a caching reverse proxy or CDN.
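
A quick, illustrative way to check how cache-friendly the responses already are:

  # look at the caching-related headers the site currently sends
  curl -sI https://git-scm.com/ | grep -iE 'cache-control|etag|expires'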


This was my thinking and experience when I set out to build Cachoid[0]. There's so much to gain from caching stuff in RAM that it should be ubiquitous. The thing is, CDNs don't always have the scale to keep all tenants in RAM; hence the caching to disk.

[0] - https://www.cachoid.com/


From the page:

> Do we really need three expensive dynos, or a $50/mo database plan?

Sounds like there's a chance to optimize for what is, as they say, a static website. Why pay for a database that you're not using? (And what kind of database plan costs $50/month for an apparently (nearly?) empty database?!)


Dropping Heroku


While moving to a static site is the cheapest option long term, the simplest solution is to switch to a cheaper dyno type. The message says they're running on three 1 GB dynos. Change that to six hobby-tier (512 MB) ones and you'll get similar performance for $42/mo (instead of $150).

No code changes. No anything. Just twiddling the dyno tier and count.


I really find it to be a useful resource for learning about some of the more obscure commands. For instance, I would have never known about built history: http://git-scms.com/docs/built-history#d874a7762d4527a1385ce...


> It uses three 1GB Heroku dynos for scaling, which is $150/mo. It also uses some Heroku addons which add up to another $80/mo.

I have been involved in commercial projects that don't cost that much monthly. I can't imagine spending that much on a non-profit thing.


I don't see any issue with GitHub paying for this page, and I don't think the page will cease to exist if GitHub decides not to pay for it anymore.

There are enough companies who would just take over: Google, Heroku, whatever.

But it would probably be a good idea to try to help Jim with his work.


As far as I understand it, nobody but the Git team is paying for hosting. Why aren't GitHub or Heroku paying for this? They are built on top of Git. Millions of tech dollars go to political causes right now, yet nobody is willing to give $230/mo of free hosting to the Git website, the most used VCS today? Talk about priorities. And it's not the first time; plenty of open source projects used by billion-dollar companies receive zero funding.

Edit: GitHub seems to be paying for that, but Heroku shouldn't even bill them.


> The deployed site is hosted on Heroku. It's part of GitHub's meta-account, and they pay the bills.

Sounds like GitHub foots the bill.


It was a website built by a GitHub co-founder on his own initiative, who happened to also be a git contributor. It wasn't particularly a thing that git the open-source project requested.

The previous git website was http://git.or.cz/ , also run by a git contributor, and releases were (and still are) at https://www.kernel.org/pub/software/scm/git/ .


GitHub is paying for it.


From the Hacker News Guidelines[0]

> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."

[0] https://news.ycombinator.com/newsguidelines.html


Didn't know that, thanks.


The article says in the first few paragraphs that GitHub is paying for the hosting right now.



