WikiWikiWeb is back online (c2.com)
121 points by delian66 on Oct 29, 2016 | hide | past | favorite | 36 comments



Sad that the site is now a JavaScript-powered site. It's not even old-school SEO friendly (I know most of the engines now run JS, but it slows down indexing).

I thought for sure they'd done a PJAX-like [1] design to keep asset loading minimal when I saw the spinner (why c2 would need that is beyond me).

But it is even worse: a full page reload on each click, and then an XMLHttpRequest call is made for some JSON.

I wonder who is running it and why they picked such an awful design. Do people just not know how to write old web 1.0 apps?

For now I will play devil's advocate and just assume that perhaps they are offering a JSON-like API so that someone else can write a better skin. If someone does, hopefully they will use pushState.

EDIT

I clicked around to see what Ward has been up to and now I can sort of see why c2 is the way it is. Basically Ward is working on Federated Wiki. I think the idea is sort of cool as it is yet another attempt to continue to decentralize the web.

I believe it is going to be based on some JSON protocol, but it would be interesting, and might see greater consumption, if it were in Google AMP format, or supported that format, or converted to it, or whatever (not c2 but the federated wiki stuff).

[1]: https://github.com/defunkt/jquery-pjax


I think that is because Ward Cunningham wanted to transition to a static site instead of running it the old-style CGI way.

See also: https://github.com/WardCunningham/remodeling/issues/2 http://wardcunningham.github.io/


It appears the root of his problem is an old database in a weird format. If he just gave access to the DB, I'm sure a fellow GitHubber/HNer could get it converted correctly for him to some other DB or format.

Then again I probably don't fully understand all of the problems.


It's not that the database is in a weird format. It's that individual entries have multiple character encodings. The old perl CGI could handle that just fine, but trying to do anything with those files is a pain in the butt, which is why the site was offline in the first place. Ward was attempting to work with a static target instead of a moving target.
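The fallback-decoding loop the old Perl CGI presumably relied on can be sketched in Python. To be clear, the candidate encoding list here is a guess; the actual mix of encodings in the c2 database isn't documented anywhere that I know of:

```python
def decode_entry(raw: bytes) -> str:
    """Try a series of encodings until one decodes cleanly.

    The candidate list is an assumption -- the real c2 entries could
    mix any number of encodings.
    """
    for encoding in ("utf-8", "cp1252", "latin-1"):
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    # latin-1 maps every byte, so we never actually reach this,
    # but be explicit rather than silently fall through
    return raw.decode("latin-1", errors="replace")
```

The pain is exactly that this per-entry guessing has to happen before the data can be treated as a static target at all.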


see my comment floating somewhere in this thread.

it took some digging but I managed to grab a dump of every page in JSON. he hid it well but I scraped everything and put it up for people to see (it's in its original markup form, and I'm working on converting it to HTML again.)
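the conversion step looks roughly like this in Python. the "name"/"text" keys and the CamelCase-to-link rule are my assumptions about the dump's shape, not the actual c2 format:

```python
import html
import json
import re

def page_to_html(page_json: str) -> str:
    """Render one scraped page as minimal standalone HTML.

    Assumes each page is a JSON object with 'name' and 'text' keys,
    which is a guess at the dump's structure.
    """
    page = json.loads(page_json)
    body = html.escape(page["text"])
    # turn CamelCase WikiWords into local links (the classic c2 convention)
    body = re.sub(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b",
                  r'<a href="\1.html">\1</a>', body)
    return (f"<html><head><title>{html.escape(page['name'])}</title></head>"
            f"<body><h1>{html.escape(page['name'])}</h1>"
            f"<pre>{body}</pre></body></html>")
```

a real converter would also need to handle the original wiki markup (quoting, lists, etc.), but the WikiWord linking is the part that makes the pages browsable again.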


This is a regrettable redesign. Every page loads a megabyte of crap (a 640 KB "names.txt" file + 260 KB of jQuery) and requires JavaScript for what could be a static text site.


Yeah. This tweet summarizes it nicely:

https://twitter.com/errstr/status/792418938881662976


I wonder why they didn't use the minified version of jQuery? Would have been a lot smaller.


Or Zepto. I saw nothing happening that couldn't be done with it.


I really have to wonder why he did it instead of just migrating everything to a strictly static set of pages. no javascript.

the wiki's been locked for quite a while. why not keep it that way. the existing commentary is at least more valuable than the ability to edit it.

provide a static mirror, ward, and I won't have to mirror your "database" of json! ;)


If done correctly, that could be a constant overhead that the browser caches, and you move on.

That's not particularly helpful for people crawling and not keeping state between page hits, but it should make a difference for end users if done right.


The content is gzipped, so it actually loads 384 KB.
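The 384 KB figure is from the site itself; this snippet just illustrates why a highly repetitive page-name list compresses so well (the sample data here is made up, not the real names.txt):

```python
import gzip

# stand-in for names.txt: one WikiWord per line, lots of shared substrings
names = "\n".join(f"SamplePageName{i}" for i in range(40000)).encode()

compressed = gzip.compress(names)
ratio = len(compressed) / len(names)
print(f"{len(names)} bytes -> {len(compressed)} bytes ({ratio:.0%})")
```

Of course, gzip only helps the transfer size; the browser still has to parse the whole list on every uncached load.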


so, because of the weird format that ward has gone with, and because I'm not a fan of the federated wiki concept as he sees it, I mirrored the JSON dump of the old c2 wiki, did a little bit of processing on it and generated a directory for all the pages.
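the index-generation step is roughly this (the flat <PageName>.html directory layout is my assumption, not necessarily what's actually hosted):

```python
import html
from pathlib import Path

def write_index(pages_dir: str) -> str:
    """Generate one index.html linking every converted page.

    Assumes each page has already been written out as <PageName>.html
    in a flat directory -- an assumption about the mirror's layout.
    """
    root = Path(pages_dir)
    links = [
        f'<li><a href="{p.name}">{html.escape(p.stem)}</a></li>'
        for p in sorted(root.glob("*.html"))
        if p.name != "index.html"
    ]
    page = ("<html><body><h1>c2 wiki mirror</h1><ul>{}</ul>"
            "</body></html>").format("\n".join(links))
    (root / "index.html").write_text(page)
    return page
```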

https://imode.gitlab.io/projects/c2/index.html

if someone wants me to, I can archive it up.


Note, though, that from what I can tell the wiki explicitly doesn't grant the right to host the contents somewhere else (in contrast to many other wikis).


if they take issue with it, they can go after myself and the internet archive.

I'm not concerned.


Thank you for uploading this to the Internet Archive.


Thanks, c2 has become intolerably slow.


no problem! I'll be updating it if there are any changes or new pages added to the wiki.


Sadly it is now invisible to the web archive http://web.archive.org/web/20161024110012/http://wiki.c2.com...


Given the current design, one would guess that's intentional.


> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://code.jquery.com/jquery-3.1.1.js. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).

> None of the “sha256” hashes in the integrity attribute match the content of the subresource.
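For reference, the value a subresource integrity attribute has to carry is the base64-encoded SHA-256 of the exact bytes served; if the CDN serves different bytes than the hash was computed from, the browser refuses to run the script, which matches the error above. A quick way to compute one (the input here is a placeholder, not the real jquery-3.1.1.js):

```python
import base64
import hashlib

def sri_sha256(resource_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for a script or stylesheet."""
    digest = hashlib.sha256(resource_bytes).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

# usage: <script src="..." integrity="sha256-..." crossorigin="anonymous">
```

The missing Access-Control-Allow-Origin header is the other half of the failure: SRI checks on cross-origin scripts require CORS to be enabled on the resource.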


Good that it's back, but kind of sad that it has fallen victim to http://c2.com/cgi/wikix?JavaScriptAbuse since for so long it was one of the few sites that managed to http://c2.com/cgi/wikix?KeepItSimple. The wave of the future unfortunately moves even further from that.

The basic concept behind federated wiki is interesting, but it still needs to be simple, clean, and usable. As it is, the most basic things like selecting text or navigation do not work or work in totally unexpected ways. It shouldn't actually need instructions for basic things like How to Follow Links and How to View Changes ("you can view the changed page on that site by clicking on the flag of that page. Don't expect the link to find the remote site because it will likely be hidden behind the original page on your site." What?)

Just trying to find the list of sites, I have no idea what's going on here: http://c2.fed.wiki.org/view/welcome-visitors/view/federated-...


ward cunningham has a great idea with poor execution, and he's holding years' worth of commentary on various technical subjects hostage in order to satisfy his desire to implement his idea.

I'm not a fan. I worry for the other rabbit holes of the internet that are still smart and able to be searched, mined, and mirrored easily.


One of my favourites: ChurchTuringThesis [0]

[0] http://wiki.c2.com/?ChurchTuringThesis


Great, so now Ward Cunningham doesn't know how to write a good wiki system. Federation is all well and good, but it doesn't excuse the absolute garbage UI.


I'm happy that it's back. It seems to be running on a new engine. When I click around I see a spinner sometimes, I don't remember seeing that in the past.


Seems like they hacked up a new system to fetch the old page quickly, nothing more. IIUC it happened because of some hardware failure, so I'm just happy it's still up, even though the full JS spinner thing is a bit sad.


Still the same content (and old bookmarks work fine), so I can live with the bizarre new delivery system (could've been worse, like one of those old Flash websites). You can override the CSS to make it look normal.


And now just giving a loading spinner...


I didn't even know it was down. Sad that it's become forgotten; I used to spend a lot of time there. After GoogleLovesWikiNot, contributions dropped, and now most of the articles are very dated.



Most disappointing: it doesn't work in Safari right now.


Works for me (using Safari Tech Preview 16).


Only on the Tech Preview. Not everyone is running the bleeding edge of everything. It's the Fetch API that's causing problems.


Absolutely love this site; learnt so much going through this in, gosh, 2005-odd.


Best viewed on a 14" monitor?



