The Internet is Shit (2003) (internetisshit.org)
140 points by chroem- on Dec 7, 2014 | 82 comments



If I can operate Google, I can find anything

Or the corollary: "if I can't find it on Google, it doesn't exist" - which seems to be the perception among people these days, and in some ways it's quite scary how much power Google has over what information people can find on the internet. I've noticed a decline in the diversity and breadth of their search results over the years: sites I used to visit containing detailed technical information - many of them still around - have basically disappeared from the results, overtaken by highly SEO'd sites with only superficial information (much of the time they contain words that only approximately match part of the query, which makes it even worse). It seems that in attempting to prevent spam, a lot of the genuinely good content that just "wasn't SEO'd enough" has been buried too. The mundane, shallow, and practically worthless content is emphasised over the detailed, in-depth information that I believe certainly exists out there. If the internet was shit in 2003, it's even more shit now.


Most of those high-quality sites that contain detailed technical information have something in common: they don't carry Google's advertising, so Google earns no revenue when you visit them. Despite Google's protestations over the last few years, I haven't heard a better explanation for the shift in the nature of the results they return.


In reality, the issue is simply that the sites containing in-depth writing don't get updated as often as the blogs, forums, and content farms that contain superficial information. Google's ranking is tweaked to always prefer sites that are updated very regularly, so as to sift out obsolete information.


Why is this? In many fields, information does not become "obsolete".

Google's policy on this issue seems to be pushing the internet in a superficial direction.


Google is a tech company, and tech is seemingly outdated as soon as it arrives. They push that line of thinking everywhere they go, even where it doesn't belong.


Well, to be fair, differentiating algorithmically between information that becomes outdated and information that is static is non-trivial. Given the choice between promoting new and updated information and promoting static information, I think the former is the better default for most things.
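
As a toy illustration of that trade-off, a ranker might multiply a static relevance score by an exponential freshness decay; the half-life knob below is entirely made up, not anything Google has published:

  import time

  def score(relevance, last_updated_ts, half_life_days=90.0):
      # Hypothetical freshness decay: the boost halves every
      # `half_life_days`. A huge half-life treats content as evergreen;
      # a small one strongly favours recently updated pages.
      age_days = (time.time() - last_updated_ts) / 86400.0
      return relevance * 0.5 ** (age_days / half_life_days)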


I quite clearly remember seeing Google ads on many of these sites, as they were unobtrusive enough that I didn't block them. My theory is still "not SEO'd enough" - or perhaps, given the huge amount of text and links they tended to contain relative to styling elements, they appeared linkfarm-ish enough to Google's algorithms to get penalised.


Or SEO'd at all. It's easy, and maybe appropriate, to hate on SEO, in part because it works. If you have a bunch of people pouring money, thought, and energy into optimizing crap content, and many people producing high-quality, niche, non-commercial content giving it little to no thought, it's not surprising that the crap floats to the top. It doesn't require malfeasance on Google's part.

The other thing to consider is that Google has to optimize for the general case. If their mission was The Best Physics Search Engine or The Best Academic Search Engine, they might do a better job with more esoteric material. But it's meant for everyone, and most people want less detailed, more digestible content.


I have yet to see any evidence for this, and it's a wild accusation that I'm seeing a lot on HN without merit. Occam's razor applies here as well, with the simplest explanation being that search is a difficult problem to solve. And I've written here before that, for non-US users at least, Google's results are vastly better than everything else, so it isn't that competition doesn't exist. But hey, you're free to try and solve this problem in a better way.


Evidence would be difficult to offer given that their algorithm is a changing black box.

I also don't see how "search is hard, which is why Google's search results have issues," is a simpler explanation than "Google has some bias towards results that they profit from."


Why not both? Search is really easy if you have a corpus of static information. Search is very hard when you have a huge amount of information that changes, many people are looking for the newest information, and outside influences want to bias search in their favor for profit too.
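
To make "easy" concrete: for a static corpus, a plain inverted index built once already gets you exact-term search. A minimal sketch, with made-up documents:

  from collections import defaultdict

  def build_index(docs):
      # Map each term to the set of document ids containing it.
      index = defaultdict(set)
      for doc_id, text in docs.items():
          for term in text.lower().split():
              index[term].add(doc_id)
      return index

  def search(index, query):
      # Return ids of documents containing every query term.
      terms = query.lower().split()
      if not terms:
          return set()
      results = index.get(terms[0], set()).copy()
      for term in terms[1:]:
          results &= index.get(term, set())
      return results

  docs = {1: "battery chemistry basics", 2: "red-black tree rotations"}
  index = build_index(docs)
  print(search(index, "battery chemistry"))  # {1}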


By any chance, could you share any links to those useful sites that no longer appear in Google's results?


Well, that's the problem with having an ad broker serve as the primary user interface for the web.

I'm honestly not sure how that problem gets solved. In theory, it just needs a collaborative low-friction curation scheme, each of whose contributors has both deep knowledge of one or more fields, and the Google-fu to find worthwhile information about them. Indexing and staleness aren't particularly hard problems; in the former case, Google itself can probably serve the need, and in the latter, the Internet Archive will amply suffice. Dealing with bad actors, on the other hand, strikes me as a highly intractable problem.

Actually, the more I think about this, the more it seems like it might be worth pursuing further -- certainly at the very least it'd be preferable to the haphazard collection of bookmarks, spread across three different machines, in which currently reposes my collection of substantive information on a variety of technical subjects related to my hobbies and occasional pursuits. The technical side seems relatively straightforward. The user side I'm not so sure about. How do I make sure the information is good? How do I filter out incompetence and malice on the part of curators?
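
One naive sketch of the curator problem, purely to make it concrete (every name and number here is hypothetical): weight each curator's submissions by a reputation that grows when their picks hold up and decays sharply when they're flagged.

  class CurationIndex:
      # Toy reputation-weighted link curation; illustrative only.

      def __init__(self):
          self.reputation = {}   # curator -> weight
          self.scores = {}       # url -> accumulated weighted score

      def submit(self, curator, url):
          w = self.reputation.setdefault(curator, 1.0)
          self.scores[url] = self.scores.get(url, 0.0) + w

      def review(self, curator, good=True):
          # Reward curators whose picks hold up; punish those whose don't.
          w = self.reputation.setdefault(curator, 1.0)
          self.reputation[curator] = w * (1.1 if good else 0.5)

      def top(self, n=10):
          return sorted(self.scores, key=self.scores.get, reverse=True)[:n]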


Or more scary (and tin-hat-y) how Google can effectively choose what does or does not "exist" on the net, by not making it show up in results.

It's already doing that via DMCA requests.


You ascribe to Google an agency that belongs to another party here. DMCA requests are not Google's choice; they are another party's choice, which Google is legally obliged to comply with in a prescribed way.


Google very clearly marks when a result is removed due to a DMCA request.


That doesn't make it any less non-existent.


It does. If you click through to the DMCA takedown request, there's a list of removed pages. It's more annoying to use, but the information is still there.

(For what it's worth, I'd prefer if the results weren't removed at all, but the current legal environment doesn't allow for that.)


I'm surprised Hollywood would let Google get away with linking to DMCA requests that contain those links. Mind confirming this is what happens?


Yes, it looks like this:

  In response to a complaint we received under the US Digital Millennium
  Copyright Act, we have removed 1 result(s) from this page. If you
  wish, you may read the DMCA complaint that caused the removal(s) at
  ChillingEffects.org. 
The text "US Digital Millennium Copyright Act" links to https://www.google.com/support/bin/answer.py?answer=1386831, and the text "read the DMCA complaint" links to the DMCA request. The DMCA request includes a list of the removed URLs.

If you're interested in seeing one of those requests, go to https://www.chillingeffects.org/ and search the name of a media company.

If you're interested in seeing the message in a Google search results page, try searching "[popular US show] episode 10" and scroll to the bottom of the page.


"I can walk into any public library, no matter how tiny and underfunded, and find facts, stories, amazing information I would never touch in a month of webcrawling."

This remains absolutely true. Wikipedia and the internet in general have done a great job of making shallow information readily available, and providing access to out-of-copyright works.

But for the vast majority of topics, it hasn't come close to providing the kind of information available in books, except when Google is actually scanning and OCR'ing those books.


> Wikipedia and the internet in general has done a great job of making shallow information readily available

Wikipedia is an encyclopaedia. The word 'encyclopaedia' literally means a volume of shallow, broad-based, all-round knowledge.

> But for the vast majority of topics, it hasn't come close to providing the kind of information available in books

I disagree strongly. Writing a book is an incredibly time-consuming and difficult endeavour, and not all people who have knowledge worth sharing have the skill or time to do it. By praising books as a supreme medium you put authors on an intellectual pedestal while suffering from (or being blessed with) their editorial choices and interests. Some great things have come from people scribbling on napkins, exchanging letters, or scribbling in journals. The Internet means these sorts of mediums can proliferate. If you want depth on the Internet, go to discussion forums, go to IRC, send e-mail, read mailing lists, watch talks, go directly to the personal blogs of experts, post comments, download electronic journals... these are the alternatives the Internet provides to going to a library. Don't wait for someone to patch Wikipedia.

In all honesty, there are only two valuable aspects I have ever found and love in a library: geographically local reference material, like newspaper archives etc, and a nice quiet place to study, away from distraction.


I mostly use those kinds of online sources, because I'm usually on the internet and rarely in libraries, just in terms of proportion of my time, and I rather strongly disagree that those sources are anywhere near as good as a library, at least for what I'm looking for. It's a huge breath of fresh air to find a good book on a subject, because it more often than not actually covers what's going on in a way that is far more opaque in dozens of StackExchange posts, blog posts, mailing list archives, etc. The rational exposition is just much better, if you can find a good book: explaining not only what's going on but also why and how, how it connects to other things, how it came about, where to look for more information, etc. It's really frustrating to work without access to a good research library for that reason, because you get only these much more scattered bits and pieces on many topics.

Two areas that I've come across semi-recently where this is particularly true: DSP and logic programming. There are some good books on those subjects, and they are very much better than trying to piece together Google results, especially trying to learn an area.


I guess a lot of it depends on how you learn best. I like to go deep, but it has to be on my terms. Sucking in books never gives me a sense of accomplishment unless I'm actively trying to 'rediscover' what is being taught for myself. I prefer to use reference material to open up lines of thought, then I go away and try to 'reinvent' it for myself, baby steps. I come back later to have the material nudge me in the right direction or get me over hurdles. Sometimes it's too hard and I'll drop material for something else, even if it's not as deep, or go without, rather than grind on, because I'd rather not know than 'know' and not truly understand.

Fully-formed 'rational exposition' of a new topic or area of study is just a terrifying thing for me because I'm worried I'll be swept up in all that exposition and not prod the holes in my understanding, perhaps not even see them. Later on I then feel I have to remember 'what the author taught me' rather than unravelling of the topic itself. Visiting a complete exposition later, after a bit of folly, I feel much more confident in both myself and the material. It's very inefficient in terms of time, but then I can spend a lifetime learning and still know next to nothing anyway.


While length doesn't necessarily equal depth, there is likely a correlation. On average, a long book on a topic is going to be more in-depth (and broader) than a discussion forum, IRC exchange, email, or even a single journal article. The Web is a great place to learn about a lot of things shallowly. To learn about a particular thing deeply (on most subjects - electronics/software are an exception, for the obvious reason of having more book-length content on the Web), a book or e-book still beats it.

Sure if you want to act as your own book editor and put together lots of little pieces of Web wisdom in disparate sources maybe you can get the same breadth/depth but that's not especially efficient.

And we haven't even begun to talk about the benefit of having a publisher/technical reviewer/editor vetting the contents of a book - acting as a gatekeeper...


Some of my current curiosities are served only by reading academic papers and contain algorithms that are only documented in patent filings. I see no books.

And another thing: even in computing, some of the best printed literature can stink. No matter how many times I read TAOCP or how many lectures I go to, I don't think I'm ever going to get an insight into how Rudolf Bayer came up with red-black trees... an intuition I feel is much more useful to an inquisitive mind than learning the special cases and transformations taught in CompSci and detailed on Wikipedia, or proving overall asymptotic complexity. TAOCP is a masterpiece but, as a reader, I will never feel the way Knuth feels writing it.


Knuth's TAOCP is an in-depth encyclopedia of algorithms. It analyses algorithms that Knuth has learned from the field, but its purpose was never to develop "intuition" for algorithm design.

Algorithm Design by Jon Kleinberg is an expensive book, but it teaches algorithm design methodologies. Instead of covering "Linked Lists" or various "Sorting" or "Seminumerical" algorithms, Algorithm Design teaches approaches to designing algorithms.

--------------

Personally speaking, I have more hope for smaller books such as "Effective C++" or "Exceptional C++", which stand head and shoulders above the best "web advice" on C++ (i.e. the C++ FAQ, which is very good C++ advice but doesn't stand up to the books).

That said, major discussion of deep programming features and debates is now conducted online. Scott Meyers cites internet discussions of C++11 / C++14 as a major source of material for Effective Modern C++ (his new C++14 book).

Similarly, "Exceptional C++" is basically a collection of Herb Sutter's blog posts (Guru of the Week), updated based on comments from the internet. Sutter wrote the material, opened himself to internet criticism and discussion of his ideas, and then released the distilled work as a book. The book is of course higher quality than the original blog.

Books will always be superior to the internet, if only because I'm generally willing to pay authors $30 to $100+ for their in-depth knowledge (while the internet is free). But the internet has forever changed how even book-writing is conducted.

That said, the majority of books are crap, so it's best to preview the author's writing style before buying. Blogs are actually one of the best ways to preview an author.


It depends strongly on the subject. For the majority of science and engineering subjects, books win hands down. Some recent examples:

1. Chemistry. Want to know the details of a common battery chemistry? The regular google results aren't helpful -- they're a combination of grade-school demos and premeds incorrectly explaining electrochem MCAT problems to each other. The google scholar results aren't helpful because the relevant literature is too old to be indexed, the trail of scientific discovery is too difficult to quickly and correctly follow, and/or it's in german (with shit OCR so google translate doesn't work).

Meanwhile typing "battery chemistry" into the library search system brings up several relevant tomes, the first one of which has exactly the discussion I'm looking for condensed into the space of several pages. A quick google search reveals that this page was never posted to the searchable internet.

2. Math (or Chemistry or Physics). Want to know the precise definition of a symbol you keep seeing? Too bad: google doesn't know how to search for formulas or symbols (btw, I'd love to be wrong here). Naturally, the journal article you're reading doesn't bother to define it, so you're SOL.

Unless you find a similar discussion in a book -- in that case, you just look in the front or the back and 80% of the time you'll find exactly the precise definition you need. The other 20% of the time you have to binary search backwards until you find the point in the book at which the symbol was defined. Easy enough.

3. Engineering. Want to find a cohesive discussion of X? If you go to google, you'll find 30 ppt presentations with piss-poor production value, big useless unexplained formulas with undefined terms and not enough discussion to fill in the blanks. In contrast, if you check out a book you can find a cohesive introduction via the index. If you're lucky, it even includes motivation: "we argue that X is a linear transform, we project it onto basis Y, blah bla blah" rather than a big nasty tensor equation. Maybe you read the first few paragraphs in the chapter if it's too hard and you need pointers to further supplementary info.

4. Computer Engineering. If nobody is willing to pay to make good digital documentation, sometimes engineers are still able to get funding for a book. I learned about Mach and mDNS this way -- the books were amazing compared to the digital docs and well worth 5x their price (by which I mean their printed price, not the $0 I paid to check them out).

5. Open Source. Books are a great open source business model because two magical things happen that wouldn't happen otherwise: 1. the author of the documentation gets paid, 2. the culture of bookwriting imposes minimum quality standards on their documentation. You get to learn the story. You feel like you're along for the ride, not someone who got dropped into a room full of people who already know what's going on and cantankerously respond to your questions with "google it" despite the fact that your post documented the search terms which failed to produce useful results.

----------------------

Books have a degree of cohesion and completeness that websites and journal articles lack. Since it sucks to follow printed citations, authors err on the side of including everything you need. The culture of bookwriting figured out long ago that you need to include the boring stuff too -- and that's a lesson the internet has yet to learn (if it ever will).

Also, the internet hasn't been around for long enough to capture every subject's burst of initial excitement + willingness to write about it. The list of things the blog-o-sphere doesn't know or care about (literally) fills libraries.

Science & engineering bloggers willing to write highly informative articles are a scarce bunch. Historically, that communication happened in books, so that's where you often need to go to get what you want. The situation is a lot better in computer engineering; y'all don't know how spoiled you are :P


I think the bigger gripe (and summary of your comment) is the following: books and the internet present information in a completely different way.

A book is by definition an end-to-end, self-contained experience. Even on highly-specific scientific topics, try to find one that doesn't bookend with context.

The internet has evolved into a place where it's normal to create an extremely specific piece of content without context. After all, that's what linking is for ("Or they could always go read Wikipedia!" Ha). Moreover, I would argue that the expert context that's necessary for ideal grokking in fact doesn't exist. On an information-wide scale, no one creates just context for other things: Khan Academy provides it by building end-to-end learning experiences, and blog posts create it in an extremely limited and piecemeal manner (we've all been fortunate to read one of those "Eureka!" expert blog summaries).

Indeed, it may not even be possible for such content to self-generate without external impetus. Where do shallow articles come from? Lay-persons (for lack of a better term) researching and writing for other lay-people (minimal time investment, maximum audience = $$). Where do hyper-specific articles come from? Trained-persons writing for other trained-people (maximum time investment, minimum audience = $grant$).

Can anyone suggest a motive for "trained-persons writing for lay-people" (e.g. deGrasse Tyson or Sagan: maximum time investment, quasi-maximum audience) to exist, aside from altruism and digitized introductory academia?

If you want to find a tragedy of the internet, it's the fact that it never evolved a systematic context-creation process. And so we don't have any. There may have been proposals in the original design of hyperlinking, and there were the 90s/00s "curated human indices of links". However, the "good enough" of modern search engines seems to have precluded the time investment necessary.


Exactly! It's half a problem of legacy and half a problem of motivation.

Books and review articles (if you're lucky) are the only things I know of that plug the gap.


Try using Google Scholar. Here's an example query for battery chemistry:

http://scholar.google.ca/scholar?hl=en&q=battery+chemistry&b...


Let me refer you to part of my post you seem to have missed:

> The google scholar results aren't helpful because the relevant literature is too old to be indexed, the trail of scientific discovery is too difficult to quickly and correctly follow, and/or it's in german (with shit OCR so google translate doesn't work).

Also, irony:

> You feel like you're along for the ride, not someone who got dropped into a room full of people who already know what's going on and cantankerously respond to your questions with "google it" despite the fact that your post documented the search terms which failed to produce useful results.


60% of your complaints are about what Google does when given an extremely vague search term, from 1 of the 6 billion people on this planet who can interact with it.

Well duh. Google's general search is probably not going to be very helpful when you already know the specific information you're looking for. Use a specialty site - hell, use Google Scholar, which they built for exactly this type of problem.

But Google is most definitely not "the internet".


Let me refer you to part of my post you seem to have missed:

> The google scholar results aren't helpful because the relevant literature is too old to be indexed, the trail of scientific discovery is too difficult to quickly and correctly follow, and/or it's in german (with shit OCR so google translate doesn't work).

> Use a specialty site

I happen to know the specialty sites that contain the information I'm after. You know what form it comes in? Scanned PDFs with poor OCR behind a paywall that I can get through by forking up $100 or by going to the library.

I have no way of proving to you that I'm a competent search-term picker, but do you really find it implausible that certain domains of human knowledge are poorly indexed and difficult to search using plain text?


> I'd love to be wrong here

I've found Shapecatcher (shapecatcher.com) useful here; you draw an arbitrary symbol (e.g. ∑), and it shows you the closest matches from a database of ca. 11000 Unicode glyphs, with each match's code point name (e.g. N-ary summation) and block name (e.g. "Mathematical Operators") -- and once you've found the name of the symbol, you can Google for its meaning.
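
Once you have the glyph as text, Python's standard unicodedata module gives you the official code point name directly, which you can then feed to a search engine:

  import unicodedata

  # Print the code point and official name of each glyph.
  for ch in "∑∂ℝ":
      print(f"U+{ord(ch):04X}", unicodedata.name(ch))
  # U+2211 N-ARY SUMMATION
  # U+2202 PARTIAL DIFFERENTIAL
  # U+211D DOUBLE-STRUCK CAPITAL R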


Thanks, but finding unicode glyphs isn't my problem. I can do that easily enough with the OSX character palette or Julia (which automatically turns TeX escape sequences into unicode characters on the command line). My problem is finding the meaning of a symbol (or composition of symbols -- one above the other, in a subscript, underlined, in a left subscript, etc) in a specific context. Usually what happens when I put a unicode symbol + context into google (or google scholar) I get the worst of both worlds: false positives from the symbol mixed in with the hits I would have gotten with the context terms alone.


"Apparently there once was a kind of obsolete proto-blog that was called a 'book.'" [1]

But even with real books, the internet has made them very accessible. You can find PDFs for all sorts of new books if you know where to look - not everything yet, but a lot. Amazon has as good a search interface as any library's DOS terminal I've ever used (I'd argue a much better one). All the fascinating facts, stories, and amazing information are on the web too, along with a bunch of other stuff. I'd say the author just needs to refine his webcrawling.

[1] http://unqualified-reservations.blogspot.com/2007/07/univers...


Almost every one of those books you find in the library is on the internet; the reason you're not finding them is payment.

Publishers don't put their books on the net for free; they want to make a living. Libraries, on the other hand, pay publishers for the books, but only have a limited number of copies. You can probably access most of these books behind a paywall, but paywalls filter out search engines like Google.

It is kind of silly comparing the modern internet to a library, because technologically, the Internet and the computer win by a large margin. Every book in my local library could be digitized and put on my computer, and I'd still have a huge amount of space available. My computer can make unlimited copies of that information. My computer can OCR the scans and make the content of the books searchable, something a physical library cannot do. And I can share my entire library with everyone on the planet who has an internet connection.
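
As a sketch of how little machinery the OCR step takes (this assumes the pytesseract wrapper and a locally installed Tesseract engine; the file name is hypothetical):

  from PIL import Image        # pip install pillow
  import pytesseract           # pip install pytesseract (needs the tesseract binary)

  # Turn a scanned page into plain text; once it's text, it can be
  # indexed and searched like anything else on your machine.
  text = pytesseract.image_to_string(Image.open("scanned_page.png"))
  print(text[:500])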

I'll repeat, this is not a problem with webcrawling. It's a problem with financial incentives.


Well, it's not literally true, though it is close to being so - there are some libraries out there that are literally just a few dog-eared books on a rack. But more importantly, the converse is also true - people find things online that they'd never have found any other way.

Witness the meme of getting lost exploring links on Wikipedia. That's a relatively new phenomenon, to read an encyclopaedia that way. When I was a kid, we had World Book Encyclopedia, and I read every volume cover-to-cover... but most people didn't do that. Wikipedia (and the internet in general) has given the wider public the ability to get lost in a reference work.

Ultimately the article sets up a false dichotomy. There's no reason why you have to have only one or the other. This being said, 2003 was a fairly early time and the public experience of the internet was still maturing. British English and American English users were still in violent opposition over the name of '#', for example...


On the web we also have the vast collection of Google Books (just because it's out of copyright doesn't mean it's uninteresting or useless), research papers from every academic area, high quality open source software to learn from, newspapers from around the world, etc. - which you won't find in most local libraries. Yes, lots of what's on the web is shit, but there's a lot of good stuff too.


It is only a matter of time before that is no longer true. Digital materials are the name of the game in the world of libraries now. I worked at the research library of a large technical university, and we only kept our physical collection to avoid student protests. (Throwing books away has a strange effect on non-librarians.) This trend is slowly creeping into the public libraries as well.


Of all the people who would be distressed at the idea of throwing away books, I'd have thought librarians would be the first!


Have you read the stuff on Medium? Maybe not a full book yet, but you can find the equivalent of a multi-page magazine feature on many, many subjects.


Why did you change the link to the plain single-page version? The pacing and design of the original was very intentional. This is how it was meant to be experienced: http://www.internetisshit.org/

Also, for fun you ought to WHOIS the domain name. Here’s a sampling:

  Registrant Name: alain a-dale 
  Registrant Organization: iis 
  Registrant Street: Sherwood 
  Registrant Street: Forest 
  Registrant City: Nottingham 
  Registrant State/Province: State 
  Registrant Postal Code: 111111 
  Registrant Country: US
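
For the curious, WHOIS itself is a trivial line-based protocol (RFC 3912): open TCP port 43 on the registry's server, send the domain, read until EOF. A minimal sketch; the .org server name is my assumption:

  import socket

  def whois(domain, server="whois.pir.org"):
      # RFC 3912: connect to port 43, send the query line, read the reply.
      with socket.create_connection((server, 43), timeout=10) as s:
          s.sendall(domain.encode() + b"\r\n")
          chunks = []
          while True:
              data = s.recv(4096)
              if not data:
                  break
              chunks.append(data)
      return b"".join(chunks).decode(errors="replace")

  print(whois("internetisshit.org"))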


I still find it refreshing that you can buy a domain name without publishing private contact information. I do not look forward to the day I go to buy a random domain and find that they want a verified ID.


There are proxy registrars for that, if you're worried about it. After all, it's not required that your actual contact information be present in the WHOIS record, only that you can be reached via the contact information provided there.


Absolutely, and I also make use of these when needed. But they cost extra money and for quick and dirty projects it's refreshing to not have to pay for privacy. We now live in a world of "real names" and "show me your passport to use my service".


I see your point, but WHOIS contact information requirements are hardly a new thing.


That design is really shit. I preferred the plaintext.


I'm not sure I get it - is it purposely shitty design?


It might be a good headline to get attention, but the Internet is not shit. When I was first exposed to it (in 1989, even before the World Wide Web was a thing) it blew my mind. My first thought was that this was going to be as powerful as the invention of the wheel. This incredible tool, as might be expected given its immense scope and potential, has over the years become more and more a mirror of the civilisation that uses it. That of course causes it to include unbelievably huge doses of shit (content which is shallow, stupid, ignorant, petty, immoral, illegal etc.), but it also gives us access to an unparalleled abundance of deep, interesting, educational, inspiring and fascinating stuff on any topic at unbelievable speed.

Of course you might still find things in libraries that you can't find on the Internet, but what I know is that today I can find in seconds what would once have taken me days to find in a library (and that's assuming the library even had enough information on the specific topic I was interested in). There are things I can (and do) find today, and people I can contact with a quick search, which I never could have found or known to exist in the "library only" era. It might require honing your search and "shit filtering" abilities, but that's a small price to pay for what you get.

The article seems to be expressing disappointment at the disappearance of the "good ol' days", but I was never one to reminisce about the "good ol' days". I'm only interested in the good new days, and I believe that, taking everything into consideration, they're only getting better. Exponentially so.


I for one am tired of people ripping the medium, whether it be cellphones or Instagram or the internet. People need to realize that it's the message, not the medium.

Wait, OP said that. Ironically, that very statement is a fantastic counterargument to everything s/he says. Saying that the internet is unreliable is like saying books are unreliable. It's such a general statement that it can't be taken seriously.


People often forget that most of life is just boring or trivial, so when they're exposed to a new medium they see all the boringness and triviality and dismiss the entire medium. In reality, a representative example of literature is a shitty fantasy novel, not Shakespeare. It's only because we've had centuries to accumulate lots of great material, and billions of hours of work put into judging, reviewing, and curating the works of literature that are of value, that we value and respect the medium.


"Fiction is self-perpetuating."

I've found really interesting ways that this manifests itself.

I'm a film-camera lover. Well, my Fuji GW690 mk3 is fantastic, except that when I trip the shutter it makes an awful "ping" sound. I read forum after forum and watched YouTube videos that all said the same thing: "Remove the counter assembly and it will remove the ping."

Getting a hair up my derriere, I decided to do this and found, after much work, that the ping sound was in the shutter assembly and could not be mitigated.

Check the work here. http://www.rangefinderforum.com/forums/showthread.php?t=1447...

Anyway, just one internet myth amongst millions, I'm sure. I was just so surprised that people would parrot other people's lies like that. If that's the case with something so minor, I can only imagine the amount of misinformation out there (where people have something to gain).


Thanks for providing an actual example of this. As someone whose hobbies include fixing and improving things, that sounds like a spring that's ringing. I think you could dampen it if you could identify the exact one and put some sound-absorbing material on it, or if you find the end-stop some part is hitting sharply on, put something soft on that.

As another datapoint, service manuals for various equipment (including cameras) are another difficult thing to find, as Google really thinks you want the usual brief and increasingly useless instruction manual even if you very explicitly type "service manual" in your query.


...And it's only gotten worse from there.

You know what I miss? Personal home pages. When's the last time you saw one of those? Somebody taking a bunch of stuff they'd made or written about, embedding it in HTML, and slapping it up on the web?

The closest thing we have now is blogs, and they're not all that close.


Every university professor's web page is like that. They all look awful.


Funny. Respectfully, I think they look great.

I always get the information I need - books and research they've been involved in, career history, a few personal details (family, hobbies, interests, where they live), and some links to more websites I never would have found otherwise. Maybe they don't have exciting animations, memorable logos, or even colored backgrounds, but it beats searching my university's over-engineered, information-starved profile on them.


They tend to be aesthetically unpleasant but high in signal-to-noise ratio, yes. Having both would be nice, but if I had to choose one, it would be the latter.


guilty. manual html tags, no css, a couple center and table tags here and there. the page loads fast.

my powerpoints have (d)evolved to become just as bare now, too.


It's possible to do that with nice design, taking the lessons of blogs - see gwern.net. I assume that people prefer to write blogs because they don't want to spend the time categorizing, organizing and rewriting that it would take.


"Fiction is self-perpetuating"

There are several ways that people react to this. One way is to see it as a fundamental and unique fault to the internet and use it as a reason to write off the internet as a useful or serious medium. This is a mistake. Another is to see it as an opportunity to reevaluate our assumptions of trustworthiness in general.

The truth is that no encyclopedia is more trustworthy than Wikipedia, no news medium is more trustworthy than the internet. We've merely been willing to abide by the faults and biases of familiar media. But what you see on tv, what you read in the paper, what you read in books; all of it is just as vulnerable to persistence of fiction as the internet. In some cases we tend not to be aware of such things merely because it's harder to check.


> But still we praise the internet for everything, from mobilising global protests to creating the latest trends

Except that's true and it's much more apparent today than in 2003. What happens throughout the world is that the mass-media is basically owned by oligarchies, with journalists being sellouts shaming their profession.

And for example, what has happened in my country several times is mass censorship of opinions and facts that went contrary to the whims of the established power. Not by any decree, mind you - it's not that kind of censorship. I'm talking about major television channels and newspapers simply ignoring events or twisting facts in gross manipulation attempts. Imagine 30,000 people protesting on the street with no media coverage. Imagine people standing in line for 8 hours to vote at the London embassy, while the major television channels and the government's spokesman were reporting that there were no lines.

And so in my country at least, the only channel for reaching the truth really is the Internet. And it may be full of shit in general, but that goes with its open nature, as an Internet that isn't full of shit is not the kind of Internet I want.

As to the points raised, those kind of scream "first-world problems". If you have public libraries within reach, stuffed with useful material, you may not realize it, but you're lucky.


I think you are right and the OP comparison is not very relevant.

She (?) compares a library (a place that stores only the relevant / interesting books) to the internet in general. I mean, 90% of everything ever written is probably shit too - it's just not in the library.

However, I observe a kind of "Wikipedia, I know everything" generation; I think it is somewhat connected to the article, and it's very sad.


The idea that "90% of everything is crap" is often attributed to science-fiction writer Ted Sturgeon.

http://en.wikipedia.org/wiki/Sturgeon%27s_law


The problem with the internet is that its positive attributes are so awe-inducing that they make it impossible to realize its limitations.

I believe we could have something much, much better, and there are concrete steps we could take to improve things.

For a concrete exploration of how much better things COULD be, I suggest the following video: https://www.youtube.com/watch?v=GJGIeSCgskc (BTW I am just saying these people realize the internet is shit and are trying to do something about it, I'm not suggesting their particular solutions are the ones that will win out in the end.)


Why did he use the internet to communicate it to such a large number of people? He should have written a book and stuck it in some unknown, underfunded public library.


I guess because he'd like to see a better internet


Anything with mass will have taints, I guess. That's why I like low-bandwidth mediums (ML, IRC): they allow less bloat. Makes me reconsider the meaning of progress.


I wonder if there's a discussion about net neutrality somewhere in there.

Although I really think the real issue with the internet right now is its monolithic, centralized, HTML-oriented architecture. I want more decentralized technologies.

I'm curious, but I don't think the number of websites has increased the way the number of internet users has.

Also, the fact that people are alone despite the success of social networks is proof that the internet is failing.


This was written when 'the Internet' meant Internet Explorer 6, before Facebook or the iPhone or YouTube.

Since then, shit has evolved. Smartphones brought 'the shit' into everyone's pockets and social networks made the shit ubiquitous. We now have access to all the movies, music, and documentaries (back in 2003, broadband was still surfacing, and HD video streaming was still a wet dream). Basically, anyone, anywhere can consume everything that was ever learned or created and communicate with anyone, anywhere on the planet. This is incredible.

The awesome part of the Internet (and technology in general) is the part of the iceberg that's above water. Below water lurks a chunk of ice so big and dark that few people want to look at it or acknowledge that it is indeed an iceberg, not a fluffy white mountain of awesomeness.

Below water there is misinformation, mass surveillance, information wars, espionage, monopoly and control on a global scale.

But the darkest part is that we are totally addicted to this shit. We sleep with our phones. We 'go online' (2003 term) when we wake up and offline when we're asleep.

We have to read, watch, comment and discuss everything and then go ahead and forget everything the next day, because, well, there's more shit to read, watch and comment on.

By 2014, the Internet, with all its infinite information, has created a generation of clueless, spoiled information and entertainment junkies who live in a system of control beyond any dictator's wildest dreams.

Yesterday I saw a stroller with a smartphone holder; the one-year-old was watching videos while his father pushed the stroller through the park. Instinctively I understand that this is fucked up, but I can't explain why, because I know "that's the future".

Part of it. But there are other parts of the same future that scare the shit out of me. The fact that it has accelerated the rate of planetary destruction to unprecedented levels, by making globalisation possible and required. The fact that tech is now controlled by a handful of corporations with mantras like 'don't be evil' - which is telling, because it means it is possible to be evil with this tech, and the amount of evil that can be done is equal to the amount of good it brings. This tech is a godsend for evil people, and in many countries control of the Internet is the most important condition for holding power with an iron fist.

Maybe, just like the splitting of the atom, the Internet is one of those things that shouldn't have been invented?



Thanks; changed.

Interestingly, the previous post was 7 years ago: https://news.ycombinator.com/item?id=159353. I wonder if that's a record.


Obligatory "everything today is terrible and it's those kid's fault" quotes from every single time period, in every single culture, since the dawn of recorded history.


It says "the medium isn't the message", yet the article happily criticizes the content while speaking of it as the medium.


Also, s/he should have published it in their local library :P


Is it ironic that this guy bought a domain name specifically for this? Or does he just know his audience?


"A URL is not a mark of quality. It's not proof of honesty or approval from the FDA."


Remember, in 2003 it was still generally possible (and socially acceptable) to get a domain name for any little thing you wanted to put online.

The fall of the personal home page and the meteoric rise of domain squatters have pretty much ended that notion, and I think it's sad. Just one more thing we've lost since the early days of the net.


I think both


I'm just waiting for a sunspot to fry half our tech infrastructure (again!); perhaps then we will see the value of alternative channels for education, information and social function.


Yeah, that's a problem. You cannot be happy on the Internet - only with other people.



