How You Know (paulgraham.com)
686 points by _pius on Dec 15, 2014 | 257 comments



There's also a danger associated with this phenomenon. If the source of your mental model is later debunked, you may not realize that you need to revise your model precisely because you don't remember what your source was. You may even read about the debunking of the source but fail to draw the connection and realize the implications it has for your own view of the world.

I think that perhaps very squishy subjects like politics are particularly vulnerable to this sort of disconnect, where a complex viewpoint is formed based on the hot topic of the day, and this viewpoint persists for years or decades even if the basis for its formation is completely forgotten.


This is an important corollary of Paul's essay that I wish he had mentioned. Sometimes when you update your model of the world significantly, you still have a cache of previously computed facts that were computed using the old model. You have to clear out that cache and recompute with the new model, and that takes a significant amount of time. Often, rereading is a core part of that, since it forces you to revisit a lot of source material and recompute.
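
(A toy sketch in Python of that cache analogy; the names and values are made up for illustration, not anything from the essay. Conclusions memoized under an old model stay stale until you deliberately clear them and recompute.)

    # Illustrative sketch of the "cached facts" analogy (not anyone's actual system):
    # beliefs computed under an old model stay stale until the cache is cleared.
    world_model = {"source_reliable": True}
    belief_cache = {}

    def conclusion(question):
        # Return a cached belief if we have one, even if the model has since changed.
        if question not in belief_cache:
            belief_cache[question] = "trust it" if world_model["source_reliable"] else "doubt it"
        return belief_cache[question]

    print(conclusion("claim from that book"))  # "trust it"
    world_model["source_reliable"] = False     # the source gets debunked
    print(conclusion("claim from that book"))  # still "trust it" -- stale cache
    belief_cache.clear()                       # the "rereading" step
    print(conclusion("claim from that book"))  # recomputed: "doubt it"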


"Research shows that even when news reports have been retracted, and we are aware of the retraction, our beliefs are largely based on the initial erroneous version of the story. This is particularly true when we are motivated to approve of the initial account." - http://mindhacks.com/2011/05/04/why-the-truth-will-out-but-d...

Original Paper: http://pss.sagepub.com/content/16/3/190


This is what makes it so dangerous that so many people get "breaking" sensational headlines from partisan blogs. It becomes calcified reality for the vast majority of people; most folk are not hygienic with their belief structures. I actively seek disconfirmation of my beliefs, but most people seek only reinforcement.


Yep!

How facts backfire - Researchers discover a surprising threat to democracy: our brains

"In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger."

http://www.boston.com/bostonglobe/ideas/articles/2010/07/11/...


This is the commitment and consistency effect.

The extreme cases can be found in cults where followers cling tighter to their beliefs once exposed.

A great example is the followers of Harold Camping, the Christian radio broadcaster who predicted the end of the world a few years ago and kept re-predicting it when it never came.

http://en.wikipedia.org/wiki/Harold_Camping


But to complete the Camping story, and validate your argument (you still believe only the original, not the retraction): Camping stopped after two failed dates, re-read his source material, and stated that he no longer believed that anyone could predict the end of the world. (Source: Netflix documentary on Camping)


I've seen this anecdotally and it scares the shit out of me that it's been verified statistically. Thanks for sharing.


This tends to happen to an exceptional degree when news has a strong emotional impact, which is of course what sensationalized headlines try to emphasize (whether out of partisanship or because emotionally charged information is more viral).

People's belief systems about the relative threats of terrorism and pedophiles are thus distorted by the media, because the amount of time they spend watching terrorist attacks or episodes of "To Catch A Predator" is massively disproportionate to the actual level of risk those things pose, and the emotional impact of that media is even more disproportionate.


That's one of the core principles of how propaganda works. And Russians are currently utilizing it perfectly.


The Russians are utilizing it all right. However, glancing at today's headlines, e.g. "Russian rouble in free-fall despite shock 17% rate rise", I'm not sure how well that's working out for them.


Their propaganda is targeted mainly domestically. Despite economic difficulties, Putin's approval rating is still skyrocketing, so I'd say it does work really well.


Reminds me of "The Black Swan"... (paraphrasing) Disconfirmation reveals more than confirmation.


I've always wanted a short name for this phenomenon, where a lie takes hold and never leaves no matter how thoroughly it is debunked. Maybe the 'swift boat effect'?



It's the importance of first impressions.


Cached confirmation bias


"The same book would get compiled differently at different points in your life. Which means it is very much worth reading important books multiple times."

For me, this quote was the most powerful. I strongly and immediately agreed, yet I hadn't consciously considered the idea before. I now ask myself the question, "Which ones were the important books?" Some seem obvious, but I may have a deeply rooted worldview established a long time ago that needs to be reevaluated. I may have read books that added support to that worldview, and have since forgotten from where that support came.

I'm not very well read yet, but I have a question for those of you who are. What are some methods you utilize to remember which books are the important ones?

Edit: The comment at https://news.ycombinator.com/item?id=8753656 contains a great idea. I agree that taking notes and writing a journal can be great solutions, but I don't often read the notes and entries I've written. A personal wiki that is searchable and contains references seems very interesting.


The nice thing about reading is that it's not very risky. You can pick up a book and jump around in it; if it's interesting, continue, and if not, don't. You don't have to plan ahead, just... do.

When I look back, I find that most of the things I've learned that I enjoyed the most, am the most proud of, or have been most helpful, have in fact been accidental, and not a result of planning. I found so-and-so article then downloaded so-and-so package, experimented with it and realized it could benefit a certain project I was working on. Sorta like going down a Wikipedia hyperlink rabbit hole -- very in the moment, just chasing your will and not questioning yourself, doing simply what feels right.

Look back at your bookshelf and pick what feels right. If you start reading it and it feels wrong, then put it back. Being told what to read is annoying, so why would you do that to yourself? Do what you do when you're playing music improvisationally, but with your life instead.


One way to better internalize information/concepts for me has been reading a lot of books on the same topic. For example, at different times in my life I have been interested in Finance, Behavioral Finance, Stoic Philosophy, GISes, etc. I typically read about 4-5 books on the same topic and it helps me internalize the concepts better.


Not just the concepts, it's also great to learn and keep the language specific to the topic. (Especially as a non-first-language speaker.)


What types of books are important to you?


>What's "important"? (That may seem like a misguided question, but it's not.)

I'm not certain if this question was meant for me directly. If it was rhetorical, I apologize for misunderstanding and answering.

I prefer not to define "important" in this context to avoid excluding any definitions subjective to those who may respond to my question. That way, I may learn both what makes a book important to someone and tips on remembering which were important.

Edit: Now that I know the question was intended for me, I'll provide a little more depth to my answer.

The first thought I had when considering what makes a book important is how strongly it resonated within me, and the intensity of my emotions when reflecting upon what I've learned or how my perspective changed shortly after reading it. I know those stronger emotions may derive from a bias I had at one point in my life, and may no longer have.

Therefore, I can't help but think my definition is wrong because it's relative to the period in my life in which I read the book. So some books that were important before may not be now. That's why I was curious to learn others' definitions of "important" books, and how to identify them for rereading.



For an example of this, ask yourself: What is the world population?

The longer it has been since you checked, the further off the number will be.

https://www.google.com/search?q=world+population


My kids recently caught me claiming the world population is 6 billion people. Having learned the number more recently, they had the more accurate value.


While you make a good point (that one always needs to ensure the facts they know/take for granted are correct and haven't changed), it doesn't seem that is what he is referring to.

The need to reread and reconsider books, ideas, etc. is not to check if the facts have changed, but to make sure you didn't miss a point because of lack of knowledge or bias.

For example (on a very basic level), I find that if I reread a book on a programming language after using it for some time I notice things which I missed on a previous reading or wasn't able to appreciate due to lack of familiarity with the language. The same can apply to history - or any other study - where one only appreciates certain details after understanding the larger picture and surrounding events.

For this reason there is an advantage both in rereading the same book by itself (immediately or after some time) and in rereading it after studying the topic further from other sources.


Unless you've checked the number enough to have realized a pattern (in this case a growth rate).


People don't grasp exponential growth[1]. That pattern will be useful for a very short time, and then you are wrong again.

And then, when you finally accept that you don't grasp it and start calculating with a formal growth rate, the growth rate suddenly changes.

[1] Nobody does. Some people know they don't, some are completely naive, and the others lie to themselves. The first group has a chance of dealing with it correctly.


Population growth isn't exponential. The rate of population growth has been steadily declining since the 1960s, and I believe the world population is expected to level out around 9 billion people in a few decades.
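
(A rough numeric sketch of the difference, with assumed round numbers rather than real demographic data: a fixed-rate extrapolation overshoots once the actual growth rate declines.)

    # Rough illustration with assumed round numbers (not real UN data):
    # extrapolating with a fixed rate overshoots once the true rate declines.
    population = 6.0e9      # assumed starting point
    fixed_rate = 0.012      # naive assumption: growth stays at ~1.2%/yr
    actual_rate = 0.012

    naive = actual = population
    for year in range(30):
        naive *= 1 + fixed_rate                       # exponential extrapolation
        actual *= 1 + actual_rate                     # path with a slowly falling rate
        actual_rate = max(0.0, actual_rate - 0.0004)  # rate declines each year

    print(f"fixed-rate estimate:     {naive / 1e9:.2f} billion")
    print(f"declining-rate estimate: {actual / 1e9:.2f} billion")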


wait, did you just cite yourself?


I think that's just meant to be a footnote.


Worse still, many of these are formed in childhood from large misunderstandings. I'm constantly finding myself "closing out" bogus lines of thought that are decades old.


I've argued with grown adults about childhood facts learned in error. It's frustrating, although I'm certain I have beliefs of my own that are still incorrect.


[deleted]


You're overestimating the difficulty of Facebook by more than a factor of 50. Not to mention that similar systems already existed. In fact, this is the first time I remember someone claiming that actually building the initial FB product was challenging (versus timing, acting on the idea, marketing, etc.)

Edit: the parent comment was saying it's so much easier to work with inexperienced people, and that an experienced "corporate" dev couldn't make Facebook if all the screens were given to them.


A piece by Ted Chiang, "The Truth of Fact, the Truth of Feeling", talks about some of these issues from a couple of different and interesting perspectives which I found really intriguing. One of them is a thought exercise about what it would mean to have the ability to accurately replay everything we'd ever seen or done, and how two people who went through the same experiences can come out with vastly different memories and interpretations. It's a good read :)

http://subterraneanpress.com/magazine/fall_2013/the_truth_of...


The idea of replaying moments of our lives reminds me of a Black Mirror episode from Season 1. "Set in an alternative reality where most people have a 'grain' implanted behind their ear which records everything they do, see or hear. This allows memories to be played back either in front of the person's eyes or on a screen, a process known as a 're-do'." Without giving too much away, the episode explores how new solutions to human-problems create unforeseen issues.

http://en.wikipedia.org/wiki/The_Entire_History_of_You


Came to add that (US) Netflix subscribers can watch this episode right now at http://www.netflix.com/WiPlayer?movieid=70264856&trkid=20010...


I found that one to be one of the less dystopian episodes.

My explanation is linked because I don't want to spoil the episode. (Also spoils 15 Million Merits.) http://paste.lisp.org/display/144706


Phrasing to avoid spoilers for Black Mirror...

I think that the efficiency of technology shown in TV shows and movies shouldn't be assumed to imply something about the world therein, unless it's explicitly called out in the work itself. The writers are much more likely to have gone with something because it sounded cool than because there was a sound technological reason for it. The bit in The Matrix where someone refers to someone else as "copper-top" is an example, I think.


To address your point about Fifteen Million Merits: the way I had the exercise bikes framed coming into it, the "system" being presented is a prison (a microcosm of the larger society outside), and riding the bikes is part of the inmates' punishment/sentence.

This is what makes the episode all the more disturbing to me. This would be the far-west's answer to a demand for less retribution-oriented prison designs, like a virus becoming less deadly so that it can infect more people. (How many on Hacker News would tout the virtues of free services as "you only have to watch ads, you're not forced to buy anything"?) You have "free choice", but it is only within a heavily curated model of illusory flavors; yet the vast majority of prisoners in this system are content, even euphoric, within it.

As Eddie Izzard put it:

> I know a lot of people who'd love to be under house arrest! They bring you your food... "Just stay here? Oh, all right. (laconically) Have you got any videos?" You know, you just sit there all day...


My take on the role of exercise bikes in the Fifteen Million Merits episode is that they're more of an abstract representation of what is most likely a meaningless job.

It's possible that the bikes are used to generate electricity, but that would be very inefficient and, also, not a very creative interpretation. Maybe they're part of a fitness experiment - the participants have an exercise regimen that they keep and are being monitored for the effects the exercise has on them. In return, they are provided with lodgings and food. After all, people have been paid for weirder things in modern times. I can think of a few more scenarios where the bikes are a part of some data gathering/social experiment thing.

Overall, I think Black Mirror has a trait that distinguishes good fiction from bad - it manages in each episode to create a world that is abstract and unfamiliar enough to play by its own rules, but at the same time allows parallels to be drawn between the fictional world and our reality.


The movie Strange Days, where experiences are recorded on a personal recorder and can be played back. That movie is from 1995, though, so an updated version would have those experiences transmitted to the cloud.


This is SO good (though it's a short story, not an article).

Ted Chiang is always worth reading. I think he captures the nuances of the impact of technology better than any other writer I've read.


Over time, my 'urban legend' bullshit detector has been refined, and is much improved from earlier in my life.

I have, on a few occasions, found myself relating some "fact" or "story" and realizing immediately that it was likely an urban legend. Each time, the detector was right, and a quick search cleared it up.

I wonder if there is a good way to intentionally "paw through" your repository of such beliefs and apply your life experience to them on purpose.


very interesting


In practice, people rarely if ever revise their mental model as the world around them changes. Which is why a younger generation without this baggage inevitably rises to supplant the older one.


"It's not that old theories are disproved, their supporters just die out".

The best people adapt to new information, no matter their age.


The best people do, indeed. I'm still fairly young, but I'm starting to understand that "adapting to new information" is not something that happens by accident. "Continuous learning" is a lifestyle thing; I've never had to think about it. But forcing myself to re-evaluate things that I already thought I knew--that is a conscious and often difficult process.


The hardest part about this is accepting that you might be wrong. That your entire world view might be wrong, or one-sided. We humans have this uncanny ability to justify anything we just did, or said. It takes a lot of conscious effort to accept that the other person might be right.

But only if you truly realise that you are not infallible can you learn new things and continue to grow.


"Strong Opinions, Weakly Held."

Also, quotes are a good way of appearing wise.


Perhaps this is the function of conflict (ideally nonviolent and hopefully led by smart people), to speed up the process of change by injecting humanity's shared model(s) with new ideas. When older models become problematic, people have a duty to overturn them, toppling them like tables outside a synagogue.

"The extent of and continuing increase in inequality in the United States greatly concerns me. … It is no secret that the past few decades of widening inequality can be summed up as significant income and wealth gains for those at the very top and stagnant living standards for the majority. I think it is appropriate to ask whether this trend is compatible with values rooted in our nation’s history, among them the high value Americans have traditionally placed on equality of opportunity." Janet Yellen


There is something to what you say, unfortunately. Here's an interesting article from the New Yorker about it.

http://www.newyorker.com/science/maria-konnikova/i-dont-want...


I wouldn't say the younger generation doesn't have baggage. As a young man I had a lot of fears I don't have now. I think the biggest advantage of being young is the ability to take on risk, besides the obvious ones of physical strength, stamina, and truly original thought (which seems to evaporate at the end of your twenties?). So yes, as a young man I took on risk; some out of naivete, some out of dumb luck, some because I truly thought it was a good idea. As a poor youth, I realized I couldn't screw up too much or be too risky. My friends who had wealthy parents could afford to take on a lot more risk than me or my poor friends. Therein lies the rub--most wealthy kids have so many more opportunities than poor or middle-class kids. I just can't ignore the unseen advantage wealth brings to a string of people and their spawn. In the county I grew up in, I can count on two hands the kids who went on to be millionaires. They were all from wealthy families. So yes, the young do supplant the "older one", but they usually had a lot of help. An example is our current Lieutenant Governor (you can guess which state). I saw his family pick up one failure after another, and he just might be president of the United States one day.


This is why it is imperative to build your model up from first principles. Before adding anything to your model, it should be filtered through proper logic. This makes it easier to build stable models without contradiction in the first place, and also makes it easier to follow a chain of reasoning about a complex viewpoint back to first principles. "Philosophizing in midstream" leads to incoherent thought with unchecked premises.

The method of loci, for building/encoding hierarchical "memory palaces", works well for remembering key ideas or facts and building upon them to compose your models. Also, software like "The Brain" [1] and other mind-mapping tools are useful aids for organizing information so you can easily go back and remember.

[1] http://www.thebrain.com


But what if parts of your model are based on 'feeling'? Like improvisation in music, for example, or art. There is no 'filtering through logic' here -- of course there is some element of technique, such as mixing of colors or how you hit a drum -- but ultimately it is all about a human element that is complex and hard to convert into logical statements.

It seems as if becoming proficient at something indeed involves moving more knowledge into intuition, same as how you had to purposely look at the rear view mirror and watch cars closely as you learned to drive but now it becomes something of intuition. You had to pay special care to syntax and to grammar initially, but now all of that is habit and you can concentrate on the 'design' of a program, or the 'characters' and 'themes' in a piece of literature.

What if these 'logical arguments' we make for things are simply retrofitted justifications on top of our feelings? Pro-choice 'feels right,' but of course that won't fly in court and so I'll make up some argument for it, becoming disingenuous thus not just to others but even to myself, distancing myself from who I am and causing some amount of strife within. I hope to think I am a perfect rational thinker, but am I really? I am driven by drives, by passions. Maybe I should become aware of this and proceed, acknowledge the person inside me rather than attempt to destroy it with reason. Maybe I should just stop and listen and stop trying to rationalize, both to others and to myself, stop thinking, open my eyes and see life in HD.


A 1990 quote from Heinz von Foerster, http://web.stanford.edu/group/SHR/4-2/text/foerster.html

"Only those questions that are in principle undecidable, we can decide.

Why?

Simply because the decidable questions are already decided by the choice of the framework in which they are asked, and by the choice of rules of how to connect what we call "the question" with what we may take for an "answer." In some cases it may go fast, in others it may take a long, long time, but ultimately we will arrive, after a sequence of compelling logical steps, at an irrefutable answer: a definite Yes, or a definite No.

But we are under no compulsion, not even under that of logic, when we decide upon in principle undecidable questions. There is no external necessity that forces us to answer such questions one way or another. We are free! The complement to necessity is not chance, it is choice! We can choose who we wish to become when we have decided on in principle undecidable questions.

This is the good news, American journalists would say. Now comes the bad news.

With this freedom of choice we are now responsible for whatever we choose! For some this freedom of choice is a gift from heaven. For others such responsibility is an unbearable burden: How can one escape it? How can one avoid it? How can one pass it on to somebody else?"


> This is why it is imperative to build your model up from first principles.

The main problem with this approach (I'll call it the Cartesian approach because it was most famously used by Descartes) is that human beings are less than 100% reliable at logical reasoning. If you make an error anywhere in your chain of reasoning, your conclusions are going to be off and there's not going to be any way to check them. It's just like writing 10,000 lines of code without ever actually compiling it, let alone testing it. You also develop a sort of foolish confidence about the correctness of your own beliefs, which makes it even easier to be wrong. If you make enough wrong turns, you become Ayn Rand.

That's why empiricism is so good. It's not that empiricists don't make mistakes too, but when they do, they find that they are surprised by concrete facts that they observe, and know when to go back and reevaluate.

Another helpful trick is to understand that there are degrees between 0% and 100% confidence. I can entertain a proposition as being possible or likely rather than simply true or false, based on the recognition that I have incomplete information. If you tried to take this approach, you couldn't derive anything logically, because you would just have a multitude of possibilities in front of you. Formal logic only works with statements that are 100% true. Otherwise you're stuck with Bayesian reasoning, which is even more mentally taxing to derive information from.
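
(A minimal sketch of the kind of graded update meant here, with made-up numbers: one piece of contrary evidence weakens a belief without flipping it to false.)

    # Minimal Bayesian update with made-up numbers: confidence in a claim
    # shifts after one piece of evidence instead of flipping between true/false.
    prior = 0.70                 # assumed prior belief that the claim is true
    p_evidence_if_true = 0.20    # assumed likelihood of the evidence if the claim is true
    p_evidence_if_false = 0.80   # assumed likelihood of the evidence if the claim is false

    numerator = p_evidence_if_true * prior
    posterior = numerator / (numerator + p_evidence_if_false * (1 - prior))

    print(f"prior:     {prior:.2f}")      # 0.70
    print(f"posterior: {posterior:.2f}")  # ~0.37 -- weakened, not discarded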

(Or, as an alternative response)

Please derive for me, from "first principles", why it is imperative to build one's mental model from first principles.


> This is why it is imperative to build your model up from first principles

That sounds... time consuming.

> "Philosophizing in midstream"

Since you're picking axioms/first principles, isn't it all philosophizing in midstream to one degree or another?


It's known to be impossible to create a consistent and complete axiomatic system of knowledge [1].

http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_t...


That theorem only applies to deductive formal proof systems.


Do you really follow this? It seems like someone following these rules would rarely be wrong, but that they'd be paralyzed by the cost of making decisions.


From my own experience and observing others, that indeed seems to be the logical conclusion of it, and it's something that I see happen a lot. "Analysis Paralysis", a friend of mine likes to call it. It can be mildly annoying at times, but it seems like something that a lot of people aren't able to help doing. Or at least not without a lot of practice and experience.


> "Analysis Paralysis" a friend of mines likes to call it.

This is such a common occurrence in board gaming that it's frequently abbreviated as just "AP".

http://boardgamegeek.com/wiki/page/Glossary#toc9


Somebody following those rules could easily be wrong. Even smart, logically trained people make errors in logic all the time. That's why they introduce bugs when they try to write code, or errors into mathematical proofs or philosophical systems that sometimes don't get pointed out for years.


Great point - a possible explanation for a wide range of phenomenologically bizarre viewpoints (e.g. anti-vaxxers, birthers, chemtrails, creationists, etc.)


Most of your examples are social membership tests. You have to publicly say "XYZ" to join the club, where "XYZ" doesn't necessarily mean anything, but joining the club does. Like saying Abracadabra to open a door.

(Spoiler alert don't read till the 26th)

Do you believe in Santa? I believe in Santa. Oh how nice you also believe in Santa. Aren't we all happy members of our club? It would be so antisocial for someone to say Santa doesn't exist, because then how would we tell who is in our club? This is the best club ever. Santa? Who's that? We just talk about him to pledge allegiance to each other, we don't actually care about fat dudes in red suits.


Can you expand on this? Are you suggesting that most of the people who say these things publicly don't actually believe them but are instead just trying to fit in to a certain social group?


I'm not VLM, but yes, peer-pressure in another guise. Just think about your attitudes and beliefs on (a) handgun ownership, (b) creationism and (c) Israel. It's very likely that if you're in a social group with strong homogeneous view on one of these items, that there will be a matching "party line" on the others - even though they're completely unrelated. Chances are low that you'll argue the point, you'll probably just stay quiet - but many people will enthusiastically take that party line as their own.


I think that's just groupthink. People tend to act on their beliefs about things like vaccines, Israel, and handgun ownership.

Of course, it's one of the earliest results in psychology that social conformity has an influence on people's beliefs, even on readily accessible questions like whether line A is longer or shorter than line B. To me, if somebody professes a belief, acts on that belief, and works to convince others of that belief, then they effectively hold that belief.


There's always a danger that you'll introduce a bug in your code at some point, right? If you 'pull' from a lot of sources, at some point one of those sources will introduce a bug. But if you have a lot of high quality committers (book authors) and you continue to pull (read) from those sources, you stand a good chance of discovering and squashing these bugs eventually. In fact, as your code (mind) matures, you start to develop procedures to test unproven pull requests (theories/ideas) before merging and committing. You'll be especially careful of unknown authors or authors that have introduced buggy code in the past, while possibly giving full commit access to authors you trust.


You know this is just confirmation bias, right? This is another way of saying you're going to increase your confidence in sources that already agree with your existing mental model, which is the exact problem you're trying to solve!


"You can take my life, but you will never take my .gitignore"?


I wouldn't call that a danger. I would call that being human. We all have models of the world that deviate in some ways from one another, otherwise we would all have identical minds, identical environments and origins, identical heritages, identical ways of interpreting information, and identical ways of composing information into new creations.

Making an erroneous assumption or a mistake can mean that a particular sentiment is expressed "incorrectly", but that doesn't mean its existence is useless. It may prove to be very useful eventually.

The idea that mental models can be incorrect by being compared to other mental models is a strange concept to me. It requires assumptions that can not be proven in their entirety.


>>It may prove to be very useful eventually.

Even if accidentally.


"I think that perhaps very squishy subjects like politics are particularly vulnerable to this sort of disconnect, where a complex viewpoint is formed based on the hot topic of the day, and this viewpoint persists for years or decades even if the basis for its formation is completely forgotten."

Not only disconnect: what I've read of psychological studies indicates that the mind amplifies those facts that support its current model of the world and actively ignores those that do not.

Which goes a long way to explain why it's nearly pointless to argue with fanatics of one kind or another.


In the analogy, it seems more likely that a mistaken short in the wiring is created than that we compile and lose the code all the time. Most of us have a good idea of the provenance of our prejudices (I am pretty sure I can remember which arguments in most debates I get from Fox News and which from the BBC, even if the specifics are hazy).

There are always areas where we do forget the provenance. In my case I stood up among friends in my thirties and announced that ice cream was made from seaweed. After a few blank stares I remembered my father explaining to a much younger self on the beach that I did not want an ice cream - because they were made from that horrible floppy green muck. And so he did not have to make a half-mile trek back up the beach. Yet that "fact" and its provenance stayed at the back of my brain for twenty years.

But even so the provenance was quickly recovered. I am not convinced we easily lose our source code. We can usually point to the module or package even if the line numbers are hazy.

But trauma does rewire us, and quickly, and is not to be discounted - I just think it is not the normal course of events.


While it seems your father had his reason for informing you of that fact at that particular time, he was likely correct[0]. ;)

[0] https://en.wikipedia.org/wiki/Carrageenan


I've accidentally gotten my wife into trouble this way because one of my hobbies is to tell elaborate lies and she can't always tell when I'm doing it.


"Let's not let facts get in the way of a good argument." :-)

Many times we tune out feedback that isn't congruent with our worldview and get locked in. That's why it's good to follow Paul's lead on rereading and revisiting. Sometimes it's hard to remember the anecdote or story that steers us, but finding it again allows us to question it. There was a specific retelling of a Vonnegut (of all authors) story that got me interested in technology. Rereading the story in its original form 20+ years later allowed me to understand that first telling better.


There's an opportunity to outsource mental models to software in a similar way to how we've outsourced basic math to calculators and basic facts to search engines.

A benefit to a digital mental model is that it could be updated based on new data independent of the owner which has all kinds of (possibly disturbing) implications.


I'd love to see the bug reports for that.

"Your program says I should now believe that it's morally acceptable to kill infants. I don't agree with its reasoning."


This reminds me of something I read of Objectivism. The idea that you revise your model of your knowledge as you learn new things. It seems like a difficult task given how our minds work.


I'm curious, do you think it would be beneficial to make a list of where you got your ideas from? Like a sort of list of citations for your model of the world.


It's even worse (at least with emotional political opinions); when confronted with adversarial data we tend to become more entrenched in our positions.


That's the fight or flight response. We view intellectual questioning of positions we hold as an attack on our person. Our rationality goes out the window, our aggression ramps up, and it becomes impossible to back down. It's the main causative agent for flame wars.


pg wrote about that as well: http://www.paulgraham.com/identity.html


That's why always checking the origin of things is so valuable: asking why we currently do things the way we do them :)


He did mention Stephen Fry discovering a "bug" in his mind related to childhood trauma.


An associated problem caused by a change in the mental model due to reading is the "Curse of knowledge" principle - which essentially states that "....better-informed parties find it extremely difficult to think about problems from the perspective of lesser-informed parties."

Once you have read something and your mental model of the world is adjusted to include the new information, you have a difficult time understanding why others don't see what you see. This is compounded by the fact - as highlighted by pg in the essay - that you also forget how and when your mental model changes.

This is one reason why not every expert is a good teacher - as they fail to see the world from the point of view of students.

But it is also relevant and useful to remember this in the world of startups. Established large companies routinely get disrupted by novice startups - often because the experts at the large company fail to see problems the way novices do. It is impossible to become an expert at something while continuing to view the world from the eyes of a beginner.

[1] http://en.wikipedia.org/wiki/Curse_of_knowledge


>better-informed parties find it extremely difficult to think about problems from the perspective of lesser-informed parties

Reading this made me think of poker. Calibrating to the skill level of lesser players is often very difficult for intermediate and lower-advanced players. Being able to synthesize the less sophisticated thought technologies beginners are using is surprisingly difficult. Failure to adjust often leads better players to play incorrectly against newbies. Anyone who has experienced the frustration of beating medium/high stakes cash games only to lose in home games with your friends for 1/1000th the stakes will know what I mean.


This is also why it is so incredibly important to treat everyone's input with respect and consideration - you don't always know what you don't know, or someone could give you an experience or new input that reframes how you think about a particular thing. I think this is one of the elements of human interaction that I value and enjoy the most - being able to try to learn how other people think, and what they think about, and sharing how I think and what I think about.


If you haven't seen it, S01E03 of Black Mirror, The Entire History of You, deals exactly with what PG describes at the end of this post: technology that lets you review and relive your past. It's very much worth watching, and was recently added to Netflix's library:

http://www.imdb.com/title/tt2089050/

http://www.netflix.com/WiPlayer?movieid=70264856&trkid=33258...


I just watched the whole series, and I wanted to go ahead and recommend that everyone watch more than that single episode (although I think it's one of the strongest). It's only 6 episodes total (each season has 3 episodes).

I think this audience would especially like the series:

'Black Mirror is a British television anthology series created by Charlie Brooker that shows the dark side of life and technology. Brooker noted, "each episode has a different cast, a different setting, even a different reality. But they're all about the way we live now – and the way we might be living in 10 minutes time if we're clumsy."'


There's a new episode being broadcast tomorrow evening in the UK.


Without seeing your comment I made the same recommendation to one of my lead devs a few minutes ago with the same segue, having also recommended this article.

Fantastic episode.


> Your mind is like a compiled program you've lost the source of.

Hence the important art of keeping a journal. You can keep a transaction log of changes. The act of replaying the journal allows you to identify patterns in your thought processes and identify cognitive dissonances. The very act of reading should induce a reactive compulsion to write.

As Burroughs taught in his later creative writing courses -- in order to become a better writer one must first learn to read (I'm paraphrasing here).

Part of becoming a better thinker is learning how to think. In order to do that one must catch one's self in the act.


I'm definitely a fan of taking notes on things you've read (among other things, the effort to write notes makes you choose what you read more carefully). And I agree that it will help you remember the sources for various ideas.

But I think a journal is the wrong model (i.e. time ordered entries, either electronic or paper). I have used a paper notebook in the past, but I would rarely go back and look at things, and it's not searchable, and paper is not editable.

For the last 10+ years, I've used a Wiki. Hyperlinks are huge. They really do model the associations your brain already makes. I have wiki pages that are 10 years old and that still grow new associations. I think it takes a big load off your brain to have all that stuff written down, and searchable with ease. (I had to write my own Wiki to get it fast enough though.)


Why not have both? For my technical journals I keep while working I use org-mode in Emacs. You can slice-and-dice your entries in many different ways using that package.

However when I'm reading specifically I keep notes in paper journals. First I find the immediacy of pen to paper to be intuitive. Second, and more importantly, the serial nature of the journal forces me to explain my thoughts. It is this specific constraint that allows me to see the process I went through to arrive at my current self.

It's not indexable or searchable, that's true. However I don't need the process of reflection to be fast and instantaneous. I have a lifetime to work it out. I don't mind reflection and introspection to be slow and tedious.


Perhaps it's not the most 'private' solution, but you might enjoy keeping a diary at www.dabble.me: you get an email every day with an earlier entry (like from 1 yr ago). It's a great way to put things in perspective, tracking your old emotions and rediscovering old insights. I think you can run it on your own private server if you want to.

https://github.com/parterburn/dabble.me


I've heard a few people say they use a Wiki. Do you host it on your own server? Is it a public wiki, or a private one? I'm very curious.


It's a private Wiki, which started out as a Python CGI on shared hosting, but is now a WSGI app on a Linode.

I started it 10 years ago, and side benefit was that writing a Wiki is a good project to learn about web programming. The first version was of course riddled with XSS and escaping problems :-/

I think writing a Wiki is still a good exercise now. I'm not a front end person per se, but every programmer should know something about the web. I'm always a little taken aback when I meet some back end guy who doesn't know how HTTP or the browser works.

And IMO there is too much bloated JS on the web now. I think people forgot how to make a simple web app with a form and plain buttons. There are too many fast-moving frameworks, so just doing it "raw" (or to WSGI) is a good learning exercise.
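
(For anyone curious what "raw" WSGI looks like, here's a minimal sketch of a toy wiki-style app; the in-memory page store and the names are hypothetical, not the parent's actual code.)

    # Minimal "raw" WSGI app: a toy wiki serving pages from an in-memory dict.
    # Hypothetical sketch only, not the parent's actual wiki.
    from wsgiref.simple_server import make_server
    from html import escape

    pages = {"FrontPage": "Welcome. Edit me."}

    def app(environ, start_response):
        # Map the URL path to a page name; fall back to the front page.
        name = environ.get("PATH_INFO", "/").lstrip("/") or "FrontPage"
        body = pages.get(name, "(no such page)")
        html = f"<h1>{escape(name)}</h1><p>{escape(body)}</p>"
        start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
        return [html.encode("utf-8")]

    if __name__ == "__main__":
        make_server("127.0.0.1", 8000, app).serve_forever()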


My editor of choice is Vim. Vim has the vimwiki plugin, so your wiki is always available. I write it in markdown and then use a small gem wrapper (https://github.com/patrickdavey/vimwiki_markdown) to compile to html. Then on a git-commit hook, I rsync everything up to a webserver. Works really well for me :)


I don't use it for this, but I host my own MoinMoin instance; on any linux machine enabling apache is a couple of lines of config, and then moin is just download, unzip, and add a couple more lines to the apache config (and make sure mod_wsgi3 is installed). It's an old and probably horribly out of date version by now, but that doesn't matter when I'm only allowing access from 127.0.0.1.


I host a private MoinMoin wiki on my own server, as a place to collect links as well as thoughts. It's a fairly easy setup, and allows you to make sure that you can write down even private thoughts (well, as much as you trust your own security setup that is).


I use OneNote for this, although I can't comment on whether it will scale to your level of 10-year+ use. I have just one page (which had to expand into a set of pages) which has the chronological order of what I plan to do today / did today. The rest is organized into Sections as afforded by OneNote, with hyperlinking.

Search is indirectly via Windows Search / Indexing. Definitely, there are a couple of bugs with that and I have had to delete and re-build the entire search index in the past.

The problem with hand-rolled solutions is that you may end up working too much on the tool instead of your actual work.


I only recently started taking notes in my own books and already see the flaws with doing so. I like that you approached the problem with a more direct solution.

I wonder about future technology when PG says "Eventually we may be able not just to play back experiences but also to index and even edit them." I also wonder about the iterations between now and then.

What do you think could be an intermediate step (company idea) between your wiki and something like a searchable memory catalog that would do what PG describes?


ConnectedText, an offline personal wiki with some auto-generated hyperlinks, http://www.connectedtext.com/manfred.php

1945, As We May Think, http://www.theatlantic.com/magazine/archive/1945/07/as-we-ma...


I'm not really sure what PG had in mind -- his comments are a bit vague, although perhaps a good starting point for a conversation.

I think if you just use this system for a while, tons of ideas will come out. In fact you can generate a lifetime of work from it :) PG always talks about making something for which you are the customer.

EverNote is of course a company in this space. I used them for a while; it's nice that they have iPad clients and the like. But I found their products to be a bit complex.

Here are some random things I want to fix:

- mobile access. In the past 10 years, this was a huge change. My Wiki is basically read-only on mobile devices. I have to go back to the desktop or laptop to really write anything. This is OK but I imagine there could be some mobile interface for typing or speaking and hyperlinking. EverNote partially addressed this.

- Data model for bookmarks. Actually I sort of pooh-poohed the journal model in favor of a Wiki. But to be honest I've realized that a lot of things are links which I find on HN and the like, which need a date, "read" tag, and free form notes. It's basically delicious / pinboard, but with an emphasis on comprehension and connections to previous ideas.

- Unsurprisingly, I've found the need for a spreadsheet-like data model for finances, and certain kinds of research. Yes, I could just use a spreadsheet, but the hyperlinking and web hosting is huge. I use Google Spreadsheets now but would like something a bit faster and more under my control.

- I like having these notes as my data for all time. Delicious came and went in the last 10 years. A lot of the value for me is that it's personal, and not tied to any cloud service, which conflicts a bit with current business models.

- Search. Right now I have a fast full text scan with sqlite (a minimal sketch of that is at the end of this comment). I've wanted to write a script that would fetch bodies for all the links, put them in a full text indexer, and let me search quickly there. And maybe take screenshots with PhantomJS.

- Information curation. I have over 2000 pages now, and sometimes I probably forget to link back to stuff I should. I create duplicate pages by accident. This should be solvable with some hints.

- Stats on which pages I actually read. With 2000 pages, I need some kind of ranking now. Some are ancient/obsolete.

- Mirroring of my content hosted elsewhere... e.g. it would be nice to suck down my comments from HN and be able to search them later. I had a lot of good conversations on UseNet way back that I wish I still had :)

Not sure if any of these are good business ideas, but those are my thoughts :) A lot of them are partially covered by existing products. But the thing I am suggesting is for particular programmers to build the particular thing for themselves. Everyone's preferred mode of information organization will be a little different. It's nice to have something you made for yourself. I think it's a good exercise, because you can start very simple, and it gets you into the feedback loop of product use / product design / implementation.

But it's possible it could lead to a company. For me it led to some other technology, like a web server container variant of WSGI / CGI (both of these have problems), and backup / deploy stuff.

One problem is that companies are incentivized to go "wide" to acquire new customers. But I want to go "deep" into my own use cases. There are certain problems you only hit after you have 1000 pages. Probably 99.9% of EverNotes customers don't have that volume of content.
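
(Re the Search item above, a minimal sketch of the sqlite full-text route, assuming your sqlite build includes FTS5; the page contents are made-up examples, not my actual wiki.)

    # Minimal full-text index sketch using sqlite FTS5 (assumes FTS5 is compiled in).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE VIRTUAL TABLE pages USING fts5(title, body)")
    conn.executemany(
        "INSERT INTO pages (title, body) VALUES (?, ?)",
        [
            ("HowYouKnow", "notes on the pg essay about reading and mental models"),
            ("WikiIdeas", "fetch link bodies, index them, rank pages by reads"),
        ],
    )

    # Full-text query; bm25() gives a relevance score for ranking results.
    for (title,) in conn.execute(
        "SELECT title FROM pages WHERE pages MATCH ? ORDER BY bm25(pages)",
        ("mental models",),
    ):
        print(title)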


An interchange format will be needed for independently-developed deep apps.


Plain-text and a file system? See emacs' org-mode. It has many of the features parent listed.

Update: I'd even add that perhaps the solution is really to take the best ideas out of org-mode and give it a platform-integrated UI. It does basically what everyone here has been asking for and more... it's just buried in Emacs, which is definitely not for everyone.

Document sharing across devices is easy if you're comfortable sharing your content with a third-party like Evernote or Dropbox. I'd be more comfortable with something that's decentralized, encrypted, and versioned like a Git + Freenet backing store.


Yeah, for a long time my wiki was file-system based, and the markup is plain text, so you can read it without any software. (I've since switched to sqlite for query flexibility.)

I hear about org mode a lot when I talk about the Wiki, but I'm not an emacs user so it's not as appealing to me.

Can org mode export HTML? At least for reading, plain HTML is the format that works best across all devices, and gives fast access. Writing is a different story.


It can export to HTML, PDF via LaTeX, and a couple of others.

Emacs is definitely not the best editor for everyone. It's a shame that org-mode is sort of a secret weapon of Emacs. It's such a great package that I think it could stand on its own.


Just to chime in here.. I _really_ like keeping notes on books now. I put them in my vimwiki, write it in (github flavour) markdown, and publish statically. Very handy.


Ah, vimwiki looks perfect. I've been swayed away from hosting my own wiki every time I considered it because editing is such a chore (opening new pages, delayed feedback, etc), but vim is the perfect environment for one.


Would you consider writing up your wiki design choices, and how they differ from the typical wiki?


I'm not sure it's that interesting, but I'll write a little here. I have used it for 10 years, so at least it's tested :) But only by one person.

I wrote a long comment above about future ideas.

I think the main interesting things are speed and full text search.

The page loads in 150 ms, with the HTML loading in 75 ms (cold cache), using SSL. That's probably 5x slower than it should be, but it's also 5x faster than most products these days. To me, the speed makes a big difference. I decreased latency significantly 4 years ago, and I know my velocity of note taking has gone up, and it has paid off in terms of increased velocity/organization of the projects I plan with the wiki.

I think it took me 6 to 8 years to get to 1000 active pages, and somehow I have 2123 active pages now, after 10 years (although I do delete/archive pages, so this number is fuzzy).

Other features:

- I have a JavaScript "jot" button on my browser toolbar that submits the title and URL of the current page in the browser, and then opens up a form for jotting notes. Then this is appended to a selected wiki page.

- The notes are pretty heavily indented, outline-style, so the markup makes that easy.

- You can edit individual sections of a page, like MediaWiki.


I use Anki to remember and memorize the written info. Pretty effective.


It is really important to keep a journal. It is even more important to review and reflect using your journal.


I really enjoyed Paul's compilation analogy. It reminds me of a quote by Robertson Davies. "A truly great book should be read in youth, again in maturity and once more in old age, as a fine building should be seen by morning light, at noon and by moonlight."

It also reminds me of that MIT paper that gives advice on how to do research. The part where it talks about why, when your colleague gives you a paper to read and says it's particularly poignant, it doesn't seem like anything special when you read it. Maybe it's because your colleague had dependencies in his state of mind that you did not have in yours, so it didn't seem as memorable to you as to him when the code compiled.


Yeah, his analogy extends to art pretty well. Let's say we use Tolstoy's definition of art, that art is about communicating a feeling to others via a medium.

There are times in our lives where our state of mind makes us more likely to be moved by a piece of art.

It's why you should revisit your favorite books from your youth - you'll often find the same words mean completely different things later on.


Care to link the MIT paper ?

edit: found it

http://web.cs.dal.ca/~eem/gradResources/MITAIResearch.html


Most people equate the term "memory" with what is more accurately termed episodic memory - little movies in your head. Most people can't remember when "Christmas" was first defined for them, but they can rattle off many things about it - the date, the religious meaning, the corporate meaning, etc. This is semantic memory, and together they form your conscious explicit memory or declarative memory (there are differences between the two that are not relevant here). The brain often throws away the episode but keeps the concept, and that is what Paul is talking about here.

But there's more to it than that. Your unconscious implicit memory includes things you can't even articulate. That's the difference between the date of Christmas and how to ride a bike: the latter is nondeclarative. Learning a different way to ride a bike, or approach programming, is even more difficult than recomputing semantic memory.

You can (and should) read new books and gain new episodes to base your facts and opinions on. Read diverse material with abandon. But when learning something nondeclarative, like a weight-lifting technique, it can be well worth seeking out an expert and learning it right the first time. With nondeclarative memory, what you don't know can hurt you.

For more on the science and classification of memory, the Wikipedia page is as good a starting place as any.

[0] http://en.wikipedia.org/wiki/Memory


Why not take notes? Whenever I read a book I want to remember, I just pencil a dash in the margin next to any key fact, insight, or quote. Then after I'm done with a few chapters I retype these sections into a mindmap. It probably only adds 10 - 20% extra time, but you're getting 1000% more value.

In general what matters isn't how much you read, but how much you retain and what sorts of connections you make with past and future insights and information. It's important to have the full experience of having realizations and making connections while you're reading, which is why I just make a dash in the margins as opposed to taking actual notes in real time, but I feel like by not circling back later you're cheating yourself out of the true value of learning.

Especially since you have no idea if the books you're reading are even true or not until you vet the facts with primary sources.


The problem with taking notes is that you'd have to do so for literally every book or article or video you ever watched, and then you'd have to refer to it every time you remembered any facts from them.


This comment will run the risk of sounding condescending, but I believe it's true so I'm going to post it anyway.

Thought processes like the ones captured in pg's post are fostered by education in critical analysis--the sort of analysis that one learns in the humanities. Art, literature, philosophy, history, etc. are the products of human thought, and learning to critique them is in part an exploration of how humans think. Not the physics or neurology, but how influences can shape each person's mental model.

Part of this is exploring the influences that affected the mental model of the person writing or creating the art. Another is exploring the mental model(s) that the artist or writer sought to create. (This is what we experience when we "get into" a book.)

So, if you're looking for a reason that CS or engineering students should take humanities courses, I think one is illustrated in this post: it teaches you how to read books consciously. It gives you a framework for exploring how the thoughts of others (and therefore yours as well) are influenced and shaped by the information that is consumed during a lifetime.


> And yet if I had to write down everything I remember from it, I doubt it would amount to much more than a page.

This was reassuring to hear from someone else, because I've had this exact feeling about books I read, films I've watched, conversations I've had, work projects I've completed, etc. This is true even in cases when I was completely engaged in, for example, reading the book, and the book left a positive impression on me.

I've always felt guilty about this, especially when I see others who don't seem to have the same problem when they talk about the books they've read, etc. I've also found that recall can be greatly improved by repeatedly talking about the specific topic with multiple people.

The strange thing is that I have an excellent memory for certain things - information about people and relationships. In light of our evolutionary history as a social species, perhaps this is not so surprising after all.


I've found that when I reexpose myself to the same subject or book or whatever, I recall or relearn the information a lot more easily than I did the first time. A lot of times it was just in cold storage the whole time.


I remember reading something very similar to this (but can't remember where, ha), where it said the important thing about reading is how it affects your general thinking rather than the individual pieces of information that you're likely to remember (or not).

I spent several years reading a ton of different books on economics, and while I can recall very few facts from those books, they completely altered my world view on many things.

pg's analogy of a program where you've lost the source code doesn't feel quite right, because you can't make modifications to the program without the code. Some sort of machine learning model seems more appropriate, where you've lost the original training data but can still update the model later with fresh data (a new book), and end up with a better/different model, but then lose that training data again.


I think a machine learning model provides a nice version of Graham's "The same book would get compiled differently at different points in your life."

Using an artificial neural net analogy instead of a compilation analogy: "The same book would optimize your neural net towards a different local minimum at different points in your life."
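To make the analogy concrete, here's a minimal sketch using scikit-learn's incremental learner (my choice of library; any model with partial_fit would do, and the data is random and purely illustrative) - each training batch is thrown away after the update, but the learned weights persist and keep being refined:

    # Minimal sketch of "a model that outlives its training data".
    # Assumes numpy and scikit-learn are installed; data is made up.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier()

    # "Reading a book": train on a batch of experience, then discard it.
    first_book = rng.normal(size=(200, 5))
    first_labels = (first_book[:, 0] > 0).astype(int)
    model.partial_fit(first_book, first_labels, classes=[0, 1])
    del first_book, first_labels          # the "source" is gone...

    # ...but the fitted weights persist, and a later book keeps refining them.
    second_book = rng.normal(size=(100, 5))
    second_labels = (second_book[:, 0] > 0).astype(int)
    model.partial_fit(second_book, second_labels)

    print(model.coef_)   # the model remains even though the data does not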


While I was reading, Borges's short story "Funes the Memorious" came to mind. It's about someone who can't forget any detail. He remembers absolutely everything, and every instance of it through time. At some point in the story Borges conjectures: "I suspect, nevertheless, that he was not very capable of thought. To think is to forget the difference, to generalize, to abstract. In the overly replete world of Funes there were nothing but details, almost contiguous details."


Great passage. As a digital artist who works with fractals, that really resonates with me. Visual fractal detail quite often converges to visual noise (and looks remarkably like a noise function as expressed on e.g. a TV set). I usually need to remove or de-emphasize that noise in order to clarify the direction and abstract intent of the work.

One of my favorite films that works along these lines is the 1998 Japanese film "After Life," in which a small party of workers attempt to recreate others' memories with very basic film studio equipment. I absolutely treasure the loss of detail in the various recreation scenes, and the way it suggests that there is actually a satisficing point at which we might realize, "yes, I'm actually reliving that memory right now." So I agree with Mr. Graham's conclusion that technology can bring this about.

On an unrelated note, PG's essays always bring to mind the Myers-Briggs INTJ type. Essays about the annoyance of accumulating "stuff", a focus on abstract / intuitive learning styles, and clever writing which quickly establishes a theoretical framework which is then thrown against the world's (audience's) experience, rather than starting from first principles hoping to eventually reveal a framework as others might do. His seems to me very much a "systems thinker" approach.


I've noticed that whenever truly original thinkers encounter a problem, they'll quickly establish a workable model—even if it's known to be flawed or wrong—just so they can begin testing it “against the world's experience.”

(I had no idea this style of thinking was associated with INTJ types.)


My AI prof once wrote on the board "Learning is generalization". It makes sense: if the only way you're reasoning is by attempting to retrieve things from your cache, it's rote learning and you're not able to deal with situations you haven't encountered before. And even if your cache is super-extensive, all you're doing is overfitting.


I have an interesting take on this. Most of the books I've read, I have a copy of. A while back, I endeavored to cut, scan and OCR them all into my computer. One idea was then I could do a full-text search, limited to what I've already read rather than what google thinks is relevant.

So far, I've found it very handy to find something if I at least remember which book it was in. But I need a program that can extract the OCR'd text from .pdf files - anyone know of a simple one?

(I can do it manually, one at a time, by bringing it up in a pdf reader, but that's too tedious and slow.)


This is a great idea. Full-text search for "my knowledgebase", books I've read, things I've written, etc. is an area with potential that still seems unfulfilled.

Some ideas:

- Apache PDFBox https://pdfbox.apache.org/ (command line: https://pdfbox.apache.org/commandline/#extractText)
- XPDF has a command line tool you can use in Windows - http://www.foolabs.com/xpdf/ - pdftotext
- If you're going for accuracy, Tesseract is one of the most accurate: https://code.google.com/p/tesseract-ocr/
- Apache Tika is often used the way you suggest: http://tika.apache.org/
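If pdftotext does the job, a rough batch sketch for running it over a whole folder of scans (this assumes poppler's pdftotext is on your PATH; the folder name is just an example):

    # Rough sketch: extract text from every PDF in a folder with poppler's
    # pdftotext CLI, so the output can be fed to a full-text indexer.
    # (Assumes pdftotext is installed; ~/scanned-books is a made-up path.)
    import subprocess
    from pathlib import Path

    library = Path("~/scanned-books").expanduser()

    for pdf in sorted(library.glob("*.pdf")):
        txt = pdf.with_suffix(".txt")
        if txt.exists():
            continue                      # skip books already extracted
        subprocess.run(["pdftotext", str(pdf), str(txt)], check=True)
        print("extracted", pdf.name)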


Abbyy Finereader (paid, Windows) is one of the best OCR programs, most of the books on archive.org are OCR'ed with Abbyy.

If the PDFs already have OCR text, calibre (GUI or CLI, Linux or Windows) can convert to .txt and many other formats. The recoll.org search engine will index PDF files that have OCR text.


Thanks guys, this is great info!

(I just tried pdftotext, and it does just what I wanted.)


I will never understand why this guy's essays are so revered. I'm expecting some profound conclusion, but the only message of the essay is "reading and experience form mental models." Well, duh! What's worse, he doesn't support his claims with evidence, besides a single anecdote.

Am I missing something here?


Well, Paul's essays aren't all meant to be mind-shatteringly revolutionary. He simply likes to share his thoughts and advice on a wide range of subjects that have helped shape who he is and what he knows, and the general public respects both him and his essays because a) he's got the credibility to back them up, and b) it shows in his work and in his writing.

I personally think this essay is some pretty nice food for thought.


I have a problem with sharing every thought that pops in your head. If you don't have something new to contribute to the discussion, why add to the noise?

As for "food for thought," have you really never thought about how reading and experience shape your beliefs? I thought this was pretty basic stuff. Maybe I'm wrong.


What I found shocking here is how casually pg talks about what I believe is the fundamental point of philosophy: the mapping between our mind's inductive model of the world and our deductive one.

>Reading and experience train your model of the world. And even if you forget the experience or what you read, its effect on your model of the world persists.

Here, he is pointing out that the relevant information you perceive, your empirical data, is only retained insofar as it affects your deductive model of the world, that is, the model we use to determine truth and falsity. The rest of the data is generally trivial. This is a very sensible insight in my mind, and kudos to him. The dance between empirical data and deductive truth is one of the most difficult things for me to get my head around. This as a model for data retention is something I'd not thought of.

>Eventually we may be able not just to play back experiences but also to index and even edit them. So although not knowing how you know things may seem part of being human, it may not be.

Here, I find this problematic. In Soros's terms, the mind is reflexive. Thus, in reviewing the data, we are experiencing new data. If we edit our thoughts, do we not remember editing them? I don't see a way to take away the reflexive nature of self-examination: in creating changes, we create new data about the changes.


One of the most significant books I ever read was A Walk Across America by Peter Jenkins. He graduated college in the 70's, wasn't sure what to do, and decided to walk from New York to the Pacific ocean. This book covered the first half of the walk; he wrote it while taking a break in Louisiana along the way.

That book was hugely influential to me. I graduated college and spent two years teaching. The summer after my second year of teaching, I had no obligations to anyone else for the first time in my life. I remembered Peter Jenkins' story, and decided to bicycle across the US. I knew I wanted to travel under my own power as he had done, but I wanted to go a little faster than he did. Bicycling was perfect for me. I ended up doing two cross-country trips over successive summers, and then I spent a year living on my bicycle, circumnavigating North America.

I reread A Walk Across America some years after doing my own trips. I was amazed at how bad I thought the book was. pg observes that

The same book would get compiled differently at different points in your life.

This is absolutely true. Now that I'm in my 40's, I'm going to go back and reread the most influential books of my 20's. I might even have to change my HN username after doing so, but I hope not.


May I ask what in your perspective on the book changed? What made you think it was so great the first time and so bad the second?


I don't like the flavor of this post. It feels very much like navel-gazing and, if it wasn't for the domain name, it likely would have been lost to /newest.

Where is the knowledge here? That we don't have immediate recollection of retained information? Knowledge is based on a beginning and ending context.


For any thought there is someone who will find it obvious, but that doesn't mean we should discard it; instead we should consider it in relation to the audience. It only takes a cursory reading of the comments here to conclude that this audience found it not just novel, but helpful in forming their own ideas about knowledge - a hallmark of a worthwhile piece.

Secondly, your summary of pg's post doesn't do justice to its contents. To me it's about the distinction between mere reduction of information, akin to jpg compression, and building a model from it. A model is Turing-complete, a jpg is not. The distinction is important, in that it directs the effort required to correct misconceptions later in life.


Which makes me think of a saying. One that I remember. "Sophia Loren without a nose is not Sophia Loren". Here's another one "if my grandmother had balls she'd be my grandfather".

The point is that PG wrote it, so, just as with anything written by someone of note, it attracts more interest than the same thought from anyone else.

After all, these are analog, subjective thoughts; this isn't science.


When a renowned writer writes something weak, it doesn't give credence to the piece, it takes credence from the writer.


I'm also surprised people actually find the contents of this blog post insightful, but I suspect it has far more to do with who wrote the article.

I mean, is it not obvious that you can take away new ideas from reading a book multiple times at different stages of your life? As a simplified example, movies with twist endings hinge on exactly that fact -- armed with new information, events you have already experienced take on new meaning. More "important" things will have more significance, but it's the same idea.

Is it not obvious that your own world views are the result of your own experiences and others who you have contact with, even if you cannot precisely remember everything that would lead to that world view?


No, those things aren't obvious FYI.

And if your summary was "boy, new ideas from different stages" you kind of missed it.


It's not that you forget the content, it's that you forget how to phrase it concisely. If the author needed 70, or 200 pages to explain a concept, and you can at some point raise your hand and claim 'I get the point', it's not reasonable to expect a 12-word summary. What do I remember? Hard to put into words. Likewise, it's not reasonable to expect a perfect memory, reciting paragraph after paragraph of the original text.

If you really can summarize a book in a sentence or two, wouldn't the author have done that already?

Maybe it's time for me to reread Cryptonomicon. There are parts of that book I have absolutely no memory of, flipping through it, yet other parts I remember all too often (bicycle sprockets, comets of pee, bisecting alligators, Van Eck phreaking).

(also... > seige warfare ?)


> If you really can summarize a book in a sentence or two, wouldn't the author have done that already?

Not only is this possible to do, but it's often done. The problem is that it's not necessarily useful or sufficient to hear a mere summary of something.

For example, let's say I tell you that "Idea X is important." That's a simple idea, right? It only took me four words to express it. But do you believe me? Probably not, because I haven't spent any time or effort convincing you that idea X is important. And do you understand what idea X is? Probably not, because I haven't spent any time or effort explaining that. Etc.

Even if you can summarize it, you probably need to write the entire book for people to get the background information necessary for them to find your summary useful, otherwise it will go in one ear and out the other.


I'm gonna go out on a limb and say this has absolutely nothing to do with what PG was referencing in the article. I'm guessing he could at least give some broad overview of the book he's referencing. The issue seems to be that it seems so small in comparison to the book itself.


Well, exactly. The book as a whole accomplishes much more than a brief summary of it does, which is why it feels bad to lose all of that additional information.

I disagree with the parent that you still remember the content but can't summarize it concisely. I believe the opposite: you forget the specifics but retain the ability to summarize them.


This essay reminds me of a NY Times essay: http://www.nytimes.com/2010/09/19/books/review/Collins-t.htm...

Interestingly, if shown a series of hundreds of images, we wouldn't remember many in the list. But if we're shown alternates (was it a goldfish or a watch?), we would instantly recognize the item.

We didn't forget, we just couldn't access the memory on demand. The conclusion is the same: it's there, influencing us and adding to our lives, even if it doesn't feel like it sometimes.


I've been thinking about this very issue recently, and coincidentally started working on software two days ago to help manage the problem of remembering things that I've read. Obtaining information in 2015 is remarkably easy. Retaining it is damn near impossible, at least for me. I read books and bookmark links from hn and reddit on a daily basis, consuming constantly. But I find that I recall very little of it. I don't know if Stephen Hawking was right about black holes destroying information, but my bookmarks folder comes pretty close. Links go in and then are never seen or heard from again. I take copious book notes and type them up, only for them to be consigned to the void of my hard drive file system. I've tried evernote and anki and several other tools, but it's always a one way ticket. My trouble isn't remembering what I've read, but remembering to remember. No matter how I've tried, I can't change my daily workflow to set aside time to review the notes and information that I've already collected, rendering it useless.

If I had a magic device that recorded all of my experiences, it wouldn't do me much good because I'm too busy collecting new experiences to be remembered. It would be great to be able to search for details and trivia, but I wouldn't have time to peruse the archive to refresh myself about things that I had forgotten completely. Much in the way that google lets us search for and recall anything, except the things we don't remember the name of.

I'm going in the direction of reminding myself about things that I previously read or bookmarked, especially as they tie in to what I'm currently reading. I think one part of the solution is to display existing bookmarks and typed up book notes to myself in a near random fashion. It's not the most sophisticated solution, but at least they won't be lost and I'll have a chance of reconnecting with something and establishing more anchors in my memory. I think a plugin that relates past content to the current page might be a good idea, ie for this page I could see any previous bookmarks that involve memory and retention. And generally reminding myself to review things I've already learned, even if they don't seem relevant at the moment.
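As a very rough illustration of that "near random" resurfacing (nothing more than a sketch; the notes directory is an assumption about where the typed-up notes might live):

    # Minimal sketch: resurface one randomly chosen old note or bookmark file.
    # (~/book-notes is a hypothetical directory of typed-up notes.)
    import random
    from pathlib import Path

    notes_dir = Path("~/book-notes").expanduser()
    notes = sorted(notes_dir.glob("*.txt"))

    if notes:
        pick = random.choice(notes)
        print(f"--- {pick.name} ---")
        print(pick.read_text()[:2000])    # show the first couple of thousand characters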

I don't have any great ideas yet, but I've been coding like heck for the past few days to try to take small steps toward a solution. I've been on a quest to make my brain work better, and this essay has definitely given me some ideas and helped to push me along.


> It's not the most sophisticated solution, but at least they won't be lost and I'll have a chance of reconnecting with something and establishing more anchors in my memory.

Go for it; simplicity works. I've pushed a few of my favourite passages to a simple web page which I can flip through randomly (http://www.jasimabasheer.com/amateur_reading/serendipity.htm...). As a bonus I can also link to it when relevant discussions come up.


Google's +1 is pretty cool in this respect. If you +1 articles you read, their search scores will be boosted for your subsequent searches, so you'll be more likely to stumble upon them if you are looking for something related (but have forgotten about what you read).


Schopenhauer said it first and better:

"However, for the man who studies to gain insight, books and studies are merely rungs of the ladder on which he climbs to the summit of knowledge. As soon as a rung has raised him up one step, he leaves it behind. On the other hand, the many who study in order to fill their memory do not use the rungs of the ladder for climbing, but take them off and load themselves with them to take away, rejoicing at the increasing weight of the burden. They remain below forever, because they bear what should have bourne them." -- Schopenhauer


I am in the middle of reading a fascinating book that discusses how the brain processes, interprets, and retains information as it passes from your Sensory Information Storage to your short term memory to your long term memory, as well as how you retrieve information from your long term memory. (The Psychology of Intelligence Analysis)

The book was written as a guide to CIA Analysts to understand the limitations their own filters and mental models place on new information they process.

One important point that I find applies to this essay is that the way we retrieve information is through schema that associate various memories with each other. Creativity is about mapping new pathways through your memory or applying other patterns and schema on top of existing memories.

So, reading a book a second time, or even "forgetting" what you read, can not only give you new patterns and schema to apply to your other mental models and memories, but also stimulate creativity.

I highly recommend everyone read The Psychology of Intelligence Analysis... or if you want the abridged version you can read my brief recap. http://www.davidmelamed.com/2014/12/05/internet-marketers-ci...


This goes both ways.

Sometimes I avoid rereading books for the same reason. There was a Summer where Catcher in the Rye felt very important. I'd hate to reread it under my new, adult perspective. I'd prefer to let it linger in nostalgia.


I had the same thing with the much less erudite Stranger in a Strange Land. Re-read it 20 years later and was disappointed. On the positive side, I should consider how much I've moved forward in that time :) or sideways :-S


The nice thing about Heinlein is he has a book for all stages. As I've gotten older I really prefer his "Time Enough for Love" novel, which is about looking back on a long life.


The same thing happens to me with video games on a regular basis. Younger me simply had much lower expectations and poorer taste, back when a NES controller (with all of four buttons and a D-Pad) sufficed.


A few years ago I started an annual tradition, where around the new year I'll re-read the Pragmatic Programmer. This year will be the third year, and I expect I'll continue to gain new insights from future rereadings.


How long does it take you to get through it? I struggle with reading non-fiction books, and gave up around 100 pages in the last time I tried it. Sucks, because I've heard it's one of the more readable books, as programming manuals go.


Under a week. I've coincidentally also had off from Christmas-New Years the past three years, though, so I have plenty of free time. I also read a lot of nonfiction, and read a couple of technical textbooks every year.


This is a great essay. A few points:

1. The more correct analogy would be training data and machine learned model rather than source code and compiled binary.

2. A lot of people move from book to book, always reading something at any point in time. This provides a great dopamine hit to the brain and keeps boredom away. However, one needs to reflect on what they read to gain any significant takeaways. The act of reflecting enforces recall, which in turn induces analysis and memory storage. I try to write a book review to formalize my reflection process after I complete a book.

3. These days I also get digital (usually pdf) copies of most books I'm reading. This allows me to use tools like GoodReader to highlight striking statements and make notes of my opinions as I read along. You can of course just use a pencil and the margin of the book :). This habit has rewarded me greatly because it makes me take a pause and think about what I read. I can come back to the book anytime and refresh it 10X faster. It's also fun to know what my opinions used to be on some of these things years ago.


This makes a lot of sense. However, a machine learning model + data analogy feels more satisfactory (and accurate?) to me. We throw away the data but our model as well as its parameters are retained. The model as well as the parameters get refined with experience, and it's possible that the model is recursively made up of multiple models and that the combination is governed by parameters which are also governed by experience. Realizing this has always been fascinating to me, and it makes it clear that exposing the brain to more data in pretty deliberate ways can yield profound results. The "smarter" you are the less data you need. If you are not so smart/knowledgeable you need more data. The data you need is also specifically of the question-answer form, i.e. examples of what it is you are trying to learn. Anyway, before I go down the rabbit hole, I think these metaphors are extremely helpful for the purpose of self analysis and improvement.


"The same book would get compiled differently at different points in your life. Which means it is very much worth reading important books multiple times. "

I also find that this rings true with movies as well.


I experienced this phenomenon starkly when I started a book, and felt it was filled with obvious remarks and little novelty. It took me 50 pages to realize I had already read it.


This is the primary downside of reading without a bookmark. Any book I've read on public transportation, I would guess on average each page gets read about twice (with some chapter intros reaching double-digits). It's hard to figure out if it's really something you've read before, or if it's just a very logical continuation of what you read yesterday.

I also feel obligated to one-up you. One time in high school English class, the teacher put a sample student essay up on the projector and picked it apart in front of the class. It took me half an hour to realize that I had written the essay in question, and by that point I had concluded that I did not completely agree with it. I learned something that day about writing.


>Any book I've read on public transportation

I suspect this is also a key part of your inability to determine whether or not you have read a given page before. You are reading in an environment that does not lend itself to creating notable memories. This is purely personal conjecture but I suspect that if you went and read somewhere more interesting your memory of what you are reading would magically improve.

Books I read while at home or similar tend to disappear into some sort of memory hole. Meanwhile books I have read while visiting other countries tend to be easier to recall, both in terms of the book's contents and the circumstances I was in when reading it.


Weird. I always notice from the first sentence or so if I've already read a page, so it's relatively quick to find the page I was at with binary search...


that is a great story!

I saw the book at my friend's house and thought it looked interesting. After borrowing it and reading 50 pages I realized I had it on my shelf.


"What use is it to read all these books if I remember so little from them?"

Well, for one thing, it gives you pleasure. You could also say "what's the point of listening to music" or "what's the point of watching comedy". Other than pleasure, as a generality, you might read because it makes you feel good to do so.

I find that a key to good mental health (that works for me) is not to question what harmless things make you feel good and why. If I did that it would make me unhappy. Just go with the good feeling.

One thing that I'm sad about is that I don't get the same pleasure that I used to from browsing books at Barnes and Noble. With the internet there is too much to read already. I don't find the same utility that I used to from books that are essentially a single perspective (at least the ones that I used to buy, non-fiction).


He wasn't actually questioning the value of reading. That question was only a rhetorical device that led into his main point. It was put there as a straw man to start the process of disproving that there's no value to reading if you don't remember details. And that it's unnecessary to lament not remembering details.


This theory explains why many smart people also read a lot. Not necessarily books or even any body of text, but reading as a generic behavior to find meanings in various objects and events in life (books, I feel, are just one of the easiest things to read). Because they read a lot, they have broader and deeper mental models, which they use to model new/different things/events well.

Also, in my experience, what sets the smartest people apart from "just" smart people is their ability to retain the how: not only do they have broad and deep models, but they also know how these models are built and can adapt them quickly as they acquire new information.

Most people need to run a disassembler of their compiled thoughts, and after a certain point in life, their binaries are so bloated that they can't decompile them at all.


Having read Carl Sagan's Cosmos in my youth, I always feel guilty re-reading a book:

“If I finish a book a week, I will read only a few thousand books in my lifetime, about a tenth of a percent of the contents of the greatest libraries of our time. The trick is to know which books to read.” ― Carl Sagan, Cosmos


Don't feel guilty; Sagan's quantitative approach is a terribly shallow view of reading. If you feel you want to read something again it's because you expect to get something out of it - perhaps to refresh your memory, perhaps to pay more attention to the subtext of the work, perhaps to study the author's literary or rhetorical techniques. You wouldn't assume that you had learned everything about a complex musical composition or painting from a single listening or viewing, so why assume you've learned everything worth knowing from a single reading of a book?

The only reading I ever feel guilty about is my aversion to leaving a book unfinished. I'm pretty good at picking what to read, but about once every year or two I encounter some real stinker that is a literal waste of my time, and I feel a bit annoyed with myself for plowing through to the end even though I have long ceased to expect any literary or intellectual payoff.


I think it was Thomas Hobbes who said "If I had read as many books as other men, I should have been as ignorant as they are."


Kind of a tangent but I love this quote of his.

"A book is made from a tree. It is an assemblage of flat, flexible parts (still called 'leaves') imprinted with dark pigmented squiggles. One glance at it and you hear the voice of another person - perhaps someone dead for thousands of years. Across the millennia, the author is speaking, clearly and silently, inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people, citizens of distant epochs, who never knew one another. Books break the shackles of time, proof that humans can work magic."


    Reading and experience train your model of the world. 
    And even if you forget the experience or what you read, 
    its effect on your model of the world persists. Your 
    mind is like a compiled program you've lost the source 
    of. It works, but you don't know why.
I often feel like that, but much more in regard to people than to books and experiences. It's strange, how much others have formed me, but how little I do remember about them. How little I remember about my parents, but how big a part of my compiled program they are.

Of all the time I spend with my daughter, of all the activities we do, she probably will not remember much in a few years, but at least I can hope it will have an effect on her model of the world that persists.


When I was a freshman in college, I was in a humanities class that focused on the intersection between humans and machines. We had an assignment to build and test out a "prosthesis", i.e. a technology that extends human capability, in Second Life.

I created a simple wristwatch accessory that was scripted to upload a copy of all of your chatbox text to an external service. Later, you could log in to this external service and search through a history of all of the conversations your character ever overheard.

Real-world versions of this technology appear inevitable as digital storage costs trend to zero. A rudimentary digital copy of the physical world is being created in services like Google Maps. The Google self-driving car records a 3D copy of its surroundings with accuracy at the centimeter level. Dropcam uploads video and audio data from within your home to cloud storage.

A world with fully recorded life experiences seems creepy at first blush, but I believe we'll discover a mechanism for trust that will allow everyone to safely record a digital copy of their lives that is inaccessible to third parties. Perhaps in the future we'll each own an open-source private cloud container of CPU and storage resources. Instead of processing your data on external servers, third-party services might provide code that runs in your own container under tight network permission restrictions. Such a system might be able to maintain the benefits of continuous software deployment while allowing consumers to keep their data under their control.


The problem is figuring out which books provide those useful mental models. I found that fiction usually doesn't, but a list with recommendations in the comments would be great.


A good article on the importance of fiction from a scientifically validated point of view: http://www.bostonglobe.com/ideas/2012/04/28/why-fiction-good...

Fahrenheit 451 - Ray Bradbury
L'Etranger - Albert Camus
Frankenstein - Mary Shelley
Metamorphosis - Ovid
Oryx and Crake - Margaret Atwood
The Picadilly Papers - Charles Dickens
Permutation City - Greg Egan

Fiction allows us to experience the most intimate thoughts of people we've never met in a way we cannot emulate in reality. We can visit places we've never been to and experience situations we'd try our best to avoid. We sit for hours hallucinating vividly reading these stories as we download these characters, concepts, and ideas into our meat. And if the story resonated with us we walk away a different person: new connections in our synapses, reinforced signals in existing ones. Stories are one of the most powerful tools we have at our disposal; perhaps even more so than mathematics or computation.


I agree that a good fiction book can be an eye opener. I'll give the ones from your list that I haven't read a try. By non-fiction I didn't only mean science but also historical novels and biographies. I find they come with connections and lessons that no human could have come up with.


Try the Penguin Classics hardcovers. Frankenstein for example[0] includes an amazing abridged version based on the original manuscript and later revised editions, as well as a very engaging historical account of the author's life, times, and influences when she wrote it. These are also important things to understand about a story and can give deeper insights into its inner structure.

[0]http://www.penguin.com/book/frankenstein-by-mary-shelley/978...


The Conquest of Happiness - the greatest men had the same problems we all do and managed to be great in spite of it

El Aleph - how small we are in this universe, and much much more

Steve Jobs bio - how to focus


Neal Stephenson's Anathem felt like the most accurate capture of what being a Cambridge mathematician felt like.

The thing that made a lot of history make a lot more sense to me was playing Civilization. I don't know how accurate the details are, but the fundamentals of diplomacy don't change a lot.

Honestly most nonfiction feels like it was telling me what I already knew, or bringing new facts but no new ideas. I can't remember any that actually changed the way I think.


This raises two interesting questions to me:

1. Are technological advances required for re-living experiences? Wouldn't (some forms of) meditation achieve similar results? Personally, I have found myself remembering many past events, a few days into the start of meditation.

2. When we re-read books, we often choose to re-read those that we liked. But could there be some benefit in re-reading books that we didn't like (and that surpass a minimum threshold of quality)?


I strongly advise pg to go find a DVD of Black Mirror, a horribly under-rated TV series from the excellent Charlie Brooker - the episode on replaying one's memories suggests just reading will be a lot less troublesome!

I do suspect we will be less likely to record our lives for later playback than to have them analysed at or near the time for feedback on how to improve. Twitch TV is (I am told) full of streams of top rated people playing WoW and commenting on their actions (so others can learn, or be entertained). It's probable that there are shows now or soon that have players commentating on other players' streams, and it's a fairly short leap from that to commenting on videos of me training my dog, or performing reps, or basically anything in the life coach / therapy repertoire.

Audio and visual analysis already allows therapists to zoom in on the important parts of observed patients (certainly in sleep therapy) and will only get better.

Whilst the unexamined life is not worth living, there is no reason you have to be the only examiner. We shall all have our own life long therapists.


>go find a DVD of Black Mirror

It also recently made it to Netflix. I've been hearing about it all over among my friends the last few days as they've been watching it there (and watched the first couple of episodes myself.)


I hate this: "What use is it to read all these books if I remember so little from them?"

Because reading is enjoyable?


Did you read the article? There are multiple reasons for reading, and though many of them feed into enjoyment, it's not always the end.


I did read it, and found it trite and obvious. My problems with it were essentially summarized by the particular sentence I quoted.


I found it obvious, which pg admits, until this quote: "The same book would get compiled differently at different points in your life."

A nice analogy, as others have pointed out, and that is the thirteen-word summary I'll remember from this essay. Not bad, considering most thousand-page books will be compressed down to a page of take-away memories.


> Intriguingly, this implication isn't limited to books.

We can see it clearly with the functional paradigm renaissance right now. The same arguments already existed 40 years ago, but _something_ changed recently in how some people perceive the functional paradigm.


Probably because we hit a threshold where dealing with mutability started to get unmanageable.

Another area of interest is "microservices", for the same reason.


Perhaps because OOP was the big thing for so long, younger people learned on OOP. Those arguments from 40 years ago were either not taught or not emphasized. As a result, functional seems fresh and interesting. Throw in a few problems that become trivial with it and people get very excited and it becomes the "new thing".


I started reading books about a year ago - I've read about 16 this year, and I am not sure if it is just a phony hunch or a real thing, but I feel reading has helped me a lot in programming.

I am able to grasp things pretty quickly, I am able to link two different things to get ideas to solve problems, and also, I have grown more confident in approaching challenging problems.

Albeit with only anecdotal evidence, I believe taking an interest in a wide variety of fields may not give immediate benefits, but it helps you in ways you don't imagine. The very thing I used to worry about - not focusing on specialising and hopping from one thing to another - is what I think has helped me grow my skills in programming in general.


I think this is part of a larger point. Books aren't just collections of facts. Deleuze and Guattari perhaps said it best in the introduction to "A Thousand Plateaus" - "A book itself is a little machine..."

They then go on to say: "We will never ask what a book means, as signified or signifier; we will not look for anything to understand in it. We will ask what it functions with..." Books are machines you plug into your understanding of the world and they either have an effect on you or they have no effect at all. What and how a book plugs into your understanding and works on it is more important than the content of the book itself under this view.


> "when Stephen Fry succeeded in remembering the childhood trauma that prevented him from singing"

Does anyone know what this is referring to? Searching for "stephen fry singing trauma" doesn't return anything useful except pg's essay.


he seems to mention it in his autobiography

http://www.amazon.com/The-Fry-Chronicles-An-Autobiography/pr...

(search for "singing")


Interesting, thanks. Somehow it seems that Google didn't index up to that page.


This is a really interesting perspective on cognition, and it all kind of makes sense if you consider the brain to be a black box pattern recognition machine with various built-in biases.

New data is always added to the model, but not in an entirely rational fashion. The updated model is likely to slightly overfit new data ("compiled at the time they happen"), and particularly salient bits and pieces of old data (see https://en.wikipedia.org/wiki/List_of_cognitive_biases) are disproportionately weighted.


'Reliving experiences' is part of the Exposure Therapy that is used to treat PTSD. I remember watching on NOVA or some science program how virtual reality was being used to treat veterans suffering from PTSD. By reliving a dangerous situation in the VR world, they are able to 'recompile' the program in a safer context than it actually happened.

EDIT: Found the link http://www.nami.org/Content/NavigationMenu/Top_Story/Using_V...


I've read other takes on this interesting subject as well. They too, convinced me reading is worthwhile despite our memory limitations.

http://www.nytimes.com/2010/09/19/books/review/Collins-t.htm...

http://www.newyorker.com/books/page-turner/the-curse-of-read...


I recently met some people who swore that photoreading was legit and that because of it they'd read about 10 thick books per week. Not knowing what it was, I looked it up and immediately didn't believe the premise. There's something to be said for speed reading (at a rate slightly faster than normal) but photoreading just seems ridiculous. Not only can no real content/meaning be gained from doing it, but no mental models can be formed. I'd love to be proven wrong, though...


I once read about a small scale study on some super-duper speed reading method. By small scale I think there were like two participants. One was an expert in the method. The other was the guy doing the study. There were 3 main findings. 1. Yes, you will "read" much faster. 2. If you take a standard reading comprehension test on what you've read, you will score much lower. 3. If you don't take such a test, you will be under the mistaken impression that you absorbed more than you did from the book.

http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/2000001...


>Reading and experience train your model of the world.

This sounds convincing but then an argument against reading fiction follows since fiction trains your model of the world with fake data.


Hmm this is a very interesting point, but we would have to dive a bit deeper and acknowledge there are many different types of fiction.

I would argue that certain types of fiction may actually be beneficial, such as To Kill a Mockingbird or Catcher in the Rye (vs. something like Fifty Shades of Grey). Those sorts of fictional settings allow the writer to present experiences and specific feelings/situations to readers. Even though, say, Lord of the Rings isn't exactly relevant to our present-day world, the morals and spiritual emotions involved reflect humanity (what makes a human?)

I guess your comment has some parallels in art: why should we draw abstract things when we can recreate what we see? Isn't recreating what we see more important/better art than something that would never end up existing?

While it's not a perfect analogy, I believe reading and looking at the creative arts ultimately benefits your model of the world through bettering your model of humanity.


Fiction could be viewed as "fake data" or "possible data".

The scientific method depends on the generation of testable hypotheses.

How do we generate hypotheses?


Learning is not limited to events; it also includes approaches, backgrounds, attitudes. Those are all reusable. But yeah, if you read a "fact" one too many times you will take it for granted despite its fictitious provenance. There is danger in that.


Real data can train your model of the world falsely:

http://en.wikipedia.org/wiki/Lies,_damned_lies,_and_statisti...


I am a firm believer in actively curating a mental model. At work I could be accused of over-communicating and I demand the same from those around me. I do so because my goal is to help refine the mental model of myself and all my coworkers so that everyone has a better intuitive understanding of the system/process/organization we're working with.

By this process I've been able to internalize much of a massively complex system (SAP) in a relatively short period of time.


> Eventually we may be able not just to play back experiences but also to index and even edit them.

Like most things this may have unintended consequences. I think our ability to forget is an important "feature" of cognition. What would happen if we were unable to forget even petty squabbles between friends, loved ones, supposed enemies? How far could this escalate? Our ability to forget and put things behind us may be the reason we're still around.


I'm not an expert, but I'm pretty sure it's quite important. Autistic people, I believe, have lost some of this filtering (forgetting); non-autistic people like, probably, you and I capture our environment photographically but ditch the things our brain thinks are unimportant.


I'm just really happy to hear that I'm not alone in my anxieties about needing to re-read books and my inability to remember everything written in a book!


> e.g. when Stephen Fry succeeded in remembering the childhood trauma that prevented him from singing

That sounds fascinating, does anyone have a reference for this?


Knowledge is an interesting subject. When I read, I don't remember the exact order of words. Especially in the age of Google, we have a choice of what to burden our memory with and what to leave to Google. Are names and dates important? What's important is to have models of how things work in your mind. It is through the process of reading that we develop and refine these models.


Approximate dates are important to find connections with other contemporary events, and more broadly to put things into the context of the prevailing culture at the time.


I don't like the use of the word "compiled". It's more like a program that modifies its source at runtime. This reminds me of how JavaScript used to be simply interpreted, but now with V8 (and friends) the hottest paths are optimized at runtime, so the result is more performant than any static compilation because you have more information than you do statically.


You're describing JIT.


I experience this phenomenon powerfully reading scientific articles. You read a bunch of articles when you are trying to wrap your head around some topic, but if you then go back and read them again after you've worked on it for a while, you'll find all kinds of things that now are very meaningful while they previously didn't seem important to you.


The idea of compilation without the source is a great way to put it. A beginner's mind is a good way to look at things and learn. With a beginner's mindset, when needed, you can temporarily recompile portions to take down the filters and walls in your current binaries; with that mindset you can update and refresh from the basics.


I also think that it's equally important to reread books that gave you great insight. A few years later, with more experience and knowledge, you derive more from them.

On first read you have a few key points and years later sometimes those end up knitted together forming a greater insight that eluded you previously.


I think this is the phenomenon that forms the basis for Gladwell's BLINK.

It took me a while to learn that almost everything I have heard or seen has already been stored. The problem of memory is in retrieval.

This also applies to creative work. When you have seen quite a lot of things, it ends up influencing stuff you could swear was original.


“I cannot remember the books I’ve read any more than the meals I have eaten; even so, they have made me.”

― Ralph Waldo Emerson


"Reading and experience train your model of the world. And even if you forget the experience or what you read, its effect on your model of the world persists. Your mind is like a compiled program you've lost the source of. It works, but you don't know why." -- love this


I think one point that can be made from this essay is that "content" discovery is going to be a hot problem to crack, if it isn't already. A good discovery mechanism leads to more user engagement, which ultimately results in products with lasting impact.


Fascinating. A corollary would be: be careful what you read and be critical of what you read. There is danger of manipulation, particularly by others. The plus is that you could manipulate yourself, change personal perceptions and mode of thinking.


This helps alleviate the fear of potentially "wasting time" if a startup or project we're working on doesn't take off. Either way, the things learned while undertaking the endeavor will affect our mindset, usually for the better.


Regarding the retention from books, it's often the case that I could fill a dozen pages with correct answers to questions that I only know from having read a particular book, even if I could only fill one page with unprompted recollections.


There's a quote which I (ironically) can't remember about how a mind is made up of the books it has read in much the same way as a lion is made up of the animals it consumes. Or to put it another way, "you are what you eat."


Not for nothing, the laying of new memories in the brain is exactly context sensitive. The hippocampus is actively weighting new information based on what we already know. And emotion hacks the hippocampal patterns that much more.


It takes a deeper understanding of the mind and how it works to grasps these things.

When you learn about the four aspects of the mind and how each plays a role in your outlook, then you have the key to this "mystery".


" The same book would get compiled differently at different points in your life. Which means it is very much worth reading important books multiple times." -- I loved this statement most.


See also the psychological literature on source amnesia: https://en.wikipedia.org/wiki/Source_amnesia


The funny thing is I had just posted on here to ask how others improve their reading retention, since I've been feeling bad about forgetting things after I read them.


Anybody know how to subscribe to his feed via RSS? I tried http://paulgraham.com/rss.html but it doesn't seem to include full articles.


On some level we don't really forget - when you reread a book it comes back to you like a stream, all the insights and connections appearing stronger and clearer than before.


"Your mind is like a compiled program you've lost the source of. It works, but you don't know why." Nice.


"Your mind is like a compiled program you've lost the source of. It works, but you don't know why."


TL;DR: a healthy brain accumulates wisdom, but won't bother archiving the sources of that wisdom.


> "a perfect formulation of a problem is already half its solution."

Is that simply rubber ducking?


So apparently the metaphor that our brains are computers is still used? And that they "compile" experiences (though we don't understand how).

What if our brains are not easily shaped? And maybe our brains are good at forgetting experiences?


It's deeply heartwarming to see another “OG” PG essay enter the canon. (Obscure reference to historical French prose in the introductory sentence? Check!)


What is meant by "OG" in this context?



Original gangsta.

