You don't understand something until you think it's obvious. (mebassett.blogspot.com)
139 points by mebassett on June 20, 2011 | 71 comments



And now that it's obvious to you, it's suddenly really hard to explain to someone else.

My friend was TAing a freshman calculus class in college while working on his PhD. He clearly remembers that his (and everyone's) biggest problem with calculus was limits. Hands-down, limits were the most frustrating and un-intuitive part of the class. So he sat down to try to figure out a good, intuitive explanation of limits that would help his students avoid the frustration that he had.

He immediately ran into the problem that they weren't hard to understand. They were clear and intuitive. He could not for the life of him remember why he thought they were difficult when he was first learning them. He asked me for help, and I had the same response. I remember clearly banging my head against them and screaming in rage and frustration at those goddamn limits. But I look at them now, and I can't figure out how I could fail to understand something so obvious.

And thus, the cycle of pain continues, with limits remaining the eternal bane of freshman calculus students.


...And that is the difference between a (good) teacher and a researcher who is conscripted into teaching.

A (good) teacher looks at a problem from as many aspects as possible and works with students to understand them. A (good) teacher collects multiple ways to understand a concept, so if one doesn't sink in, the next may. It's not an inborn skill; it takes practice.

The best way I've ever seen limits explained, incidentally, is to take a student 10 feet away from a wall. Ask the student to go halfway to the wall. Then again. Then again. Eventually, the student is taking centimeter steps. Ask the student if he's at the wall. The student says yes or no, but usually there is an "aha!" moment, the idea being that you can figure out where "there" is without ever technically being "there."
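
The arithmetic behind the demo is a two-line loop (a sketch using the same 10-foot setup; the numbers are just for illustration):

    # Halving the remaining distance to the wall, starting 10 feet away.
    remaining = 10.0
    traveled = 0.0
    for step in range(20):
        traveled += remaining / 2   # walk half of what's left
        remaining /= 2              # half of it still remains
    print(traveled, remaining)      # ~9.99999 and ~0.00001: the "limit" is the wall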

Understanding the basics of the derivative was the first place I banged my head, but that was because I didn't adequately understand the secant. Once I understood that, the derivative fell into place. (In fact, calculus was where I realized how much I didn't understand trigonometry.)


The way I see it is that a teacher may help a pupil to understand something by identifying the pupil's misconceptions. These are unique to the pupil and the teaching process includes to-and-fro questioning aimed at identifying and blasting them away.

(Teaching a group of N pupils is hit and miss, since a teacher can't go in N different directions at once. Instead he explains something in different ways until the common misconceptions have been covered. He doesn't know what these are until he has taught many classes on the topic.)


Indeed, you can explain it to 9-year-olds:

http://carlos.bueno.org/2011/01/tortoise.html


I think a much better way for an HS student to understand limits is coin flips. Say heads = 1, tails = 0, and look at the fraction of heads.

With one flip you can have 1 or 0. With three flips it's 1, 2/3, 1/3, 0. With five it's 1, 4/5, 3/5, 2/5, 1/5, 0. As long as you use an odd number of flips you will never see 1/2 as an option. However, as you divide by ever larger odd numbers of flips, you will see numbers really, really close to 1/2 without ever actually getting there.
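
A one-line way to see the gap (a sketch of the arithmetic): with 2n+1 flips, the closest you can get to an even split is n heads, and

    \frac{n}{2n+1} = \frac{1}{2} - \frac{1}{2(2n+1)} \longrightarrow \frac{1}{2},

so the distance to 1/2 shrinks like 1/(2(2n+1)) but is never zero.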


That's not a simple limit, that's the Law of Large Numbers from probability theory - a more complex beast altogether.


Will the student realize that standing against the wall continually would yield the same limit?


That's what makes guys like Sal Khan special. And why his videos are so valuable.

Sal's "Introduction to Limits": http://www.khanacademy.org/video/introduction-to-limits


"If you can't explain it simply, you don't understand it well enough." -- Albert Einstein


And the tragedy is that nobody else understands the simple explanation once you understand enough to give it.


Yes, true insight requires a level of understanding usually obtained through a focused effort to understand all the components that make up the idea, several levels deep.

Explaining the end result -- the epiphany -- to someone else is often difficult because most will have gaps in their understanding that prevent them from seeing it the way you do, and therefore they will have limited ability to verify and understand the significance of what you are saying.

See http://jamesthornton.com/blog/how-to-get-to-genius


Well that sounds like the problem of "I learnt it wrong the first time".

Looking back at issues I've had, learning it right the first time has been very important. If I've skimmed over some concept and learned just enough to apply it in simple cases, I struggle when it comes to applying it more generally. Going back and relearning it correctly takes time, and generally has to be done more than once.

Teachers should be careful to explain things simply, but ensure that the understanding really is correct. This can't really be assessed by exercise sheets, but by discussion.

(Well, it can be assessed by exercise sheets, by asking students to apply the concept to a different problem, but in my experience only a few people make the jump without help. And when they get the help, that short-circuits the learning that should have happened to allow them to do it themselves.)


The problem with limits is simple. We force people to learn multiple concepts in a poor order.

1. We introduce limits. Which are blindingly artificial, and so it is hard for us to form a concept about why people want this.

2. We introduce the definition of the derivative. Which instantly results in a practical reason for limits. But typically we make the mistake of introducing the derivative as a function, which means considering the limit of a lot of things at a lot of points.

There really needs to be an intermediate step, which is to introduce the definition of the tangent line. And how to calculate it. Then work up how to calculate tangent lines of various combinations of functions at a single point. And after the person is well and truly comfortable with that, then introduce the derivative.
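
Concretely, for instance: the tangent to y = x^2 at x = 1 falls out of the secant slopes at that single point,

    \frac{x^2 - 1}{x - 1} = x + 1 \longrightarrow 2 \quad (x \to 1),

giving the line y = 2x - 1, with one limit at one point and no derivative-as-a-function in sight.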


I'm pretty sure that your last paragraph is how I was taught it. It does have the problem that you spend an awful lot of time wondering why tangent lines are so important before you find out. (Prior to that I recall wondering why we spent so f'ing much time on "slope". Sometimes I think the standard curriculum would actually be better off if it just admitted that some stuff is only really going to be clear in another year, rather than make up bullshit reasons for caring about the intermediate concepts once it runs out of real reasons.) I'm not sure there exists a serialization of those concepts that doesn't have at least one "bear with me, we'll get to why this is important in about six weeks..."


I've thought before about Knuth's proposal (see http://micromath.wordpress.com/2008/04/14/donald-knuth-calcu... for his description) that Calculus be introduced using Big-O, little-o notation. I once drew up a curriculum based on it, and it seems to avoid the issue that you describe.

In particular, the idea of approximation is inherently useful. All approximations take the form something = approximation + error, so having a precise language for the error term is valuable. Once you have that language, the tangent line is just a particularly useful example. Only after working with tangent lines for a while would you get to the definition of the derivative.

The nicest thing about the approach is that all of the derivative rules are easy to remember for tangent lines. Product rule? Multiply the tangent line approximations, throw away the high order term. The chain rule? Plug one function into the other. And so on.
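
For instance, here is the product rule in that style (a sketch, writing each function as its tangent line at a plus a little-o error term):

    f(x) = f(a) + f'(a)(x - a) + o(x - a)
    g(x) = g(a) + g'(a)(x - a) + o(x - a)
    f(x)g(x) = f(a)g(a) + [f'(a)g(a) + f(a)g'(a)](x - a) + o(x - a)

Multiply the first two lines and everything beyond the linear term is absorbed into o(x - a); the surviving coefficient is exactly the product rule.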

The only thing that I found did not flow naturally was L'Hopital's Rule. However that rule is a source of confusion for students who don't understand the importance of the precondition that the limits be of the form 0/0 or infinity/infinity. And furthermore all standard cases where it is useful are obvious upon inspection for anyone who knows Big-O/little-o notation. (Don't believe me? Crack open a textbook and look at the problems that you're supposed to use the rule for. They are all obvious.)
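
To illustrate with the most classic case: since sin x = x + o(x) as x → 0,

    \frac{\sin x}{x} = \frac{x + o(x)}{x} = 1 + o(1) \longrightarrow 1,

and there is nothing left for L'Hopital to do.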


Every time someone brings up the order in which subjects are taught, I wish there were a directed acyclic graph printed on huge posters in every math classroom, representing the dependencies between different branches of math, starting with counting at the bottom, through arithmetic, algebra, etc., and branching up into a beautiful tree shape. If you want to know why you should learn a particular subject, you just point to its node on the graph and follow the arrows. If you want to know the prerequisites for a subject, follow the arrows in reverse until you get to a node you know.

I lack the resources to make or find such a graph myself, so please let me know if something like this exists, or if you intend to create one. My parents are both teachers, and I'm sure they'd benefit from it.
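
Until the poster exists, a toy version is easy to sketch in code (topic names and edges here are illustrative placeholders, not a real curriculum):

    # Toy dependency graph: each topic maps to its direct prerequisites.
    PREREQS = {
        "counting": [],
        "arithmetic": ["counting"],
        "algebra": ["arithmetic"],
        "trigonometry": ["algebra"],
        "limits": ["algebra"],
        "derivatives": ["limits", "trigonometry"],
    }

    def prerequisites(topic, known=()):
        """Follow the arrows in reverse until we reach topics already known."""
        ordered = []
        def visit(t):
            for p in PREREQS.get(t, []):
                if p not in known and p not in ordered:
                    visit(p)
                    ordered.append(p)
        visit(topic)
        return ordered

    print(prerequisites("derivatives"))
    # ['counting', 'arithmetic', 'algebra', 'limits', 'trigonometry']
    print(prerequisites("derivatives", known=("algebra",)))
    # ['limits', 'trigonometry']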



1. We introduce limits. Which are blindingly artificial, and so it is hard for us to form a concept about why people want this.

I don't see limits as artificial at all. For example, Zeno's paradox[1] is resolved thanks to the limit of a sequence. And the concept of the infinitesimal, which was something I had thought about before I learnt about limits, is another rather intuitive thing that is explained by limits.

Derivatives are only the most common use of limits; they are definitely not the only reason limits are useful.

[1]http://en.wikipedia.org/wiki/Zeno%27s_paradoxes#Achilles_and...


The limit concept was introduced by Cauchy but lacked a rigorous definition until Weierstrass came along. In the absence of such a definition, Cauchy committed some infamous errors involving nested limits, e.g. he ascribed properties to continuous functions that actually require the stronger assumption of uniform continuity. Take into account that Cauchy's treatment of calculus was by far the most rigorous for its time, so he was hardly careless in these matters. If everything was as straightforward and obvious as you imply, neither Cauchy's nor Weierstrass's work on the foundations of calculus would have served any meaningful purpose.
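
A standard example of the gap (my illustration): f(x) = 1/x is continuous on (0, 1) but not uniformly continuous there, since the delta you need for a given epsilon shrinks without bound as x approaches 0; this is exactly the kind of distinction that nested-limit arguments silently glossed over.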

The more you learn, the more you come to realize that the nature of the real numbers is deeply mysterious and not to be treated lightly. That's not to say you cannot convey many useful intuitions to students without the somewhat abstruse formal machinery.


That's not to say you cannot convey many useful intuitions to students without the somewhat abstruse formal machinery.

That's more or less what I was trying to say. There are some deeply intuitive concepts for which limits are, at the very least, a very good approximation. It may occur in some cases that the actual behavior and the intuition one has about it don't match, but that doesn't mean that the model is arbitrary or artificial.


I'm sorry, but every part of your response betrays a severe lack of understanding of the history.

People understood the solution to Zeno's paradox centuries before we had a rigorous definition of limit. And limiting processes were understood both before, and after, without necessarily referring to limits. (In fact in computer science people prefer the more flexible Big-O/little-o notation for that purpose. Knuth has in fact suggested replacing limits in introductory Calculus with that notation and I'd be quite interested to see how well that would work.)

The idea of infinitesimals was thought up well before limits. In fact the idea of a limit came out of Cauchy's work. Cauchy tried to define an infinitesimal as a function that went to 0. (His idea of an infinitesimal became what we now call a Cauchy sequence.) When limits came they did not explain infinitesimals - they replaced them. It was not until Abraham Robinson came up with non-standard analysis in the 1950s that anyone produced a sound explanation of infinitesimals and properly explained the connection with limits.

And finally, I was trying to explain what things are like from the perspective of the student.


When I was in high school, failing calculus for whatever reason - bad teacher, or I'm too slow, whatever - but refusing to accept that I could not grasp the subject, I stumbled upon Keisler's work on infinitesimal calculus. While I did not fully grasp the intricacies of the transfer principle, working with the axioms was easy enough, and I was able to learn calculus at a level such that I later found analysis easier going than most. I also learned about the hyperreals, the concept of nonstandard extensions, and that the reals are a crazy concept, all before leaving high school. Some things are so rich you can't help but expand your horizons.
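
The flavor of it, roughly (a sketch in the spirit of Keisler, not his exact notation): to differentiate x^2, let dx be a nonzero infinitesimal, compute

    \frac{(x + dx)^2 - x^2}{dx} = 2x + dx,

and take the standard part: st(2x + dx) = 2x. No limits anywhere; the infinitesimal leftover is simply discarded at the end.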

I think the brain must do some sort of cross-training - learn Python and improve your C++ - so that when I came back to limits they seemed intuitive and not as difficult as I remembered.


Your last paragraph is the approach Spivak takes in his wonderful textbook 'Calculus'. One of the best maths textbooks in existence.


It took me about a month to really get limits. Once I did, I started complaining about how horribly unintuitive the definition the class had used was. So a friend asked me how I would define it. After a few tries I had the perfect definition...and it turned out it was word-for-word exactly the same as the one in the book.


The curse of knowledge strikes again. This is one of the reasons teaching is a dedicated profession. Teachers practice how to teach a subject more than they practice the subject itself.

I also think it relates to why teachers cannot reach a great depth in any subject they teach. Their students "hold" them back.


I disagree. In my opinion, if you can't explain something to someone else, you don't really understand it.

But that doesn't make the article wrong. While you don't understand something until it seems obvious to you, it does not imply that you understand everything that seems obvious to you.


To teach effectively, you have to get inside other people's heads. That is a skill most people don't ever spend any real time trying to develop. So it's no wonder that they are bad at it. It's not about the state of an individual's understanding, but whether they've bothered to cultivate an entirely different, unrelated skill.


It's a bit of a story, but it's got a point that relates to this.

Back in vanilla WoW, when Naxxramas was new and wonderful, my guild hit Thaddius. He was the big Frankenstein looking guy with lightning shooting everywhere. His big gimmick was polarity. If two people were near each other and had opposite polarity, we'd shock one another to death pretty quickly. However, if you had the same polarity, you'd increase the amount of damage you could do. This meant you wanted to stand next to people with the same polarity while keeping away from people with different polarity. Of course, this changed every 30 seconds, and ~10 people had their polarities changed. If you didn't move fast enough, you'd shock people. And with 40 people in the raid, all it took was one person to be slow on their feet and they'd shock the entire raid. Of course, this is going on while everyone is trying to maximize DPS and keep the raid healed up.

Anyways, the strategy was simple: positive on the left, and negative on the right. The debuff icons we had were blue for positive and red for negative, and the original idea was people would equate red with right. Association. This worked well when people had to get into their initial positions. However, once we were engaged and everyone got to their initial spot, polarity changes were troublesome. People would move as soon as they knew they had to move. However, they had to think about it first. The thought process worked like this.

1. A polarity change happened.

2. Did my polarity change?

3. Yes. What color?

4. Red. What side am I on?

5. Left. I need to move to the right.

6. Move to the right.

That's a lot of work. It added a second of delay, and considering you only had 3-4 seconds to react, coupled with a second of lag, you'd burn people.

The problem wasn't that people wouldn't react. The problem was the thought process: how to get people to react quickly.

I finally caught on to the solution. The problem was thinking too much. What I told people to do was to initially set up on the left or right like normal. However, after that, ignore sides. Instead, follow one simple rule:

Change sides when your polarity changes.

Change when the icon changes.

Simple enough. You don't need to worry if the polarity doesn't change. You don't need to worry about going to the left or the right. You simply move to the opposite side.
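
In pseudocode terms, the before and after look something like this (a hypothetical sketch; the names are mine):

    # Old rule: several lookups per polarity shift.
    def old_rule(polarity_changed, my_color, my_side):
        if not polarity_changed:
            return "stay put"
        target = "left" if my_color == "blue" else "right"  # blue = positive = left
        return "stay put" if my_side == target else "move " + target

    # New rule: one bit of input, one decision.
    def new_rule(polarity_changed):
        return "swap sides" if polarity_changed else "stay put"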

The difference was dramatic. The very next attempt, we got him so low we thought we were going to kill him, but someone lagged out and caused a wipe. Regardless, the effect was clear.

Understanding not only the problem, but how to approach that problem and make it easy to learn, is important. It's hard to understand where the problem lies for people. I was proud of this. Oh, it was just a game, but it was a clear example of being able to understand a problem in how other people thought, and being able to come up with a solution.


John von Neumann once told a confused young scientist, "Young man, in mathematics you don't understand things. You just get used to them."

I don't think that's quite true, but developing familiarity with a difficult concept is tantamount to coming to understand it, in my experience.

For me, the process seems to go like this:

1) assemble the disparate pieces of a concept, and try to wrap my head around the novel ones;

2) work out how Piece A relates to B, and B to C, and ... ;

3) forget what the hell Piece X is;

4) repeat steps 1–3 several times;

5) convince myself that all the pieces work together as they should.

And only later, after I've worked with this collection of pieces several times, do I stop seeing the pieces and start seeing something new, the emergent phenomenon of the new concept. And it's not until I've forgotten the details of how Piece X fits into Piece Y, and maybe why we need Piece Z in the first place, that I feel like I really grok this new concept.


I think this way as well. I was told by a credible source that I am a slow processor of information - also that I would have a harder time learning on the fly but would gain a deeper understanding from making more cognitive links to each idea, which would allow for tackling harder problems and long projects. It has turned out to be spot on, and if I had to guess, I would say about a quarter of the engineers I know seem to think this way. In the general population it seems much lower.


I recall a section in the Halliday and Resnick freshman physics text where the authors advise you to 'do exercises until you convince yourself that the solution method is correct'.


Brilliant! Here's my similar, now animated take on it:

Obvious to you. Amazing to others.

http://www.youtube.com/watch?v=-GCm-u_vlaQ


I don't necessarily disagree, but it's worth pointing out that this might not always be the case:

"Every mathematician worthy of the name has experienced the state of lucid exaltation in which one thought succeeds another as if miraculously. This feeling may last for hours at a time, even for days. Once you have experienced it, you are eager to repeat it but unable to do it at will, unless perhaps by dogged work." -André Weil


I think this has to do with the structure of the brain. When you are learning something new, the brain is creating new routes and making new connections. It can even create new neurons (neurogenesis), part of the broader rewiring known as neuroplasticity.

My guess is that when you finally understand the subject, your brain has completed the route for that particular problem. How to solve it is now clear to the brain, since a specific route exists, and with that shortcut in place the solution comes easily.

For you, the problem becomes obvious and simple. It's no longer complicated, and it requires less brain processing power to solve. You may think you were stupid not to figure it out from the beginning, but you are not!


It would be lovely if this were true, but I think it's actually a fairly poisonous mindset. What the author's post seems actually to be saying, with which I heartily agree, is “obviousness is only in hindsight”; deep things become trivial only because of the time you've put into understanding them.

On the other hand, I think the things that become obvious once you've understood them tend to be (though are not always) the things that have already been fairly well digested by others, and so are presented to you in a smooth, flowing way to which you just have to accustom yourself.

My experience with (mathematical) research is that understanding has a roughly equal chance of meaning that you find the thing trivial, or that you finally grasp all the (apparently) irreducibly complex difficulties. Indeed, my feeling if anyone tells me that, say, Deligne–Lusztig representations are obvious is that he or she probably hasn't fully understood them (disclaimer: neither have I, not even close, which may mean that I'm just illustrating the author's point).

I don't mean at all by this that you shouldn't go on searching for the 'obvious' simplification—that way lies great insight. (As someone said much more elegantly, if things aren't already obvious in mathematics, we tend to make them obvious by changing the definitions.) What I mean is that you shouldn't drag yourself down by saying “I thought I understood that, but it's hard, not obvious!”


Corollary: if you are angry at something, it's because you don't understand it. Nobody gets angry at obvious things such as water being wet.

Another corollary: understanding brings peace.


Yes, this is something I try to remind myself whenever I find myself getting angry. I find it helps tremendously; I quickly realize that it's something beyond my control or something that I don't completely understand and I find that calming. I think I've gotten better about just skipping over the part where you worry about things that are outside of your control (not that it doesn't still happen).


This is precisely why I get nervous about offering to speak at professional groups / conferences. When I see a CFP or similar, everything cool that I do seems obvious and I don't think it'll be useful.

But then I get into conversations with my peers and realize that some of the stuff I do apparently isn't as obvious to everyone as I'd thought.


I think a better measure of understanding is how well you are able to explain it to a layperson. As Einstein once said, "If you can't explain it to a six year old, you don't understand it yourself."

As a researcher who has also TAed classes, I find it takes a lot of effort not only to teach a particular topic, but to simultaneously present the crux of the idea, the right way to think about it, and its place in the context of other information. Learning to think about everything with these in mind will make you understand things much, much better. In particular, one of the most challenging things is not explaining breakthrough ideas to cutting-edge researchers but explaining breakthrough ideas to complete outsiders and laypeople.


Einstein may have said that but it's just not true. How do you explain the Higgs boson to a six-year-old? If you succeed, it's because you made an analogy so simple that it has no meaning.


Sure you can handwave certain parts of your explanation, but if you don't understand the material yourself then simplifying it accurately is very hard.


Maybe that's because we don't understand it fully?


There are plenty of things we understand fully (e.g. translation lookaside buffers) that are impossible to explain to a six-year-old.


Off the cuff, I would come up with something involving parking valets and the pegboard for keys. It would illustrate the important bits of the mechanism and its purpose. I don't know if you would dismiss that as meaningless analogy, but explaining a TLB (or Traveling Salesman, or functional decomposition, etc.) to kids is not impossible.
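
The analogy even maps onto a toy model (illustrative only; a real TLB lives in hardware and uses smarter replacement policies):

    # Toy TLB: a small cache of virtual-page -> physical-frame translations.
    # The valet's pegboard holds only a few keys; a miss means a slow walk
    # of the full page table (the back office).
    PAGE_TABLE = {vpage: vpage + 0x1000 for vpage in range(4096)}  # stand-in map
    TLB_SIZE = 64
    tlb = {}

    def translate(vpage):
        if vpage in tlb:              # pegboard hit: fast
            return tlb[vpage]
        frame = PAGE_TABLE[vpage]     # miss: fetch from the full table
        if len(tlb) >= TLB_SIZE:
            tlb.pop(next(iter(tlb)))  # evict the oldest entry (FIFO-ish)
        tlb[vpage] = frame
        return frame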


There are things in mathematics that I find difficult to explain even to someone who has finished freshman calculus and linear algebra courses. How should one explain singular homology to someone without a strong background in topology, algebra, and pure math generally? Same thing with the theory of sheaves, for instance. These things are not terribly difficult in themselves (though they're not easy, I admit), but the amount of time required to explain what they're really about is breathtaking.


I don't know anything about those so I can't say. Not everything has a facile real-world analogy, and there may be long chains of dependent concepts to get through first.

Here's the thing, though: if you make an honest effort to explain something like that to a lay audience, you may fail. Or you may not. Or you'll give them a workable, but incomplete and strictly wrong idea. Either way you yourself will end up with a deeper understanding.

A huge part of "real" mathematics is finding isomorphisms. Teaching is more or less finding isomorphisms between new concepts and concepts that the student already has.


This is partially why it can be frustrating to work with someone who isn't as good as you are. Things that come easily to you don't for others. You sometimes have to remind yourself that things you consider obvious are only obvious to you.


This reminds me of a story from Feynman where he was with a math friend and they were trying to multi-task. Yet, they ended up going about the two things (counting for a minute while reading or speaking) in completely different ways. (Different enough that one could only count and talk while the other could only count and read.)

http://www.youtube.com/watch?v=Cj4y0EUlU-Y&feature=playe...


Richard Feynman said you don't really understand something unless you can explain it to freshmen in a single lecture. Same concept, really.

The interesting consideration, to me, is how much we can get done with hardly any understanding at all. That's human nature. We do things, then we wonder how, and then we eventually come to understand what the hell we just did. It seems bass-ackwards, but that's the way it goes most of the time.


Thought: my linear algebra teacher this past semester told me that some of the greatest mistakes in mathematics came from people assuming something was obvious without a proof. After building a huge, intricate structure around the assumption, they would watch everything fall apart as it turned out the obvious fact wasn't true. Now, whenever I hear somebody say "it's obvious," I get skeptical.


You are just describing what motivated Bertrand Russell to write Principia Mathematica.


There is a great line from Jonathan Ive in the documentary Objectified where he says: "A lot of what we seem to be doing is getting design out of the way. And I think when forms develop with that sort of reason, and they’re not just arbitrary shapes, it feels almost inevitable. It feels almost undesigned. It feels almost like, 'well, of course it’s that way. Why would it be any other way?'"


As Galileo said, "All truths are easy to understand once they are discovered; the point is to discover them." Insights are often so simple, so obviously true once you see them. But we are often blinded by our fixations, our hubris, our limited frames of reference -- our narrow perspectives -- which prevent us from seeing.


The intersection of "things I thought were obvious at one point in time" and "things I currently think are wrong" is a rather large set. It's hard for me to claim I "understood" those things I now think I was wrong about.


I was going to point out that, ironically, many people who think something is "just obvious" completely fail to understand it at all.

So yes it cuts both ways.


Not sure about this. I understand the proofs that there is no largest prime number, and that the integers can't be put into a one-to-one correspondence with the reals, but I wouldn't say either is "obvious".


Funny you picked those examples, because I recall them both seeming quite obvious to me when I learned them. And I'm no great mathematician.

It's, um, obviously quite hard to explain why you find something obvious, but I think in the first case it's because you can just think about rescaling the number line (using a bigger unit) or shifting the origin, which to me makes it clear that bigness is completely arbitrary and implies that there can't possibly be any biggest number in this sense; in the second case, there clearly is no "next" number in the reals as there is in the naturals, and that insight gets right at the heart of the concept of countability.
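
(For reference, the standard proof of the first really is short: given any finite list of primes p_1, ..., p_n, the number N = p_1 p_2 \cdots p_n + 1 leaves remainder 1 when divided by each p_i, so every prime factor of N is missing from the list. Hence no finite list contains all the primes.)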


in the second case, there clearly is no "next" number in the reals as there is in the naturals, and that insight gets right at the heart of the concept of countability.

I think you missed the point. There is no "next number" concept in the rationals either, and yet they're countable. There is a "next number" concept in the first uncountable ordinal, and, as the name suggests, it is uncountable.
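
(The rationals are countable because you can still enumerate them, e.g. list every p/q in order of increasing |p| + q; density under the usual ordering is no obstacle to counting.)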


You're right. The "next number" test (under the natural ordering) makes the concept of density obvious, not countability.

So I think I've given credence to the notion that those who claim to find something obvious might not fully understand it. Or at least not at four in the morning.

(Oh, and a point that Stephen Abbott made in his wonderful intro analysis book now springs to mind: proofs are useful also as a check on our intuition.)


The second proof you mention is the most controversial proof in the history of mathematics. Some mathematicians refuse to believe it.


I am a mathematics postdoc at Stanford, constantly going to conferences and meeting other mathematicians, and I have not met anyone who refuses to believe this. Indeed, I have met no one who refuses to believe any proof that is widely accepted by the mathematical community.

You could (presumably) construct axiom systems under which this is not true, and I believe that some logicians do this sort of thing, but this is rather far removed from mainstream research mathematics.


Diagonalization is a poor example -- though I don't think Cantor's logic was widely accepted until the early 20th century -- but I think there are examples of proofs where mathematicians will accept the logic, but argue with it none the less:

The Banach-Tarski paradox comes to mind first. I don't think anyone argues that the proof is wrong, but you'll easily find people to argue that that very fact means that the full axiom of choice should be held in deep suspicion.

The 4-color map theorem is more interesting in this thread, though; I think it's the best-known example of a proof that offers little to no insight into why the theorem is true. I don't think any solution that requires breaking a problem into 1,936 special cases, and then mechanically checking each one of those cases, will ever lead to an understanding that makes the theorem "obvious".


I think once you understand the proof of the 5-color theorem and accept that the proof of the 4-color theorem is more or less the same, just more hardcore, that is about as much insight as you can get.

Looking for insight is what many people do in mathematics, but believing that every true fact is true for a reason is naive, just as naive as believing that every problem has to have a solution. So far, most of the time we succeed in explaining why true things are true, but it may be the case that some facts are true merely by combinatorics, and not for any human-imaginable reason.

Sure, I would like to believe that there is a crucial observation we haven't made yet that makes the 4-color theorem easy, just as there is for the 6-color theorem. Nevertheless, I accept that there may be no such thing, and that the 4-color theorem holds just because the constraints force it to.


I didn't mean to defend the idea that every fact is true for a reason.

I remember reading something of Chaitin's a couple of years ago that drove home the point that most mathematical facts are, in some fairly well-defined sense, true for no reason at all.

(Alas, I don't remember the argument well enough to re-cap it here.)


The axiom of choice also lets you prove the existence of a strategy with near-perfect predictive accuracy, given a function and the set of all its past values. The reals themselves admit many far-out properties, such as virtually all reals being "inaccessible", or the existence of a "know-it-all" number.

No, the problem that those who dislike the axiom of choice have is not with the axiom itself but more with the law of the excluded middle. Constructivism has no room for such imaginings, hence those who follow it are not happy with existential proofs. This more practical viewpoint will prove more profound, IMHO, just as Bayesianism is winning the day. Too many things are connected: types and terms in programs, intuitionistic logic, parts of physics, topoi, CCCs.


You could apply the Löwenheim-Skolem theorem to ZFC to obtain a countable model of set theory, but the object representing the set of real numbers in this model would still be "uncountable" inside the model, although it is clearly countable for us outside of it.


I guess the point can be summarized like this:

- When someone says that something is obvious, we cannot claim that he/she actually understands it well.

- But, if someone understands something well, it will seem obvious to him/her.


Yet we might not understand the things that seem obvious. Gravity, light, etc. can be "obvious" to most people, but not really understood.


You don't understand something until you can explain it to your grandmother.


In that case I don't even understand English... ;)


This is why so many people think patents are unfair - because they are so obvious. :)




