Kill Math (worrydream.com)
199 points by noonespecial on Feb 25, 2012 | 85 comments



Something that I think is important in math is the practical and theoretical going hand-in-hand. I'm still an undergrad student, but I have really enjoyed my math education so far. I have always appreciated both the beauty of the abstraction of math, as well as the beauty of applying that abstraction to solve very real problems.

That being said, I think something might be lost when you take away the abstraction of math. I think a lot of math -and math abstraction especially- is difficult because of the way variables are understood. From the beginning of Algebra 1, we learn that we can have an unknown, say 'x', and we can solve for it trivially by manipulating an equation to set 'x' equal to some value. But people always seem to get caught up in the naming idea. I remember in high school math classes, many of my peers would get so hung up as soon as the 'x' was turned into a 'y'. I felt like it was hopeless trying to explain that 'x' and 'y' are just names for something.

This same problem percolates into every bit of math. After enough time, functions in math become first-class citizens as well. That throws even more people. The exact same problem is seen in the programming world when people move from thinking of functions as subroutines to treating them as first-class citizens. And let's not even mention Haskell or other functional languages, with currying and the like.

I guess that's the problem that I see in math; the idea of an abstraction is difficult. However, I'm not totally convinced that making complex math more concrete is the best way to address that. At some point, having linked numbers like in an interface builder loses its usefulness. So much of math is proofs. How can you do proofs when you are constantly instantiating concepts, rather than dealing with an abstraction? At some point, "there exists" needs to become "for all," which I think might be difficult in this type of mathematical environment.

Again, concrete and abstraction really go best hand-in-hand, in my opinion.


This article isn't necessarily about making ALL math less abstract / challenging. That's arguably impossible. It's about making math more accessible to the masses.

Most college-educated Americans have never learned ANYTHING about differential equations, linear algebra, or discrete math... That is a serious shame, and you could argue it impacts our national productivity/potential. Most people simply will not try to learn higher-level math out of fear of failure, challenge, or whatever.

Making SOME higher-level math more intuitive is a great objective. I would much rather live in a world where more Americans ultimately understand a higher level of math (regardless of whether the path to get there was a little less challenging) than leave math education in its current state.

Analogy: How many average Americans tried to use computers with just command prompts? I would argue that it was the creation of an intuitive interface ("physical" folders where you store documents) that really started mass adoption of computers throughout the world. Is this less challenging/abstract than command prompts? Yes. Is the world vastly more productive because of it? Absolutely.


I'm not sure how I feel about this. Sometimes complicated things are just complicated. I'm not sure making pretty pictures to make things appear more simple than they are is necessarily a good thing. Abstract thinking is difficult. Breaking the problems down such that it no longer requires abstract thought somewhat defeats the purpose.


Don't delude yourself with this Protestant hard-work-is-necessary mentality. Better representations are possible and they can save us a lot of work. It's easier to perform multiplication using Hindu-Arabic numerals than it is with Roman numerals. It's easier to program in Python than in assembly. Difficulty is a property of the representation, not of some underlying 'abstract thing' that is being represented.


That's not my meaning.

Higher-level math requires abstract thinking. Abstract thinking is hard. If you can learn to think abstractly, then it will allow (or help) you to solve a large class of abstract problems. If math is broken down such that it does not require abstract thinking, and a person never learns to think abstractly, then they will not be able to solve that large class of abstract problems.

Or so my likely incorrect train of thought goes.


Or maybe interactive visualizations can help people learn to think abstractly, by showing how one level of abstraction relates to another?

http://worrydream.com/#!/LadderOfAbstraction


Correction: Difficulty is not necessarily a property of a problem, but can also be caused by a poor choice of representation.

I agree that modern technology may help provide better representations, but in the case of most abstract mathematics, I am not convinced it currently does, or even that it can. There is some truth in the statement "a picture lies more than a thousand words".


> Difficulty is a property of the representation

Not necessarily. There are such things as inherent complexity and accidental complexity. forrestthewoods's point was that some things are inherently complex, which I agree with. Accidental complexity happens when we use a poor representation - such as Roman numerals.


Reminds me of the thesis behind the Learn Python/C/etc. The Hard Way series. By abstracting away the abstraction and complexity, we may be turning out people who are very comfortable with the specific task but, due to a lack of familiarity with the primitives underlying what they're doing, are incapable of generalising it to even slightly different problems.

There is another set of "abstract symbols" we have a "freakish knack" for manipulating: the alphabet. Literacy was once, too, deemed beyond the reach of commoners.


Two things:

1. Needless complexity
2. Needed complexity

1.: Think how programming in assembler requires you to mentally keep track of registers, and how higher-level programming languages did away with that by introducing variable names. The resulting program is still mostly the same thing underneath. You can get the same logical results, but the former is much harder.

There is a question here though: why is it easier for most humans to program in C rather than in assembler? Sure, there's more housekeeping in assembler. But I think it's mostly because there's one less layer of representation. a=5; b=10; c=a+b; versus "load 5 into register A1", "load 10 into register A2", "calculate the sum from registers A1 and A2 and store it to register A3".
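(To spell the comparison out, a minimal sketch with the "register" side faked in Python, since the point is only about layers of naming, not any particular machine:)

    # Named variables: one layer of representation.
    a = 5
    b = 10
    c = a + b

    # "Register-style": the same computation, but you track slots by position
    # instead of by name, which is the extra bookkeeping described above.
    registers = [0] * 4
    registers[1] = 5                                # load 5 into register 1
    registers[2] = 10                               # load 10 into register 2
    registers[3] = registers[1] + registers[2]      # sum into register 3
    print(c, registers[3])                          # both print 15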

So here in "less layered math", instead of saying: let alpha represent the angle, a the closer cathete, b the further cathete, and c the hypotenuse; then sin(alpha) = b/c. You instead say: sin is the ratio of the further cathete to the hypotenuse.

Again, a large portion of people can get the former way of explaining sin, perhaps when they see it all at once on the blackboard or in a book and go back and forth... (what was b again?), but the latter way of explaining does away with the whole exercise of variables. (Note that I picked a matter of convention, not something that can be deduced from more basic principles; I think those again can be taught somewhat differently.)

The problem is, many math teachers love variables and think how beautiful that whole symphony of x's and y's is. But for the average school student, math is just one subject among others. They just want the essentials with the least amount of extra crap and layers on top.

Lots of caveats here. I think people need to learn basic algebra, but it makes me sad that so many people can't use it for problems they are trying to solve. I think these would make interesting psychology research subjects. It's a very important field.

2.: Yet you can't go beyond some point in simplification.


Have you seen Bret's talk about his principle? http://vimeo.com/36579366 It's not really about simplification of things.

The principle is basically "to see what's going on", or "to have an immediate connection with what you are doing". Normally, when you design something, you have to think through or simulate in your head what's going on. Bret wants to remove that barrier so you don't have to do that simulation in your head.

There are some great examples in the video where the principle is applied to graphics, game design, electronics, programming, etc.


I am hopeless at math. This is something that I am ashamed of. But there are people who seem to be proud of their ignorance; they're shocked if you haven't read any Shakespeare but happily admit they can't do percentages.

Will those people be helped by the author's approach? I don't know, but I don't think so. These people will see a number and throw their hands up, saying "Oh, maths! I can't do sudoku, how do you expect me to do this!". This attitude is not quite as prevalent in the UK as it used to be, but it's still there. See, for example, the number of esoteric arts programmes on the BBC compared to the number of advanced science programmes. (I'm not aware of any science programming that would be beyond an enthusiastic 14-year-old. I do know of hours of arts programming that is unashamedly elitist. Elitism is fine, but it'd be great to have some balance.)

What is needed is better maths education. (I finished school many years ago; maybe things are different now.) Maths is not blindly mashing numbers and symbols and hoping for the best. Maths includes a large element of careful thinking: exploring the problem, listing the known information, listing what you want to find.

New techniques for using math would help reduce the gender inequality in math results too.

Put the normal "more research needed" caveats around this, but: there's some suggestion that girls use inefficient techniques and "just struggle through"; they manage to get correct results and so they don't get extra help. Boys tend to just stop when it gets too laborious, and thus they get taught new, better techniques.

(http://news.bbc.co.uk/1/hi/education/4587466.stm)


This is an interesting idea, and I think his final analogy summed it up perfectly: symbolic math is like a command line. But I reject his assumption that a command line is a bad interface. And that mirrors my thoughts on this post: for the layperson, a GUI may indeed be better than a command line; for a professional (a programmer) the reverse is true.

A command line lets you combine and recombine different programs and easily do many things its creators never dreamed of. Symbolic manipulation and algebra are just like that!

Let's imagine you know how to differentiate viscerally; you understand what a tangent line looks like and how to plot that and you care not for silly equations like d/dx x^3 = 3x^2. Naturally, you understand basic arithmetic in the same way and not as mechanical manipulations on symbols. You are perfectly well equipped to deal with real problems, and perhaps find it easier than shuffling symbols around. But you would never come up with a way to get the derivative of, say, an algebraic data type[1][2]. (For the curious, this is how you can define a zipper on a type.)

[1]: http://blog.lab49.com/archives/3011
[2]: http://strictlypositive.org/diff.pdf
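(The standard warm-up example from that literature, sketched loosely in my own notation; the linked posts develop it properly:)

    List(a) = 1 + a * List(a)        -- a list is either empty, or a head plus a tail
    List(a) = 1 / (1 - a)            -- solving the equation formally
    d/da List(a) = 1 / (1 - a)^2 = List(a) * List(a)

That is, a one-hole context (a zipper) in a list is a pair of lists: the elements before the hole and the elements after it.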

And that's the problem really: mechanical manipulation of symbols lets you divorce the mathematical idea from the underlying "reality". It lets you generalize patterns with complete disregard for what they mean. And, somehow, this produces indubitably useful, nontrivial results. That's the real magic of math, and that's the magic that you don't get in your high school courses. (I think linear algebra is the first subject like this, but I'm just learning it now.)


Also, a point that could be made is that if you were to teach a 'layperson' how to use a command line, given they were willing to learn it -and not pull the "I'm too dumb, this is for you computer people" card- I think they would quickly discover that there are few things they do daily that they couldn't do quicker with it.


The problem is that our education systems, by and large, include effectively no education about computers whatsoever. (Classes with touch-typing lessons/word-processing skills/etc. don't count in my mind.)

As a result, most people have no concept of how a computer really works. With just the tiniest bit of this knowledge, the command-line would seem much less arcane, and the 'I'm too dumb, this is for you computer people' argument wouldn't really apply.

It's shocking how computing is quite literally the technological advance of the millennium, both for industry and for individuals, and yet we somehow think it's acceptable to leave it up to people to learn about it all on their own, and even then only if they care to.


Yes. Also, people are so thrown by complexity that they assume they aren't capable of taking steps toward figuring something out. A CLI really is NOT that hard. They all follow a very simple pattern:

[command] [parameters]

$ ls

Just a one-word command. Shows you directory contents. Easy. Not complicated looking. Let's make it harder.

$ ls -a

An option. List contents of directory, but ALL of them. Still easy.

Building from the ground up is easy. For some reason, people are still scared off. Why? Why are people afraid to learn?


It's easy to lose sight of just how much you must know just to use the command line. When I read your comment I wondered what other options were available, so I tried...

$ ls -h

This is what I've seen with most command line tools, but it doesn't work. So then I tried...

$ ls --help

The output is

> ls: illegal option -- -
> usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]

What? I had to google it. Ok so...

$ man ls

Oh, cool, a list of the options, finally. Now, let's see what ls -a does:

> -a Include directory entries whose names begin with a dot (.).

If I were looking for invisible files I'd have absolutely no idea this is the option I need to use.

Now, I completely understand that the power of the command line lies in its consistency; I'm confident that, at some point, I'm going to be able to grasp its elegance. But, for now, I feel like a foreigner in a new country who knows just a few phrases and struggles greatly to understand what is being said to me. Saying that the command line is easy is like saying that English is easy.


> Saying that the command line is easy is like saying that English is easy.

Well, right now it's as if we didn't teach English (or, rather, any foreign language) in schools and then somehow 'magically' expected people to pick up the single most important foreign language for communicating in modern life.

No, it's not easy, but the basics are really not hard to learn if started early and kept up consistently. Just like any other skill that we teach in schools.


But that's only if you try to learn by prodding and poking it. When I started learning English, I didn't begin by reading the definitions of a word in a dictionary. I read manuals that explained the basics in a way that didn't assume prior knowledge.

In the same way, searching for "how can I learn to use the command line" gives you a bunch of guides - of varying quality, of course - that explain the basic commands in a clearer way.


I think Bret may be targeting laypeople from what I've seen, so he will always come down on the side of enabling growth for larger numbers of people, as opposed to an 'elite' few (comparatively).


FTA: "By comparison, consider literacy. The ability to receive thoughts from a person who is not at the same place or time is a similarly great power. The dramatic social consequences of the rise of literacy are well known."

I'm thinking it might be a bit on the difficult side to express the relationships written equations make easy if all you've got is an animated graph full of dials or a "scrubbing calculator".

Written math is wonderfully dense and meaningful and allows the transmission of a very particular kind of knowledge with the simplest of mediums.


That's a good point. Sort of like when I first discovered regular expressions, I thought "Wow, I can express a whole sentence worth of very specific commands in a few characters." The same goes for Math. I don't really like Math, but I certainly won't deny its power and ability to be expressed in such a compact form.


I'm worried that presenting math with visible, tangible intuition dominant will do more to silo the real knowledge inside the ivory tower.

N-variable calculus isn't something that you can draw a picture of. Lots of (very practical) Linear Algebra is derived from the pure "Linear Transformation" interpretation rather than the "Matrix" one. Stochastic anything (it seems to be all the rage these days) is not made with helpful pictures.

If we train people to do math in a non-abstract way, they won't be as easily able to grapple with the real problems, which are only approachable in the abstract.


I disagree, at least with the claim that pictures can't help with higher-dimensional math. Well, regular pictures may not help so much, but interaction really can. A lot of my understanding of higher-level math depends on being able to "touch" some of the abstract concepts through programming.

When learning something new you need to be able to associate the new concept with things you already understand, and visuals can give you an intuitive sense of how things are behaving (especially over time).

I don't think the visuals could necessarily take the place of training to use the abstract symbols or lines of code any time soon, but I think they are a really necessary step in the right direction to get more people to approach math.


One question: How were those demonstrations programmed? I have a feeling they were developed by "manipulating abstract symbols."

Math doesn't need a new interface, any more than a POSIX compliant shell does. Math is a beautiful and expressive language. Its strength is in communicating difficult, abstract concepts in a language that is manipulable. This manipulation is of the utmost importance in understanding the relationships between different concepts and even in developing new ones. Removing the single strongest aspect --the "abstract symbols"-- of mathematics is to neuter it.


I think part of his goal is to help people that do not yet understand the abstraction have a way to manipulate a system involving it.

Then play can lead to understanding, in a way that is unlikely for 2x+1=5.


The idea of "geometrization" of mathematical education, as opposed to "algebraization", has been discussed widely in the recent decade at least, maybe longer. In the Russian school of mathematics, Vladimir Arnold (http://en.wikipedia.org/wiki/Vladimir_Arnold) was a very strong proponent of the idea. You can read his talk on the subject here: http://pauli.uni-muenster.de/~munsteg/arnold.html


I get the sense that there are two levels of understanding for most mathematics: a superficial level where you just have to manipulate symbols and be comfortable applying rules, and a deeper level where you grasp the underlying relationships and some fundamental 'why' of the problem. I used to think that if you were bad at the first, then the second would be an even tougher challenge. But I'm starting to suspect that actually people who are good at the first often just don't care about the second. And in fact it might be the very need for things to 'make sense' to you on a deeper level that causes discomfort and difficulty at the superficial level. Some people lack curiosity about the deeper relationships, and perversely this might actually help them get along at the superficial level.

For instance, in finance, why is Macaulay duration(1) approximately equal to effective duration? One measures time in years, the other measures sensitivity in percentage form. How the heck do they come out at approximately the same number? For some reason I can't find any literature that's in a rush to explain this relationship... even though the two measures share the name 'duration', so presumably there's some intuitive understanding to be reached.

(1) when the yield is expressed continuously compounding, at least


I had some initial thoughts when I first saw this, I don't think they've changed much since. It'll be interesting to see where it ends up, for sure. I wrote again about this page after reading the somewhat recent Dijkstra lecture about the radical novelty that is programming (and other topics). Here's a slightly modified copy/paste, I'll warn that it kind of wanders after "Things Other People Have Said" so you're invited to stop reading at that point.

While I sympathize with the opening, because many neat things have been made/discovered without the person having any formal math knowledge like what the "U"-looking symbol in an equation stands for:

>"The power to understand and predict the quantities of the world should not be restricted to those with a freakish knack for manipulating abstract symbols."

I heavily disagree with the conclusion:

>"They are unusable in the same way that the UNIX command line is unusable for the vast majority of people. There have been many proposals for how the general public can make more powerful use of computers, but nobody is suggesting we should teach everyone to use the command line. The good proposals are the opposite of that -- design better interfaces, more accessible applications, higher-level abstractions. Represent things visually and tangibly.

>And so it should be with math. Mathematics, as currently practiced, is a command line. We need a better interface."

I think the notion that they're unusable by the vast majority of people because of something fundamental about people is false. At some point in time reading and writing were not done by the vast majority of people; then they came into existence. Even as recently as 500 years ago, the vast majority of the population neither read nor wrote. Along came public education, and that proportion flip-flopped insanely fast, such that the vast majority are now capable of reading and writing (regardless of how good they are at it). Reading and writing were just as much radical novelties as computing; just because something is a radical novelty doesn't mean most humans can't be proficient at it eventually.

I think we should teach everyone to use the command line. Well, not Windows' CMD.EXE, but bash preferably in a gnome-terminal. Here is a short quote expressing why I think this is a good idea:

>Linux supports the notion of a command line or a shell for the same reason that only children read books with only pictures in them. Language, be it English or something else, is the only tool flexible enough to accomplish a sufficiently broad range of tasks. -- Bill Garrett

I think we should teach everyone how to interact with the computer in the most general way--which means programming. Which means commanding the computer. Describing the inverse square law in terms of pictures and intuition isn't going to make it any more of a tool than some other method; people are still going to think of it as something one is taught. The only way to make it seem like a tool is to use it as a tool, and this means programming for some purpose. Maybe a physics simulation. And the beauty of tools is why the software world has exploded with utility despite Dijkstra's depression at programmers' inability to program extremely well. The beauty of tools is that they can be used without understanding the tool, just what the tool is useful for.

The "Things Other People Have Said" at the end is more interesting than the essay.

I wonder what Dijkstra would think of it. My two best guesses are "This is just a continuation of infantilizing everything" and "We did kill math, with programming." I think a lot of people's difficulties with symbolic manipulation are due to the symbols not having immediately interpreted meaning. Dijkstra seems to recommend fixing this by drilling symbol manipulation of the predicate calculus with uninterpreted symbols like "black" and "white". My own approach, which I have been using more and more, is to just use longer variable names in my math, whether written or typed. It really seems like this simple step can be a tremendous aid in understanding what's going on.

Over the past couple of years I've realized just how important sheer memorization can be, as I see almost everyone around me struggle with basic calculus facts, which means they struggle with applying those facts. The latest example, from a few weeks ago in a junior-level stats course with a calc 2 prerequisite (which many people apparently take 2 or 3 times here), was when seemingly no one but me recognized (or bothered to speak up after 20 seconds) that (1+1/x)^x in the limit as x goes to infinity is the definition of e, which we then immediately used with (1 - const/x)^x -> e^(-const). (This is immediately related to the continuous compound interest formula that turns (1+r/n)^(nt) into e^(rt) as n -> infinity, which everyone should have seen multiple times in algebra 2 / pre-calc when learning about logarithms! The two most common examples are money and radioactive decay.)
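(A quick numerical sanity check of that limit, as a minimal sketch; the function and variable names are mine:)

    import math

    def compound(principal, rate, years, periods_per_year):
        # Discrete compounding: principal * (1 + r/n)^(n*t)
        n = periods_per_year
        return principal * (1 + rate / n) ** (n * years)

    # As the number of compounding periods grows, the discrete formula
    # approaches continuous compounding: principal * e^(r*t).
    for n in (1, 12, 365, 10**6):
        print(n, compound(100, 0.05, 10, n))
    print("continuous:", 100 * math.exp(0.05 * 10))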

Sure, that's a memorized fact for me; it's not necessarily intuitive apart from the 1/x being related to logs which are related to e, but come on. In my calc 3 course that I didn't get to waive out of, where it was just me and two others, I was the only one who passed, and when the teacher would try to make the class more interactive by asking us to complete the step instead of just giving a pure lecture, the other two were seemingly incapable of remembering even the most basic derivative/integral pairs like D[sin(x)] = cos(x). A lot of 'education' is students cramming for and regurgitating on a test where the teacher hopes (often in vain) that the cramming resulted in long-term memorization. I do this too, cram and forget, but maybe I just do it less frequently or for less important subjects, or my brain's wired to memorize things more easily than average (but I disfavor hypotheses that suppose unexplained gaps in human ability as largely static). My high school teacher's pre-calc and calc courses had frequent pure memorization tests, such that there wasn't a need to cram elsewhere because that iterative cramming was enough to get long-term memorization (with cache warmups every so often; I did have to breeze through a calculus book in preparation for a waiver exam last year).

A person can only hold so many things in their active, conscious memory at once (some people think it's only 7 plus or minus 2, but that seems like too weird a definition of 'thing' to result in so little; human brains are not CPUs with only registers r0 to r6), so when you start looking at a math proof with x's and y's and r's and Greek letters and other uninterpreted single-character symbols all over the place, it's incredibly easy to get lost. If you start in the middle, and your brain hasn't memorized that "x means this, y means this" for any value of "this", you have to do a conscious lookup and dereference the symbol to some meaning even if the actual math itself is clear. My control systems book uses C and R for output/input (I already forgot which is which), but it's so embedded in my brain that Y is typically output and X is input that I use those instead, as does the teacher. I agree with Dijkstra that practice at manipulating uninterpreted symbols makes it a bit easier, but there are quickly diminishing returns, and ever since I started programming and saw how PHP uses the $var syntax to denote variables I've been thinking "Why hasn't this caught on in math? It makes so many things much clearer!" But it's not so much the "$" in $var (or @var and other sigils in Perl) but the "var". Saving your brain a dereference step is pretty useful.

Single-character symbols (and multi-character symbols) that not only hide tons of meaning behind a dereference (which is naturally how language works) but also suggest a meaning themselves are stupid and promote naturally diseased human thinking. My poster child here is global warming. Every winter you'll hear the same comments: "When's global warming going to hit here?" The poor choice of "global warming" as a variable name that points to a bigger theory has cost it heavily in PR, because humans look only at the phrase instead of what it points to. It's still so bad that people's everyday experiences, which they use to form a subconscious prior probability in global warming as a theory, get weighed against what the name seems to claim ("it's really hot all the time and everywhere I've been!"), and so they look at "global warming" and dismiss any evidence for/against it based purely on the name and how their experience seems to contradict the name. It's like a computer thinking that a pointer address that happens to correspond to 0xADD means it should invoke addition, when it should figure out what 0xADD points to instead, which could be anything.

Another use of long names is with Bayes' theorem in probability. It is only through memorization of the uninterpreted symbols A, B, C themselves that I remember prob(A | B, C) = prob(A | C) * prob(B | A, C) / prob(B | C). But it never made intuitive sense to me, and I never bothered memorizing it until it was expressed as prob(hypothesis | data, bg_info) = prob(hypothesis | bg_info) * prob(data | hypothesis, bg_info) / prob(data | bg_info). (Sometimes data is replaced by model.) The notion that it relates reason with reality, that it encapsulates the process of learning and scientific reasoning, elevates the equation to the status of a tool instead of something you cram for and regurgitate on a test. An immediately useful application for the tool is using Naive Bayes to filter spam.
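(A minimal sketch of that descriptive-name version in code; the names and the toy numbers are mine, purely illustrative:)

    def posterior(prior_hypothesis, likelihood_data_given_hypothesis, prob_data):
        # Bayes' theorem with descriptive names:
        # P(hypothesis | data) = P(hypothesis) * P(data | hypothesis) / P(data)
        return prior_hypothesis * likelihood_data_given_hypothesis / prob_data

    # Made-up spam-filter numbers: 40% of mail is spam, the word "free"
    # appears in 60% of spam and in 5% of non-spam.
    p_spam = 0.4
    p_free_given_spam = 0.6
    p_free = p_spam * p_free_given_spam + (1 - p_spam) * 0.05

    print(posterior(p_spam, p_free_given_spam, p_free))  # P(spam | "free") ~ 0.89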


> (1+1/x)^x limited to infinity is the definition of e

This shouldn’t be “memorized” as a “fact” though. As you point out, it’s the very definition of e. Which is to say, to really understand what the exponential function means implies building up a mental model about lim [n→∞] (1 + x/n)^n and its behavior, interacting with it, connecting it to other functions, seeing what happens when you combine it with other ideas: trying to integrate/differentiate it; noticing how it reacts to fourier transform; relating it to rotations, areas, imaginary numbers; writing it as a taylor expansion or continued fraction, or with a functional equation, or as the solution to a differential equation. Connecting it to derived functions such as e.g. the normal distribution, or sine, or hyperbolic sine. Generalizing it to an operation on complex numbers, or quaternions, or matrices. Thinking about what exponentiation in general really means. Coming up with algorithms for computing e’s decimal expansion or proving that e is irrational and transcendental. Solving problems with it, like to start with, continuously compounded interest (&c. &c. &c.).

No one who had really learned about exponentials in a deep way would easily forget that this is the definition of e, and that has nothing to do with lists of facts or rote memorization.

> Single-character symbols (and multiple character symbols) that not only hide tons of meaning through a dereference, which is naturally how language works, that also suggest a meaning themselves, are stupid and promote natural diseased human thinking.

I think you should try writing out some very difficult, complex math proofs before you make this assertion. Things are bad enough when we pack ideas down. Start expanding them in the middle of the computation, and the steps become almost impossible to see or reason about.

The whole point of assigning things short simple names is that it gives a clear analogy (to past experience w/ the conventions) that provide a shortcut to anticipating how particular types of structures and operations behave. Cramming a matrix or a multivariate function or an operator down into a single symbol helps us to treat that symbol as a black box for the purpose of the proof or problem, which helps us bring our existing mathematical tools and understandings to bear. Sometimes, we run afoul of false impressions, as when we apply intuition about the real numbers to matrices in ways that don’t quite apply, or intuition about metric spaces to general topological spaces, &c. But this is I think an unavoidable cost, and it’s why we strive to be careful and precise in making mathematical arguments.


The way people think and conceptualize ideas varies greatly for different people. Even for things as basic as counting, people do it in different ways.

http://mdzlog.alcor.net/2010/07/12/read-listen-or-comprehend...

> What’s interesting is that the pattern varies from person to person. Feynman shared his discovery with a group of people, one of whom (John Tukey) had a curiously different experience: while counting steadily, he could easily speak aloud, but could not read. Through experimenting and comparing their experiences, it seemed to them that they were using different cognitive processes to accomplish the task of counting time. Feynman was “hearing” the numbers in his head, while Tukey was “seeing” the numbers go by.

So while pushing around abstract symbols might be the way that you really grok things, I don't think this applies to everyone. I have an undergrad math minor, and the only math class that I didn't get an A in and that I really struggled with was Linear Algebra.

I think it mostly had to do with excessive symbolic abstraction. Matrix notation still really troubles me. About 5 years after that Linear Algebra class that I struggled with, I took a computer graphics class for a master's degree. We were talking about the non-invertibility of some operation (I think it was about figuring out how far away something was given a 2D rendering of a 3D scene, where you have a degree of freedom in the scale: everything could be twice as large and twice as far away). And that gave me a really intuitive understanding of the connection between matrix invertibility and full-rank matrices. I am sure that I would have done much better in a combined Linear Algebra/Computer Graphics class than I did in the mostly symbolic and proof-based Linear Algebra class alone.

So be careful in believing that the thought processes that obviously work well for you will also work well for others.


The traditional way of teaching linear algebra is very abstract. But that's OK; or at least, we use linear algebra in our CS programme as a way to teach abstract thinking, which is important for this subject.

I would suggest that someone who has trouble (serious trouble, that is) with linear algebra might also have trouble with other abstract things in computer science, like automata theory or grammars and compilers.


I had the same experience; I struggle to understand problems where I can't see concrete examples of cause and effect. Though I believe the problem was that I simply didn't "bother" to understand because it was "hard".


Single symbols are OK, but why do they have to be denoted by a single letter? There's a reason why using single-letter (or two- or three-letter) variables is discouraged in programming in favour of longer, descriptive names. It's to avoid the moments in which you start wondering, in the middle of reading a function, 'why does q have to be greater than k-1? And what the hell is this Q(p) anyway?'. I really don't understand why both math and physics are so into one-character symbols. Even the OP's Bayes' theorem example shows that it's easier to read a formula when the symbol names self-evaluate to their meaning.


A lot of the time when doing math, you literally don't want to know what a particular symbol stands for - you just want to manipulate the symbols abstractly. Too much interpretation can interfere with the process of pattern recognition that is essential for doing mathematics well. You also see this in programming whenever someone writes

    public interface List<T> { ... }
in Java, or when you write

    data Tree a = Empty | Branch a (Tree a) (Tree a)
in Haskell. It doesn't matter what 'T' and 'a' are, so we use short, one-character representations for them. The fact that you can write Bayes rule as P(A|B) = P(A)P(B|A)/P(B) (where I've even used a zero-character representation for the background information!) expresses the fact that A and B can be arbitrary events, and don't need to have any connection to hypotheses, models or data. It just happens one of the applications of Bayes rule is in fitting scientific hypotheses to data.

This question at math.stackexchange.com goes into a little more detail: http://math.stackexchange.com/questions/24241/why-do-mathema...


Even in your example, multi-character names are used for Branch, Empty, Tree, and List. And those are much more helpful than single-character names would be.

Plus, the math tradition of one-character variable names means that they've had to adopt several different alphabets just to get enough identifier uniqueness (Greek, Hebrew, etc., plus specialized symbols like the real, natural, and integer number-set symbols). Which makes all that stuff a pain to type. And even then, there are still identifier collisions where different sub-disciplines have different conventions for the meaning of a particular character.

It's also annoying because single-character names are impossible to google for.


We would use multi-character names for Branch, Empty and Tree because it matters what those things represent. It would be thoroughly confusing if we instead wrote

    data T a = E | B a (T a) (T a)
However, we don't care what the 'a' represents. It's just a placeholder for an arbitrary piece of information. If we had to write

    data Tree someData = Empty | Branch someData (Tree someData) (Tree someData)
then we have just introduced a lot of unnecessary line noise.

One difference between programming and mathematics is that programming is mostly interpreted in context, where it matters that this double represents elapsed time and that double represents dollars in my checking account. Mathematics, on the other hand, is mostly interpreted out of context. I don't care what a represents; all I care about is that it enjoys the relationship ab = ba with some other arbitrary object b.

If the mantra of programming is "names are important" then the mantra of mathematics might be "names aren't important".


Sure, your original example with single-letter type variables makes sense to me, since those variables could represent anything. I never meant to object to those. I just wanted to point out the fact that your example also included multi-letter names, while mathematics generally does not.

So if you really don't care what a variable represents, then I'd agree that a single-letter name is fine. Given that math is almost universally done with single-letter variable names, are you suggesting that in math you almost never care what a variable represents? This Wikipedia article makes me think otherwise; clearly, variables often have a fairly specific meaning.

http://en.wikipedia.org/wiki/Greek_letters_used_in_mathemati...


OK, this is a good reason, though I don't think it covers all the use cases, in particular on the borderline between math and physics. But yes, I can see how in programming, knowing what the variable really represents is usually helpful, while in math it usually might not be.

Thanks for the link, there are some good thoughts there.


Part of the reason why single-letter variable names persist may be multiplication. There is no good symbol for multiplication; the two most common ones, × and ∙, are easily mistaken for other symbols (x and . respectively) when improperly typeset or handwritten (the distinction is easier with proper typography and neater handwriting, of course).

The way most mathematicians ignore this problem is by throwing two variable names next to each other, so xy is x times y. Obviously, this system is incompatible with multiple-letter variable names. I have unfortunately seen the two concepts mixed, which makes reading an equation like reading ancient Greek or Latin (which lacked spaces and punctuation).

Until there is a commonly accepted, easily distinguishable symbol for multiplication, I don't see single-letter variables going anywhere, or multiple-letter variables being used in general mathematics.


Short variable names are useful when structure is more important than concrete interpretation. For example, consider the above definition of e (aka EulersConstant): e = lim x->inf (1 + 1/x)^x = limit(compoundingSegmentsPerYear, INFINITY, pow(1 + 1/compoundingSegmentsPerYear, compoundingSegmentsPerYear))

We could obfuscate further by replacing the parentheses with XML: <sum> <term>1</term> <term> <quotient>...</quotient> </term> </sum> But the structure is more important, right?


I must disagree with the command-line part. I don't think that the command line is fundamentally more powerful than any other interface.

Why is the command line powerful? Because it offers a large number of utilities that are highly configurable and that can be linked easily.

But you could have just the same expressiveness if the interface were, e.g., a circuit diagram, where you connect configurable commands with lines.

You know why the command line uses text input? Because it is the simplest to implement. The only people who need to know how to use the command line are people who need to use software where it doesn't pay off to make a more intuitive interface.


While I agree that the commandline provides "a large number of utilities" that "can be linked easily", I don't think that's the whole story. While you could certainly design a circuit-diagram-style GUI for building commands (so-called "visual programming"), it would be a lot more tedious than typing, just because there's so much more bandwidth available on a 100+ button, two-handed input device than a two-or-three button one-handed input device. Also, a good deal of efficiency comes from terseness: I can imagine a GUI that would make it simple and visually obvious how the different atoms of a regular expression fit together, but such a GUI would spend a lot of visual bandwidth communicating which combinations are legal and which are absurd. Expressing a regular expression as a string gives you absolutely no such feedback, but if you already know the regex you want to use, it's an awful lot faster to type it.

Lastly, the command line gets a good deal of power from meta-programming: most commands deal with a loosely structured stream of text, and a set of commands is a loosely structured stream of text. Specifically in POSIX environments, primitives like command-substitution ("$()" or backticks) and the xargs command are powerful ways to write commands that construct and execute other commands. If your diagram-based UI contains a set of primitives for launching commands and working with text, you're going to have to add a whole new set of primitives for working with diagrams.


As somebody who spent some time working with LabVIEW, I can safely say that drawing a circuit diagram is more difficult and less intuitive than using text. Now, maybe this is the fault of LabVIEW, but I think it's true in the general case as well.


I once had to use LabVIEW. In general, I agree with you, but I got the feeling that G (the language implemented in LabVIEW) might almost be a decent language if the only IDE for it (and the file format) didn't suck so badly.


I remember Bayes as P(a|b) p(b) = p(a,b) = p(b|a) p(a). It's hardly the only way it's linked up in my personal mind-map, of course, but it's how I first memorized it.

I only skimmed this comment, I'm afraid.


This is so wrong... The point of math is that ideas can be generalized and re-used in previously unexpected places -- your ideas remind me of the Middle Ages, when people learned mnemonic poems to remember the "law of proportion", treating it like a precious magic gizmo, perfectly sure they wouldn't be able to recreate it once forgotten.


Have you read the article? One of his arguments is that math, due to its interface, is not about ideas but about abstract symbols and "symbol-pushing tricks". For example, many people know how to use the chain rule when they differentiate a function, without having the slightest grasp of what is going on. How is this any different from "learning a mnemonic poem to remember the law of proportion"?


I can't help but think this is completely missing the point.

TED has had people cover this a number of times, but the problem with math isn't that the _syntax is too hard to parse_, it's that the _problems in math class are stupid_.

"The bucket is depth X, diameter D and full of water. When you open the tap, how long will it take to drain if the water drains out at rate Z?"

I've never had to solve a problem like this in my entire life (and if I were in a job where I _did_ have to, the complexity of a real-life situation would mean I would still have to learn domain-specific tools to solve the problem (e.g. how big is the air intake? Does that limit the rate of flow? etc.)).

Tangible problems in the real world require mathematical models (often probabilistic models) to solve them.

How do you take a real world problem, break it down into bits, and then use the mathematical tools available to solve them?

By creating a model, then guessing what the rules that govern that model are, then comparing the model to reality and refining the rules; and when you can't figure out the rules, that's when it's time to whip out the textbook and say, well, guess what, someone has had this problem before and this is how they solved it.

Teaching kids how to create models and reach out into the mathematical library available to them when they need it would be vastly more helpful than trying to create more abstract alternative ways of understanding obscure math concepts that will never be relevant to them.

I've never been more frustrated than I was the other night when I was at a party and a boilermaker (who incidentally earns 3x what I do; damn mining boom) was telling me about all the cool math he's learnt since he started his job. It's all geometry and rate-of-flow differentials, and he said "why did I have to learn matrices at school? total waste of time. they should have been teaching us useful things".


I find the syntax of math very hard to parse (and I write and use parsers as part of my job).

For people who "get" math, I agree it's very difficult to understand how another, obviously intelligent person can find math difficult to understand and use – especially someone who has put in substantial, sustained effort and has zero problems with the command line, programming, algorithms, parsers and just computer science in general. It doesn't make any sense.

Except they do exist (I'm living proof).

Frankly, I'm baffled by the problem. I've read numerous books on math, taken tons of courses, and spent quite a bit of time trying to figure out why math, as a tool, is out of reach to me.

Sure, I can apply the "rules" at a purely syntactic level, but a good example of where math-like thinking is needed as a tool is with a language like Haskell, which I also find completely opposed to how I think about and see the world. I wish I knew why, because I'd like to use it. :(

So while I think the presentation is at least part of the problem, there's something about mathematics that needs to "click" before you can really make use of it, and that hasn't happened for me yet, despite literally _decades_ of trying.

If anyone has pointers for books, I'll be checking back here for comments.


> "why did I have to learn matrices at school? total waste of time.(...)"

Unless you're doing game programming as a hobby, or encounter problems that can be modeled as a set of equations, which can then be easily solved using a few matrix tricks, giving you lots of fun and $$$.

But I do agree with your sentiment; even though math can be fun on its own, it's easier (and probably better) to get people interested in it by showing how to use it to solve complex problems. After all, math is just formalized, applied rational thinking.


The problem with what you write is that it is very hard, if not impossible, to understand these highly complex probabilistic models if you have not first practiced with simpler ones.


OK so it takes years to learn Japanese but it is the only way to truly enjoy haikus...

The language of modern maths is what has enabled the physics at the LHC. You cannot get the latter without the former, hard as you may try.


It will be hard to come up with an accessible general framework to express general problem situations without resorting to either (a) an explosion of templates for solving the N-most-common kinds of problems (the browsing through which will be worse than learning the math and the programming); (b) a set of elaborate but limiting special-purpose programs (much like iPhone apps that might help, for example, a metal welder to choose stock feed rate, gas mix/flow, and electric current depending on metal type and thickness to be joined; or mortgage calculators).

Worse yet, it doesn't take much for interesting problems to get into the land of over- or under-constraint. In an under-constrained problem there are a multitude of solutions forming their own K-dimensional space, and then one likely should apply some secondary optimization criteria to determine the best solution. In over-constrained problems there is no exact solution, but again one needs to impose some optimization to get something "close enough" to the desired criteria, according to some norm. Explaining the need for optimization, letting the user explore the solution space, and having them express these optimization criteria will be tough.
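(A minimal sketch of the over-constrained case with made-up numbers, just to illustrate "impose some optimization to get something close enough":)

    import numpy as np

    # Over-constrained: three equations, two unknowns, no exact solution.
    #   a + b  = 1
    #   a - b  = 0
    #   a + 2b = 2
    A = np.array([[1.0,  1.0],
                  [1.0, -1.0],
                  [1.0,  2.0]])
    rhs = np.array([1.0, 0.0, 2.0])

    # Least squares imposes the optimization criterion: minimize the
    # Euclidean norm of the residual A @ x - rhs.
    solution, residual, rank, _ = np.linalg.lstsq(A, rhs, rcond=None)
    print(solution, residual)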

A Google search for "geometric constraint solver" leads to this paper: http://www.cs.purdue.edu/homes/cmh/distribution/papers/Const... -- what appears to be a lot of work just in helping people/machines solve geometric problems (though surely more widely applicable when taken in the abstract). Coming up with even more general "hands-on" explorers/solvers will be a lot of work.


For the more mathematically inclined, there's a really nice little book, "The Computer as Crucible" by Borwein and Devlin, about experimental math. It makes a really convincing case, that I think couldn't have been made 15 years ago, that computers really have something essential to contribute to mathematics now. I think Borwein has written a number of articles about the same, and Devlin had a column in the AMS Notices about computer math.


Tell me if I'm wrong, but I have this theory that the whole of math could be expressed on top of a single basic operation, which is addition.

For example 2 * 3 is (2 + 2 + 2).

  Or sin(7) is (7 * (1 + -0.1666666664*(7*7) + 0.0083333315*(7*7)*(7*7)
  + -0.0001984090*(7*7)*(7*7)*(7*7) + 0.0000027526*(7*7)*(7*7)*(7*7)*(7*7)
  + -0.0000000239*(7*7)*(7*7)*(7*7)*(7*7)*(7*7)))
(the multiply operator is left in for brevity, but can easily be replaced by addition)
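(A minimal sketch of that replacement for the non-negative-integer case; the function name is mine:)

    def multiply(a, times):
        # Multiply a by a non-negative integer using only addition.
        total = 0
        for _ in range(times):
            total += a
        return total

    print(multiply(2, 3))   # 6, i.e. 2 + 2 + 2
    print(multiply(7, 7))   # 49, the (7*7) terms in the sin example above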

If my assumption is true, then Math is really just an arcane and archaic form of a computer-like language.

And at the same time, it's possible that the operations denoted by math's various symbols, however arcane, can be understood much better than we understand them from classic texts (especially with the use of modern tech, animations, etc.), but those who learned them haven't spent enough effort to create more visual explanations, based on which others, and also they themselves, could understand those operations better, and then be able to build on top of them and even evolve math further. (I know that I don't know what I don't know, but I have a tendency to assume that I know a lot more than I really do, and thus I don't explore the known deeper/further.)


It sounds like you're thinking of (or working towards) Peano Arithmetic¹, which (to oversimplify) basically starts with "0", "1", "=" and "+", and works from there. Whether that's "enough" rather depends on what you're trying to achieve, though. There are subgenres of mathematics that don't necessarily involve numbers at all, like geometry and topology. It would be difficult to reduce them to addition.

¹: http://en.wikipedia.org/wiki/Peano_axioms
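(A minimal sketch of building addition and multiplication from just zero and a successor operation, in the Peano spirit; the representation is mine and purely illustrative:)

    # Represent a natural number n as n nested applications of the successor.
    ZERO = None

    def succ(n):
        return ("S", n)

    def add(a, b):
        # a + 0 = a;  a + S(b) = S(a + b)
        if b is ZERO:
            return a
        return succ(add(a, b[1]))

    def mul(a, b):
        # a * 0 = 0;  a * S(b) = a * b + a
        if b is ZERO:
            return ZERO
        return add(mul(a, b[1]), a)

    def to_int(n):
        count = 0
        while n is not ZERO:
            count, n = count + 1, n[1]
        return count

    two = succ(succ(ZERO))
    three = succ(succ(succ(ZERO)))
    print(to_int(add(two, three)), to_int(mul(two, three)))   # 5 6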


Addition is a function and so functions would be a more basic concept. But functions are sets and sets are a more basic concept. Thus we should study sets. Everything comes from sets. Replace addition with sets and your first sentence would be closer to the truth.

I strongly disagree with your last paragraph though.


And you can take sets as something built on top of categories (specifically, topoi), so category theory is more foundational than set theory.

http://plato.stanford.edu/entries/category-theory/

http://sakharov.net/foundation.html

http://en.wikipedia.org/wiki/Topos


So you think the whole of math is just numerical computation? You think there is nothing more to math than computing values of certain classes of functions? This is what you get when your only exposure to so-called higher math is calculus and perhaps a few courses on differential equations, with an emphasis on numerical methods.

There are areas of math that don't fit this simplistic view, things that don't have "analytic" in their name, like topology, non-Euclidean geometry, mathematical logic, etc. Algebra in general is all about studying the structure that ensues when you have a bunch of objects and operations you can do with them with certain properties. This structure doesn't depend on the nature of the object or operation - for example, the rotation group, etc. Then there are things like measure theory, fractals, and things like non-commutative operator algebras that are useful for applications in quantum mechanics. All in all, a huge chunk of math is not about computation. Bigger, in fact, than the part about actual computation, esp. computation of real numbers.


I had a similar reaction after reading the beginning of this article. I feel as though modern algebra uses symbolic manipulation as a means of qualitative discovery and understanding.


> When most people speak of Math, what they have in mind is more its mechanism than its essence. This "Math" consists of assigning meaning to a set of symbols, blindly shuffling around these symbols according to arcane rules, and then interpreting a meaning from the shuffled result. The process is not unlike casting lots.

Wow, sounds like someone doesn't really understand math.


Bret Victor is very accomplished. I'm pretty sure he understands math. Personally speaking, if Bret Victor had a different opinion about something than me, then I'd pay careful attention, because I'm probably missing something.


jpdoctor is a bomb-thrower, your words fall on deaf ears.


That's precisely his point. The "UI" for math obfuscates what it's actually about. Math is full of purity and simplicity, but the notation and algorithms we start out teaching kids are anything but.


This is interesting, I especially like the Scrubbing Calculator. While I am not sure how it would handle math above Algebra 1, I think this could be great for building intuition in Pre-Algebra and Algebra 1, which are the most important years of math education, in my opinion.

I tutor some 11-14 year-olds who are studying these sorts of things, and it's incredible how many of them can look at something like 100/0.08 and have absolutely NO idea what neighborhood the answer should be in. (For example, they might accidentally multiply, get 8, and never think twice that the answer couldn't possibly be 8 because it has to be above 100.) Something like this might be really helpful in building intuition about the relationships between numbers.
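(Spelling out one way to see the neighborhood, for what it's worth:)

    100 / 0.08 = 100 / (8/100) = 100 * (100/8) = 10000 / 8 = 1250

Dividing by a number smaller than 1 has to make the result bigger than what you started with.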


Oh FFS. Everything has a barrier to entry. Just work with it until you get over it.

People want instant gratification these days. It takes time to understand stuff.

The Internet (and crappy TV about popstars and shit) has led to people having completely unrealistic expectations of everything.


> Oh FFS. Everything has a barrier to entry. Just work with it until you get over it.

The problem is that most people's response is "I'd rather give up on it than get over it", so we wind up with a math-illiterate society. If we can make it easier to be math literate, then we encourage a better world. (Your response, which is quite correct, reminds me of http://www.marco.org/2012/02/25/right-vs-pragmatic.)


You can use precisely the same arguments to denigrate demotic script versus hieroglyphs, and say that anyone who thinks writing complex pictures is too difficult a way to express themselves is just whining.

I'm not taking a position on the argument expressed in the article; I'm merely noting that your argument is so infinitely applicable (even recursively applicable, back every step of every path of technological and notational improvement) that it is effectively useless as a way to determine the best path to follow.


I see more people agree with you than with me, despite the fact upvotes aren't meant to be used that way. HN can be trying at times, can't it?


Indeed - but that is the nature of the Internet. There should always be a counterpoint to a good view.


His "interactive exploration of a dynamical system" looks like some very sweet software.

It's not an entirely new concept though. Modeling tools like Vensim and Stella do something very similar, modeling differential equations with "stock and flow" diagrams. Modeling problems with tools like these is a pretty active field, with applications in economics, ecology, and engineering. There are a fair number of books about it.

Vensim has a free "personal" version, if anyone wants to play around with this stuff. It's definitely not as elegant as Bret Victor's demo, but might be better for more complicated systems (depending of course on how much Bret left out of the demo).


I pretty much read everything Bret Victor produces these days given he is responsible for one of the most insightful presentations I've ever seen: http://vimeo.com/36579366


Thanks, I did not realize it was the same guy, I still have that video in watch-later.


Thank you for this link. I have a feeling it will change my life.


The best talk I've ever seen.



Are there any good apps for the iPad that let you manipulate variables and equations in a tactile manner? I have Ovium, which is a calculator where you put your numbers in bubbles and then connect them however you want with different operators. I'd like something where you could use the value of a slider to change the value of a variable.


Quite aside from any of the main points being made here, the guy is dead on about Strogatz's book. I had the same reaction to that book as I did to Lipovača's Learn You A Haskell - it's a great accessible resource to learn something new and interesting (if perhaps not immediately useful). Would appeal to the HN crowd I imagine.


I really like the Soulver program that he mentions on the page for the "Scrubbing Calculator". I found a similar Windows/.NET program called OpalCalc: http://www.skytopia.com/software/opalcalc/

Time to do a web edition? Surely it already exists. Somebody help me out :P


I had thought the movement to kill math ended long ago with this video:

http://www.youtube.com/watch?v=MfgX0fyNeLc


No, it was done in by 2: http://dtecomic.com/?n=2.


He's identified a real problem and is going in the right direction.

I don't think we need to go quite as far though to make an improvement.

A simple observation that I find amazing more people don't see is this:

Mathematics is obfuscated code that doesn't compile or run.

One-letter identifiers and odd symbols rather than useful textual descriptions.

No way to drill down into the meaning of the symbols.

With code you can take a higher-level function and look at the code for the functions it uses, then look at the implementation of the language, then look at the assembly language, etc. No such thing in mathematics.


Fix your scrolling; I think your message will benefit from increased usability.


Someone doesn't know the formal definition of computation. This is all there is to it really.



