Equations True Computer Science Geeks Should (at Least Pretend to) Know (elegantcoding.com)
288 points by gmoes on Nov 28, 2011 | 96 comments



Well, after 6 years of professional software engineering since finishing my BS in CS, the only things on that list I've come anywhere close to using are the natural join and De Morgan's laws.

I think this is a pretty silly post, to be honest. CS covers so much, and every time I see a list of "things you should know", I have to resist the urge to roll my eyes and ignore it. But then I read it, and inevitably roll my eyes anyway.


At two jobs now I've had this conversation with talented comp sci grads, and we all agreed that 99% of programming is drudgery, and that the 1% which requires you to turn off your music and sit in dead silence is rare and still not as challenging as comp sci syllabuses (syllabi?) would suggest.

An unfortunate side-effect is that many people who could have become highly productive mid-level programmers are scared off because they don't know advanced math.


I disagree. Programming is math. Highly advanced math, in fact. It's just a different type of math. And the 11 equations in the OP's article barely touch on what CS is about. There is far more to it than that.


No. Programming is logic, and logic is a superset of math. What you need to make good programs isn't some obscure, arcane language that the few who actually understand it (mathematicians) assert to be the universal answer to everything, yet refuse to make more accessible (when it could be); what you need is logical thinking and the ability to think in abstractions.

Yes, studying math will teach you this, because - as mentioned - math is a subset of logic. But studying math is far from the only way to gain these abilities, or to obtain a deeper understanding of CS problems.

This opinion alone will get (and has got) me branded as a heretic amongst many people, and that alone proves how deeply we are stuck in this tar-pit. If we want to get more people to be interested in and eventually deeply understand CS, we need to get rid of the math barrier. Math is a tool used by computer scientists, and not the fundamental basis of our field. Limiting yourself to a single tool is bad, and you don't need abstract algebra or calculus to understand this.


This opinion alone will get (and has got) me branded as a heretic amongst many people, and that alone proves how deeply we are stuck in this tar-pit.

Caution is advised when interpreting opposition to a position as justification for the position. This kind of tenuous logic is frequently used to justify continued oppression in e.g. obscure religious communities ("They all hate us, so we must be right!"). I'm not saying your position is invalid (nor as a non-mathematician am I qualified to do so), just that it could be better supported.


True, I shouldn't argue like this. Thanks for pointing out that fallacy.

What I've been trying to convey is that I've come with what I think is a sensible position, and often got responses that ranged from fervent opposition to (very occasionally) verbally violent backlash. Maybe it was just me incorrectly stating my point, though.


Why do you claim logic is a superset of math? My understanding of Gödel's incompleteness theorems is they show you can't reduce arithmetic to a consistent & complete logic.


Well, Gödel had to use logic to prove this. I guess what he meant is that logic allows mathematicians to talk about mathematics at a meta level. I wouldn't call it a superset of math, since it's still very much mathematics.

http://en.wikipedia.org/wiki/Metamathematics


Do you have an example of mathematics that could be made more accessible but hasn't been?


A more recent example I've come across is n-dimensional spaces. I realized that most people have trouble making sense of the concept because they've been stuck thinking mathematically.

I think of points in an n-dimensional space as objects holding n different types of information, which is a practical approach, and it has lit a candle for some other people I've explained it to this way as well.

For example, images are often stored as four-dimensional structures, with the dimensions representing height, width, depth and color value. Every point in that structure is a pixel of the image. You can easily expand this to any number of dimensions, and it still makes sense - you're just adding information (e.g., add a fifth dimension for time, and you've got yourself a sequence of images - a simple video).
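
As a rough illustration of that way of thinking, a minimal sketch in Python with NumPy (the array shape and axis order here are made up just for the example):

  import numpy as np

  # A hypothetical "video": a 5-dimensional array with axes
  # (time, height, width, depth, color channel).
  video = np.zeros((30, 64, 64, 1, 3))

  # Each index tuple is a "point" in that 5-dimensional space,
  # i.e. five different types of information identifying one value.
  video[0, 10, 20, 0, 2] = 255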

My beef with the way math is often approached is that on the one hand, it's asserted that math is the universal solution to all problems - including practical ones - but on the other hand, more often than not math is removed so far from reality that it's nearly impossible to make any practical use of it.


The way you think of it is precisely the mathematical way of thinking about n-dimensions.

Mathematically, an n-dimensional space is R^n, that is, the Cartesian product of the real numbers (R) with itself n times. Each copy of R lets you specify a number independently of all the others.

Mathematics is just careful abstraction. If you are abstracting away certain aspects of your problem and carefully making deductions about what is left, you are doing mathematics. As you point out, this can be taken too far but any tool can be used poorly in the hands of a novice.


>Mathematically, an n-dimensional space is R^n, that is, the Cartesian product of the real numbers (R) with itself n times. Each copy of R lets you specify a number independently of all the others.

See, that's exactly what I'm talking about. This gibberish makes sense to no one except a mathematician. The explanation I've given - while maybe the same in essence - conveys the meaning in a comprehensible way that's not completely removed from reality.

My point is that mathematicians often neither realize that they are talking gibberish that makes no sense to anyone but themselves, nor accept that math without proper context is devoid of any meaning (outside of math, that is).

NOTE: I would like to clear up that I'm not attacking math itself. I neither think math for math's sake is bad, nor do I think math is useless. I respect math as a self-contained field. The problems I'm pointing out are all happening where there is application within other fields - such as CS.


I certainly agree that an intuitive understanding of a concept can be helpful as a guide and emotionally satisfying. However, I don't think that it is at all sufficient and it is precisely "mathematical gibberish" which resolves the problem.

An isolated concept is worthless. It is only when you are able to apply it by reasoning with it that it becomes valuable. The problem with intuitive explanations is that they don't nail down enough details to allow a person to reason with them.

"I think of points in an n-dimensional space as objects holding n different types of information"

Imagine walking into a room and drawing a straight line on the floor. What is the dimension of that line?

Answer = One dimensional. Proof: We can describe each point by one type of information. Point = (Distance of that point from the start of the line.)

Answer = Two dimensional. Proof. We can describe each point by two types of information. Point = (Distance of that point from the East wall, Distance of the point from the North wall.)

Answer = Three dimensional. Proof: We can describe each point by three types of information. Point = (Distance of that point from the East wall, Distance of the point from the North wall, Distance of that point from the roof.)

Answer = Four dimensional....

So by the intuitive explanation we can make this single line any dimension that we want.

This isn't just a problem within mathematics. For a simple programming example:

Question: How does a computer program work?

Intuitive Answer: You give the computer a list of instructions for it to carry out.

Result: The guy opens up notepad and types in "Make a computer game where I walk around shooting Zombies."


>So by the intuitive explanation we can make this single line any dimension that we want.

And if this makes sense in the given context - sure, why not?

You seem to be missing my point. What I'm saying is that it's useless to have totally generalized abstractions (outside of pure math) since more often than not, they are so far removed from reality that most people can no longer make any connection to use cases.

My entire argument is that there's a heavy communication failure between mathematicians and scientists of every other field where math is used as a tool. Sure, it's convenient for mathematicians to be able to use shorthand gibberish to talk to other mathematicians. It doesn't justify pushing this jargon on other fields.

Besides which, convenience is no excuse for making something hard to understand. Sure it's convenient to name all your variables in a program a, b, c etc but you're going to get lynched by any programmer that tries to read your code later, including yourself.

When it comes to a point where gibberish becomes the only way to explain mathematical abstractions, then you should step back and ask yourself "where the hell did this go wrong?".

>This isn't just a problem within mathematics. For a simple programming example:[...]

I believe in giving simple explanations and expanding them whenever there's a loophole that needs to be fixed. You don't make programs that cover every single niche use case, either (if you don't have to, that is). That's the problem with math - the generalizations, while useful in math itself, are sheer overkill in many situations outside of math.

Lastly, sorry taking this out of order, but:

>An isolated concept is worthless.

So is a generalized abstraction without any context. I'm saying that the sweet spot is somewhere in between for most people to understand and apply concepts, and that it's better to generalize upwards from reality and actual use cases instead of starting utterly removed from reality and trying to apply the generalizations downwards.


The way you are talking about jargon in mathematics suggests you have a limited experience of what mathematicians do. Here's an illustrative example of mathematics as done by mathematicians.

~~~~~~~~~~~

Define: An integer n is `even' if there exists some integer m such that n = 2m.

Theorem: For any two even integers n and a, the sum n + a is an even integer.

Proof: Since n and a are even there exist integers m and b such that n = 2m and a = 2b. Now,

  n + a = 2m + 2b; by assumption
        = 2(m+b); by the distributive property
        = 2z; for the integer z = m+b
Therefore there exists some integer z such that n+a = 2z. Hence n+a is even.

~~~~~~~~

That is to say, in mathematics we introduce some definitions/gibberish/jargon (in this case `even') and then we use logic to reason about the implication of our choice of definition (the sum of even integers being even.)

The important thing is that the definition plays an essential role; definitions are the building blocks on which all of mathematics operates. To emphasize: if you strip away the definitions you literally have nothing to build on - we can't apply logic to nothing and arrive at something.

This leads to the point I made in my earlier comment: the reason we need definitions rather than intuitive explanations is that you can't logically reason about a concept unless you nail down the relevant details of what that concept is exactly. Without them, we can't do the 'prove the theorem' part of the above example.

So how does mathematics then fit into application?

Guy 1: In this basket I have as many stones as I have fingers and in that basket I have as many stones as I have toes. For each basket I can pair up the stones so that each has a partner. Will this still be the case if I combine the stones from each basket?

Mathematician: Well, let's represent the number of stones in each basket with the integer 10. Pairing stones corresponds to the integer being even and combining the baskets corresponds to adding the two integers. I note that 10 is even since 10 = 2x5 and so I can apply my theorem to conclude that the sum 10+10 is even. Thus I conclude that when you combine the baskets you will still be able to pair each stone with a partner.

Guy 1: Wait, wait, wait! I don't understand this 'even' jargon. Do it again without the jargon.

Mathematician: The definition of 'even' was central to my whole process. Without it I can't even set up the problem, let alone apply the theorem used to justify the answer. I could perhaps just give you an answer, "MATHEMATICS SAYS YES", but then you wouldn't be able to repeat it yourself for different numbers of stones.

If the above is understood then I can quickly address the claims you have made.

> it's better to generalize upwards from reality and actual use cases instead of starting utterly removed from reality and trying to apply the generalizations downwards.

Mathematics is generalisation utterly removed from reality. This is why we have "Adding integers" and not "Adding together collections of dogs" and "Adding together collections of apples" and "Adding together collections of hats" and ...

> Sure, it's convenient for a mathematicians to be able to use shorthand gibberish to talk to other mathematicians.

Mathematics is the practice of defining new gibberish and then reasoning about that gibberish. The gibberish isn't a shorthand for something, it is the thing.

> It doesn't justify pushing this jargon on other fields.

Mathematics is definitions/gibberish/jargon. Applying mathematics to a field thus means applying definitions/gibberish/jargon to that field.

> When it comes to a point where gibberish becomes the only way to explain mathematical abstractions, then you should step back and ask yourself "where the hell did this go wrong?".

At least since Euclid's formulation of geometry.

>> So by the intuitive explanation we can make this single line any dimension that we want.

> And if this makes sense in the given context - sure, why not?

The problem is that it doesn't. Your explanation of an n-dimensional space is more a description of the larger space in which our space of interest is embedded.

In all instances the space (the line) remains unchanged; the only thing which changes is how we are describing it. For the dimension of the space to be a property of the space, it needs to ignore how we choose to describe it.


Thank you! I've had this same feeling for a long time. It strikes me as odd that probably a lot of the same people that would find it unacceptable to write cryptic C code with one-letter variable names, find typical math notation/jargon to be completely fine and legible. If code is so important that we go out of our way to make it comprehensible to future maintainers, why don't we have the same feelings about math? (which I would argue is much more important for people to comprehend, seeing as how it's the core foundation of all of science)

Personally, I'm really into physics, so I've always really wanted to like math so that I could delve deeper into the subject more easily. It's not that I'm terribly bad at it, but most math texts are so obscenely terse and cryptic that it makes you wonder whether the authors are actually even trying to teach people about what they're talking about...


I was writing it mathematically as I thought you would appreciate seeing how your intuitive grasp of n-dimensions is described mathematically.

The terminology (gibberish) I used is nothing more than a convenient but very precise shorthand for communicating abstractions. The reason these particular abstractions are given special names by mathematicians is that they occur over and over again. There is a lot of conceptual leverage to be gained if you are able to see the same patterns in seemingly unrelated problems.

As Poincaré put it, "Mathematics is the art of giving the same name to different things". I'd argue it's worth knowing some of those names.


That's totally the standard way that all mathematicians think about n-dimensional spaces.

I've never heard a mathematician claim math is the universal solution to all problems... sounds like you had a bad high school experience. Do you need a math-hug? :)


>sounds like you had a bad high school experience.

Not exactly high school. I'm a university student in Germany.

But yes, it's pretty much a bad experience. I've seen quite a lot of other students who are otherwise bright people struggle with math simply because they aren't good at learning to interpret arcane sequences of symbols that pretty much represent nothing relevant to reality. Math the way it is taught here is essentially useless for 95% of all CS students.


I think you are obsessing about minutiae. Perhaps mathematical instruction is very different in Germany than in the United States, or perhaps our brains are simply incompatible, but I've never found symbology to significantly affect my understanding.

Symbols are just convenient and useful abbreviations. In my experience, they facilitate both speed and precision in mathematical work, far and above what is possible in a natural language like English. There is no particular reason why certain concepts are associated with certain symbols apart from convention. Any decent mathematician should be able to operate with any set of symbols, although not necessarily with equal proficiency.

I won't deny that many mathematicians have a hard time expressing themselves in natural-language speech and writing. Perhaps for them the symbols have become a crutch. But I still think you have erected a mental block around an ultimately inconsequential aspect of mathematics, which is needlessly impeding your understanding.

That having been said, I think there are definitely bad ways to approach the use of symbols in instruction. Concepts and their related symbols should be introduced incrementally, and you should always have ample opportunity to practice with the ones you have just learned before having to learn new ones. I think that a well rounded mathematics curriculum should include a course with a significant focus on understanding and manipulating common mathematical and logical techniques, with mathematical language being a significant if not necessary part thereof.


>I won't deny that many mathematicians have a hard time expressing themselves in natural-language speech and writing.

Yes, that's exactly what I've been getting at. As I said in other comments, I respect math as a field. What I'm pointing out is that there's a heavy communication problem between math and other fields, and this is something that needs to be solved.


I studied math and CS in Germany. The mathematicians were usually adept at CS, but the computer scientists didn't--with some exceptions of course--even get what math was about at all.

They should probably stop trying to dumb down the math for computer scientists. Math is hard, and pretending otherwise won't save one from reality.


I don't think any mathematician will assert that math is a practical subject or the universal solution.

In math, we just define n-dimensional Euclidean space, but no implication is made about the physical world.


You can disagree all you want but it is pretty much objective fact that in practice, especially in the world of business, 99% of programming is not advanced at all. CS, which is not just programming, may be advanced, but you do not see the advanced concepts emerging very often in day-to-day programming.


However, the real question is whether there's no advanced stuff because of the domain or because of the people who work there. If enough people working at a company don't like advanced math, it means that they will not use it, and if you use it, your code will be harder for them to maintain, so you don't use it either. I suspect if everybody had a better grasp of these ideas, you would see them used more often.


There are a ton of CS-trained programmers who will include advanced stuff whether the domain requires it or not. It's probably hard to have student loans hanging over your head and not have a reason to use most of the education you paid for.


Programming is a branch of math.


For many years, computers sucked at math. In many cases the job of a programmer was not to make the computer do complicated math, but to come up with clever ways to get out of doing complicated math.

That's not really true anymore, now that so many technologies from gaming to communications rely on concepts and algorithms from higher mathematics. While I'm skeptical about the importance of the particular equations on the author's list, I don't think it's reasonable any longer for software devs to plead ignorance (your humble correspondent included).


Fair enough, but note that the list doesn't say "true software engineers should know" but "true computer science geeks should know".


My response to this is that I am a "true computer science geek". Why isn't P=NP (or P!=NP) on that list? Why not more about set theory? Why not proof by induction? How about lambda calculus?

There is far more to CS than what that list implies. I realize that it's not supposed to be all-encompassing, but seriously, saying it's "something all CS geeks should know" is, in my mind, unfair - I have little understanding of most of the OP's article, but still consider myself to have a very solid understanding of CS, based on my level of education.

I don't need to know most of the things in the article. There are plenty of other things in CS, however, that I do need to know.

I think putting together lists like this does a disservice to people just about to enter the field. You can't possibly break down CS into a list of 11 equations.


Yeah, you're right. "10 best..." lists are usually B.S. anyway.


y-combinator is on there representing lambda calculus at least ...


Good point, I must have missed that one.


I assume you overlooked O(n). You've probably used Bayes' theorem and the binomial coefficient, if only indirectly.

But I tend to agree with you: many of these are skewed by the author's perspective on what should be considered foundational. Someone doing webdev will not ever calculate eigenvectors. Someone doing machine learning will not be able to avoid them.
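
For what it's worth, a minimal sketch of Bayes' theorem in action, in Python (the numbers are invented purely to show the mechanics):

  # P(A|B) = P(B|A) * P(A) / P(B), with P(B) expanded by total probability
  p_disease = 0.01             # prior P(A)
  p_pos_given_disease = 0.95   # P(B|A)
  p_pos_given_healthy = 0.05   # false positive rate

  p_pos = (p_pos_given_disease * p_disease
           + p_pos_given_healthy * (1 - p_disease))

  p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
  print(p_disease_given_pos)   # roughly 0.16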


I think it's important in these situations not to confuse Computer Science with what we, as "software engineers" or "applications developers" or "programmers", do for 99.9% of our day. For most of us, what we do is as much Computer Science as the use of a protractor is geometry.


Since it seems like the multicore thing is here to stay, may I suggest that if you are doing anything parallel you should know about Amdahl's law: http://en.wikipedia.org/wiki/Parallel_computing#Amdahl.27s_l...
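
A minimal sketch of what the law says, assuming the usual formulation (p is the parallelizable fraction of the work, n the number of cores):

  def amdahl_speedup(p, n):
      # Upper bound on speedup when a fraction p of the work
      # parallelizes perfectly over n cores.
      return 1.0 / ((1.0 - p) + p / n)

  # Even with effectively unlimited cores, a 5% serial portion caps you at 20x:
  print(amdahl_speedup(0.95, 8))      # ~5.9x
  print(amdahl_speedup(0.95, 10**6))  # ~20x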


Huh, that's interesting. I never considered the similarities between parallel processing and people management until today.



Relevant to Amdahl's Law (being a more general form of it) and to queueing theory is: http://en.wikipedia.org/wiki/Neil_Gunther#Universal_Law_of_C...


I would take issue with the pumping lemma for regular languages here. The formal statement of the lemma is outrageously complicated, which makes it really difficult to understand and use. The only good justification I’ve heard for including this result in a CS curriculum is that it’s a good warm-up for the pumping lemma for context-free languages, which is more useful.

If you actually ever find yourself needing to show that a particular language is non-regular, it’s almost always clearer to use an ad hoc argument or appeal to the Myhill-Nerode theorem. Actually the latter is much better, because Myhill-Nerode completely characterises the regular languages, whereas there are non-regular languages that pass the pumping lemma test.[1]
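
For what it's worth, the standard textbook illustration of the Myhill-Nerode approach: to show that L = {0^n 1^n : n >= 0} is not regular, note that the prefixes 0, 00, 000, ... are pairwise distinguishable with respect to L, since 0^i 1^i is in L but 0^j 1^i is not when i != j. Infinitely many pairwise distinguishable prefixes means infinitely many equivalence classes, so by Myhill-Nerode no finite automaton can recognise L.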

1. http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_langu...


>there are non-regular languages that pass the pumping lemma test

Can you give an example of that? You may be right, but I'm having a hard time coming up with a language that is not regular but fulfils the pumping lemma.


Sure, there’s an example in the Wikipedia article linked from the footnote in my comment above: the language consisting of all strings over the alphabet {0,1,2,3} with a substring of length 3 including a duplicate character, as well as all strings over this alphabet where precisely 1/7 of the string's characters are 3's.

As you can see, this example is a little complicated – and I don’t know a simpler one, so I’m not surprised you struggled to just come up with an example off the top of your head.


It'd be nice if the author actually stated why these algorithms are important to know. Give a use case, rather than "this comes up sometimes."


Agreed, this article was a very poor read from the perspective of a programmer (yours truly) having a working familiarity with only two or three of these concepts.

Given the article's slapdash, loosely coherent English and tendency to pad out each equation's section with references to even more equations--with no explanation of usefulness--I wish I hadn't tried to read it, for it was very frustrating. I suppose the article might provoke interesting discussion among the mathy set, the same way that a list of "top patterns" might start a discussion among OO fans.


I had to actually double check if the author was from the US or not, since in some places it was just so incomprehensible.

I agree the applications were glossed over or skipped, but that is probably the most interesting aspect of the equations as well.

Without practical information, the article is sorely missing true substance.


This effect is often true for people whose native language is math :-) But to be fair, most disciplines have their own jargon; math just happens to have its own keyboard too.


Most other disciplines don't try to force their jargon onto other fields though. Math often does.

Also, I'd generally say that almost all jargon is bad. A quote that's frequently attributed to Einstein says that "if you cannot explain something in simple terms, you have not adequately understood it". Math and science in general should take this to heart, because I feel that many people have forgotten this or never really understood it in the first place.


I really wish my brain didn't gloss over the first time I see a math symbol. All of this stuff seems intriguing but it's almost as if I'm hardwired to translate all those symbols into mush. I'd be much more interested in seeing the equivalent code snippets these ideas express.


For several of the equations presented, there's not really a "code equivalent". And for most of the others, presenting it in code form would make it much more verbose and complex than it needs to be. In a way, it's already written in code – it's just been written by mathematicians :)

It can be a bit daunting at first, but I find mathematical concepts and notation extremely useful tools for programmers, especially when designing algorithms. It allows you to express powerful, abstract ideas in a few lines whereas the same reasoning in pseudo-code would take much more time and effort.

I'm glad my undergraduate program was heavy on the math (although I hated it at the time)– it made graduate CS courses easier to understand, and programming easier for me in the long term. I actually miss my pure math courses now.


That's because mathematical notation is context-bound and telegraphic. Gerry Sussman has gone so far as to call it "Impressionistic". Beyond basic arithmetic and elementary algebra, it is much more concise than it is precise -- it is often inconsistent (the "power" in cos^2 x doesn't mean the same thing as it does in cos^-1 x, for instance), and there's a lot of hand-waviness (and sometimes some outright lies) in order to make the notation concise. It's a useful jargon for the initiated, but it is hardly self-explanatory even if you know what the individual symbols are supposed to mean.

A code-like representation may have the disadvantage of being more verbose, but it has the advantage of being unambiguous. It gives up conciseness in favour of precision. And no, it is not impossible to represent mathematical concepts in a code-like manner -- it just forces you to actually say all of what you mean rather than using an ambiguous and incomplete shorthand.


I have the same problem, but I think I figured out a way around it.

You have to fight the gloss-over effect. It takes conscious effort. Remember that "deliberate practice" all those blog entries say it takes to master something? Well, same thing. You have to deliberately overcome your brain's tendency to go "math is hard, let's go shopping!" Eventually you'll be able to look at an equation and recognize patterns. When I hit the link I was like "Oh, man, MATH, so bo-- hey wait, that's Bayes's theorem! I know this!" (Yes, just like the Jurassic Park girl.)

So keep working at it. Maybe you'll want to try -- gasp! -- actually cracking a math book and studying the material they present.


Seems like a lot of effort for an unknown gain - to me. It's about the same as reading a code block in Scala. It may or may not be useful to me but to figure that out I have to go off into a completely different world. I just wish the mathematical symbols weren't hiding so much meaning. Can I really guess what a backwards-E-equals-looking symbol is? Nope. I get it wrong every time.


I've found http://en.wikipedia.org/wiki/Table_of_mathematical_symbols to be helpful when deciphering formulas. It's hard to remember what all the notation means sometimes.


I pretty much feel the same way, but I feel like I'm up for the challenge, and I keep trying to cover as much as I can in my free time (a side effect of my unemployment).


You'd better toughen up and immerse yourself in math.


I actually surround myself with math all the time since I do a lot of game development. A poster above pointed to the math symbol list on Wikipedia, and I think "Great. I need to scour a daunting page to find the meaning of an obscure symbol?" It's pretty similar to reading a post in another language, and I'm not sure it's worth my time, considering that by the time I figure it out I would have spent an hour deciphering the equations. Google Translate for math, perhaps? I'd use it.



More fundamental than Bayes's theorem is the probabilistic counterpart of modus ponens: P(A/\B) = P(B|A) P(A). This corresponds to the logical rule of inference A, A->B |- A/\B. Note that modus ponens is usually stated in the form A, A->B |- B. But this throws away useful information, namely that proposition A is true, so it's a weaker form.

Bayes's theorem is a direct consequence of this axiom and the commutativity of conjunction.
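
Spelling that out, in the same notation:

  P(A/\B) = P(B|A) P(A)    ; the product rule
  P(B/\A) = P(A|B) P(B)    ; the same rule with A and B swapped
  P(A/\B) = P(B/\A)        ; commutativity of conjunction

  => P(A|B) = P(B|A) P(A) / P(B)    ; provided P(B) > 0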


In other words, modus ponens is: P(A=true)=1, P(B=true|A=true)=1 |- P(B=true) = 1

Let P'(A) be the distribution on A when we know nothing about B (in a world with only A and B). Plausible reasoning is possible through the Bayes/joint-conditional probability rule and yields:

  P(B=true|A=true)=1, P(B=true)=1 |- P(A=true) > P'(A=true)

where logic alone can't conclude anything (A->B, B |- ?). Also:

  P(B=true|A=true)=1, P(A=false)=1 |- P(B=true) < P'(B=true)


Although they're not really "equations", I'd have liked to see some results from computation theory in this list, because they're often deep and beautiful without necessarily being difficult to formulate or understand. They also tend to be informative in a useful way rather than feeling too abstract to be relevant.

For example: the halting problem. Isn't it interesting that you can't write a computer program which can decide (in general) whether another program terminates? The proof is simple and it gives you a real tool to help you avoid trying to solve impossible problems. It's good to know the limits of your craft.
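
A rough sketch of the classic diagonal argument, in Python-flavoured pseudocode; halts(p, x) is the hypothetical decider we are assuming exists:

  def paradox(program):
      # halts(p, x) is the assumed decider: "does p halt on input x?"
      if halts(program, program):
          while True:   # if it would halt, loop forever instead
              pass
      else:
          return        # if it would loop, halt immediately

  # Now ask whether paradox(paradox) halts. Either answer contradicts
  # what halts() reported, so no such halts() can exist.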


The Y Combinator and the pumping lemma seem a bit contrived on that list, especially the former. I would add the maximum margin separation equation, which underlies many modern machine learning methods like SVMs and MMMF, and the P=NP equality question.


Understanding the pumping lemma is essential to really understanding why regular languages are limited, which in the real world is important for quickly assessing the question of "can I hack this solution together with some clever regexps or do I need a real parser?"

I'll agree that the Y combinator is less essential; however, if you even have a sense of what's going on, it means you have an understanding of the basic framework of functional programming and a minimal grasp of the lambda calculus.


Not really. You can understand why regular languages are limited with a simple counting argument; the pumping lemma follows on from that relatively easily, but it doesn't really add much understanding IMO.


The pumping lemma is cool, but the Myhill-Nerode theorem is more powerful (as someone mentioned).

I think, too, that what was meant by 'contrived' was that the pumping lemma is not really an equation. But you don't get hits if you talk about the top ten theorems CS geeks should know...


Shannon's Information Theory, Eigenvector, De Morgan's Laws, etc. None of those names are meaningful or descriptive. And then there are the Greek letters and made-up symbols. Math could learn something from Computer Science:

https://www.google.com/search?q=readable+code


Eigenvector isn't that bad, it's just not English:

"The prefix eigen- is adopted from the German word "eigen" for "own"[1] in the sense of a characteristic description" (from wikipedia)

I think Shannon's Information Theory is pretty fair too. It's about how much information is in a message.

Plus, Computer Science has its own grievous names: A* search, B* search, B tree, B+ tree, B* tree. That always drove me nuts.


And lots of algorithms named after people. Which is fine (we have lots of things named after people in math, too), but not descriptive.


Most of math notation is optimized for allowing people talking to each other about abstract concepts to use blackboards and chalk as memory aids.

Mathematical notation is always supposed to be used in a context similar to literate programming.


I was hoping Fitts's Law would make the list, considering a lot of people here are doing UI/UX whether they realize it or not.


I'm sorry for being somewhat off topic, but is it too hard to try and write coherently? The author's run-on, stream-of-consciousness style, together with the random use of punctuation marks, was tiring to read.


Writing skills are always relevant. The lack of them in this article made me stop reading before getting to any actual content.


Indeed, and I'm not demanding that people be novelists or anything, you just have to master basic punctuation and spelling to make the reader's life easier.


P(Poster is less than 30 years old) = .9999999999999


There's a small error in the formula for O(N): the way he's written it, it looks like for all n there is a k such that kg(n) >= f(n), i.e. k depends on n, so take k = f(n)/g(n) and all nonzero functions trivially satisfy it. It should be: there exists a k such that for all n, kg(n) >= f(n). Pedantic, I know, but on the other hand I wouldn't call these "beautiful equations" associated with O(N); I'd instead call them the definition of O(N).

There's also o(f(n)): g(n) is a member of o(f(n)) if the limit as n goes to infinity of g(n)/f(n) is zero. Finally, there is asymptotic equality: f(n) ~ g(n) if the limit as n goes to infinity of g(n)/f(n) = 1. O, o and ~ are all subtly different, but if you're just trying to prove upper bounds then O(f(n)) is the one that comes up most frequently, which is why it's probably the only sort of asymptotic analysis most CS grads know.
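
For reference, the three definitions written out (one common formulation, adding the usual threshold n0 for the O case):

  f(n) is in O(g(n))  iff  there exist k > 0 and n0 such that f(n) <= k*g(n) for all n >= n0
  g(n) is in o(f(n))  iff  the limit as n -> infinity of g(n)/f(n) is 0
  f(n) ~ g(n)         iff  the limit as n -> infinity of g(n)/f(n) is 1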


Here's a real simple and practical equation: 1 + 2 + 3 + ... + N = N(N+1)/2

This equation represents the number of edges on a complete graph with N+1 vertices or the number of possible pairings given N+1 objects. Useful when estimating O(N) for certain algorithms that involve comparing an item to every other item in a set.
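
A minimal sketch of the kind of counting this helps with, in plain Python (nothing assumed beyond the formula above):

  def pairwise_comparisons(items):
      # Count the comparisons made when every pair of items is checked once.
      count = 0
      for i in range(len(items)):
          for j in range(i + 1, len(items)):
              count += 1
      return count

  items = list(range(11))     # N+1 = 11 objects
  n = len(items) - 1          # N = 10
  assert pairwise_comparisons(items) == n * (n + 1) // 2   # 55 pairings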


It's more useful if you know how to prove it: just add the series to itself written backwards, then divide by two.

One of my math professors said that this and multiplying by things equal to one were among the tricks he used most often.
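
Written out, the trick looks like this:

   S =     1 +     2 + ... + (N-1) +     N
   S =     N + (N-1) + ... +     2 +     1
  ----------------------------------------
  2S = (N+1) + (N+1) + ... + (N+1) + (N+1)    ; N copies of (N+1)

so 2S = N(N+1), i.e. S = N(N+1)/2.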


Please be careful how you use big-O notation here.


So, I take it I shouldn't even consider getting a CS degree since I really suck at math?


Mathematics is just a ladder of abstractions. If a piece of mathematics seems intimidating, it's just because you're looking too far up the ladder from where you're currently standing. Anyone can climb it, one careful step at a time, to reach a level they're interested in; by the time they get there, they'll have learned enough of the underlying abstractions to make the intimidating thing now seem obvious, or at least sensible.

So don't be afraid! (And besides, there are certainly many CS degrees which don't require anywhere near the amount of mathematical sophistication mentioned in this article.)


If you suck at math, you SHOULD get a math heavy degree if the subject interests you. That's what I did, with my EE degree. To be fair, it isn't that I sucked at math so much as I didn't know very much of it.

Still, I think you should look at college as a way to learn what you don't know and get better where you suck, rather than bolster your strengths and avoid your weaknesses. For example, I wrote really well in high school and at one point wanted to get an English degree. Then I thought, "I already know how to write, let's put that tuition money to better use" and started considering math and science heavy options.


Don't let this put you off! I have a CS degree, and I also pretty much suck at math (at least compared to many CS majors).

There are many paths you can take through a CS degree, and it's entirely possible to come out with useful and applicable knowledge without having to deep-dive in math. It just means you're more likely to end up in, say, web-dev than building 3D graphics engines. For me that's been a perfectly acceptable outcome.

And besides, a CS degree takes a few years - there's plenty of time to brush up on math if you're really interested.


(Damn it, I wish I could edit old posts...)

Thanks everyone for the words of encouragement, but when I was going to school it would take me 4 hours to complete 15 simple homework math equations, even with the Ritalin. 'Hard work' in the form of 4 years of that stuff would make me commit suicide.

Moreover, I just don't care to understand complex math. I do want to know how to design compilers and write embedded kernel drivers. If advanced math is required to be able to do these things, CS isn't for me.


The not caring is kind of crucial, beyond anything else. If you don't care, it's not interesting to you, and if it's not interesting to you it's probably not worth dedicating your time to. Life is short - do what you love.

THAT SAID I totally agree with what people have said, and the amount of "maths" you need beyond basic arithmetic in CS depends very much on your chosen field of interest, which, given that you don't care about complex maths, probably won't be too mathematically heavy.


Don't let this be intimidating to you - instead of asking yourself "how come I don't know this?" ask yourself "how can I learn more about this?". This might sound cheesy and simplified, but it's as simple as "nobody was born knowing this". I'm 31 and I wish I had the money to go back into education and learn all of this the official way, but for now I'm just picking up resources on the web and, who knows? It just might happen...

Enough about me. If you want it, go get it. Period.


Actually, the purpose of going to school and studying for four years is to learn things which you aren't already good at.


There is a growing body of science that indicates talent matters a great deal less than hard work.

Certainly I consider myself relatively slow, but with enough perseverance, I get told I am smart. :)


I have a CS degree, and a math minor. After over a decade of professional work, I remember hardly anything of my math minor, and very little of the mathematical content of my CS major.

If you want to do 'interesting' stuff, you'll need solid math skills. If your goal is to make money, you will likely do fine with average math skills. Oddly, however, many employers currently interview for CS/math knowledge even though their jobs will never, ever make use of such knowledge.


Little's Law is probably something CS folks should know too. It is relatively simple, but useful:

https://en.wikipedia.org/wiki/Little%27s_law
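
For reference, the law itself, with a made-up worked example:

  L = lambda * W

(average number of items in the system = average arrival rate * average time an item spends in the system). For example, if requests arrive at 50 per second and each spends 0.2 seconds in the system on average, then on average 50 * 0.2 = 10 requests are in flight at any moment.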


Amusing how the original Wired article calls the Greek lambda "the triangle thing with no bottom".


Oddly enough I never really used any of these while studying CS. Now that I'm in bioinformatics, some of the stuff is commonly used, particularly Bayes' theorem and information theory.


Bayes Theorem isn't totally at the heart of Bayesian v non-Bayesian statistics. Bayes Theorem can still be true if you're in a strictly frequentist framework.


Bayes Theorem can still be true if you're in a strictly frequentist framework.

Can?? When is it not true? It's a theorem, after all.


Nitpick much?

I meant more along the lines of when the assumptions aren't met, e.g. P(B) == 0.

Or, where unitarity in physics isn't met:

http://en.wikipedia.org/wiki/Unitarity_(physics)


I would prefer to see the Cook-Levin theorem in place of the pumping lemma.


I know all those. Ish. Thanks for the morning ego boost!


Nope.



