No. Programming is logic, and logic is a superset of math. What you need to make good programs isn't some obscure, arcane language that the few who actually understand it (mathematicians) assert to be the universal answer to everything yet refuse to make more accessible (when it could be). What you need is logical thinking and the ability to think in abstractions.
Yes, studying math will teach you this, because - as mentioned - math is a subset of logic. But studying math is far from the only way to gain these abilities, or to obtain a deeper understanding of CS problems.
This opinion alone will get (and has got) me branded as a heretic amongst many people, and that alone proves how deeply we are stuck in this tar-pit. If we want to get more people to be interested in and eventually deeply understand CS, we need to get rid of the math barrier. Math is a tool used by computer scientists, and not the fundamental basis of our field. Limiting yourself to a single tool is bad, and you don't need abstract algebra or calculus to understand this.
>This opinion alone will get (and has got) me branded as a heretic amongst many people, and that alone proves how deeply we are stuck in this tar-pit.
Caution is advised when interpreting opposition to a position as justification for the position. This kind of tenuous logic is frequently used to justify continued oppression in e.g. obscure religious communities ("They all hate us, so we must be right!"). I'm not saying your position is invalid (nor as a non-mathematician am I qualified to do so), just that it could be better supported.
True, I shouldn't argue like this. Thanks for pointing out that fallacy.
What I've been trying to convey is that I've come forward with what I think is a sensible position, and often got responses ranging from fervent opposition to (very occasionally) verbally violent backlash. Maybe it was just me stating my point incorrectly, though.
Why do you claim logic is a superset of math? My understanding of Gödel's incompleteness theorems is that they show you can't reduce arithmetic to a consistent & complete logic.
Well, Gödel had to use logic to prove this. I guess what he meant is that logic allows mathematicians to talk about mathematics at a meta level. I wouldn't call it a superset of math, since it's still very much mathematics.
A more recent example I've come across is n-dimensional spaces. I've realized that most people have trouble making sense of the concept because they're stuck thinking about it mathematically.
I think of points in an n-dimensional space as objects holding n different types of information. It's a practical approach, and it has lit a candle for some of the other people I've explained it to this way as well.
For example, images are often stored as four-dimensional structures, with the dimensions representing height, width, depth and color value. Every point in that structure is a pixel of the image. You can easily expand this to any number of dimensions, and it still makes sense - you're just adding information (e.g., add a fifth dimension for time, and you've got yourself a sequence of images - a simple video).
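Roughly, in Python/NumPy terms (a small sketch of my own; the shapes and names are made up for illustration, not anything from the post above):

~~~~~~~~~~~
import numpy as np

# A hypothetical image stored as a 4-D structure:
# height x width x depth x color value.
image = np.zeros((48, 64, 3, 4))  # 48x64 pixels, 3 depth slices, RGBA

# Reading one point means giving one piece of information per dimension.
value = image[12, 20, 1, 2]

# Add a fifth dimension (time) and you have a sequence of such images -
# a simple video - without changing the idea at all.
video = np.zeros((10, 48, 64, 3, 4))  # 10 frames of the same structure
frame_value = video[5, 12, 20, 1, 2]
~~~~~~~~~~~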
My beef with the way math is often approached is that on the one hand, it's asserted that math is the universal solution to all problems - including practical ones - but on the other hand, more often than not math is so far removed from reality that it's nearly impossible to make any practical use of it.
The way you think of it is precisely the mathematical way of thinking about n-dimensions.
Mathematically, an n-dimensional space is R^n, that is, the cross product of the real numbers (R) with itself n times. Each copy of R lets you specify a number independently of all the others.
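If it helps, here is the same statement in code rather than notation (a sketch of my own, using Python tuples as stand-ins for points of R^n):

~~~~~~~~~~~
# A point of R^5 is just five real numbers, one from each copy of R.
point = (3.2, -1.0, 0.0, 7.5, 2.25)

# "Each copy of R lets you specify a number independently of all the others":
# changing a single coordinate still gives a perfectly good point of R^5.
other_point = (3.2, -1.0, 0.0, 7.5, 99.0)

# In general, a point of R^n is any tuple of n real numbers.
def is_point_of_rn(p, n):
    return len(p) == n and all(isinstance(x, float) for x in p)

print(is_point_of_rn(point, 5))  # True
~~~~~~~~~~~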
Mathematics is just careful abstraction. If you are abstracting away certain aspects of your problem and carefully making deductions about what is left, you are doing mathematics. As you point out, this can be taken too far but any tool can be used poorly in the hands of a novice.
>Mathematically, an n-dimensional space is R^n, that is, the cross product of the real numbers (R) with itself n times. Each copy of R lets you specify a number independently of all the others.
See, that's exactly what I'm talking about. This gibberish makes sense to no one except a mathematician. The explanation I've given - while maybe the same in essence - conveys the meaning in a comprehensible way that's not completely removed from reality.
My point is that mathematicians often neither realize that they are talking gibberish that makes sense to no one but themselves, nor accept that math without proper context is devoid of any meaning (outside of math, that is).
NOTE: I would like to clear up that I'm not attacking math itself. I neither think math for math's sake is bad, nor do I think math is useless. I respect math as a self-contained field. The problems I'm pointing out are all happening where there is application within other fields - such as CS.
I certainly agree that an intuitive understanding of a concept can be helpful as a guide and emotionally satisfying. However, I don't think that it is at all sufficient and it is precisely "mathematical gibberish" which resolves the problem.
An isolated concept is worthless. It is only when you are able to apply it by reasoning with it that it becomes valuable. The problem with intuitive explanations is that they don't nail down enough details to allow a person to reason with them.
"I think of points in an n-dimensional space as objects holding n different types of information"
Imagine walking into a room and drawing a straight line on the floor. What is the dimension of that line?
Answer = One dimensional.
Proof: We can describe each point by one type of information. Point = (Distance of that point from the start of the line.)
Answer = Two dimensional.
Proof. We can describe each point by two types of information. Point = (Distance of that point from the East wall, Distance of the point from the North wall.)
Answer = Three dimensional.
Proof: We can describe each point by three types of information. Point = (Distance of that point from the East wall, Distance of the point from the North wall, Distance of that point from the roof.)
Answer = Four dimensional....
So by the intuitive explanation we can make this single line any dimension that we want.
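To make that concrete, here's a small Python sketch (my own, with made-up room geometry): the same point on the same line, described by one, two, or three numbers depending only on how we choose to describe it.

~~~~~~~~~~~
import math

# A straight line drawn on the floor, starting in a corner and running
# diagonally across the room at 30 degrees. Made-up geometry.
ANGLE = math.radians(30)
CEILING_HEIGHT = 2.5  # metres, also made up

def as_one_number(t):
    """One 'type of information': distance t along the line."""
    return (t,)

def as_two_numbers(t):
    """Two: distances from the two walls (floor coordinates)."""
    return (t * math.cos(ANGLE), t * math.sin(ANGLE))

def as_three_numbers(t):
    """Three: add the distance from the roof (constant - the line is on the floor)."""
    x, y = as_two_numbers(t)
    return (x, y, CEILING_HEIGHT)

# The same physical point, three descriptions of different lengths.
print(as_one_number(1.0))
print(as_two_numbers(1.0))
print(as_three_numbers(1.0))
~~~~~~~~~~~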
This isn't just a problem within mathematics. For a simple programming example:
Question: How does a computer program work?
Intuitive Answer: You give the computer a list of instructions for it to carry out.
Result: The guy opens up notepad and types in "Make a computer game where I walk around shooting Zombies."
>So by the intuitive explanation we can make this single line any dimension that we want.
And if this makes sense in the given context - sure, why not?
You seem to be missing my point. What I'm saying is that it's useless to have totally generalized abstractions (outside of pure math) since more often than not, they are so far removed from reality that most people can no longer make any connection to use cases.
My entire argument is that there's a heavy communication failure between mathematicians and scientists of every other field where math is used as a tool. Sure, it's convenient for mathematicians to be able to use shorthand gibberish to talk to other mathematicians. It doesn't justify pushing this jargon on other fields.
Besides which, convenience is no excuse for making something hard to understand. Sure, it's convenient to name all your variables in a program a, b, c, etc., but you're going to get lynched by any programmer who tries to read your code later, including yourself.
When it comes to a point where gibberish becomes the only way to explain mathematical abstractions, then you should step back and ask yourself "where the hell did this go wrong?".
>This isn't just a problem within mathematics. For a simple programming example:[...]
I believe in giving simple explanations and expanding them whenever there's a loophole that needs to be fixed. You don't make programs that cover every single niche use case, either (if you don't have to, that is). That's the problem with math - the generalizations, while useful in math itself, are sheer overkill in many situations outside of math.
Lastly, sorry for taking this out of order, but:
>An isolated concept is worthless.
So is a generalized abstraction without any context. I'm saying that the sweet spot is somewhere in between for most people to understand and apply concepts, and that it's better to generalize upwards from reality and actual use cases instead of starting utterly removed from reality and trying to apply the generalizations downwards.
The way you are talking about jargon in mathematics suggests you have a limited experience of what mathematicians do. Here's an illustrative example of mathematics as done by mathematicians.
~~~~~~~~~~~
Define: An integer n is `even' if there exists some integer m such that n = 2m.
Theorem: For any two even integers n and a, the sum n + a is an even integer.
Proof: Since n and a are even there exist integers m and b such that n = 2m and a = 2b. Now,
n + a = 2m + 2b; by assumption
= 2(m+b); by the distributive property
= 2z; for the integer z = m+b
Therefore there exists some integer z such that n+a = 2z. Hence n+a is even.
~~~~~~~~
That is to say, in mathematics we introduce some definitions/gibberish/jargon (in this case `even') and then we use logic to reason about the implications of our choice of definition (the sum of even integers being even).
The important thing is that the definition plays an essential role; definitions are the building blocks on which all of mathematics operates. To emphasize: if you strip away the definitions you literally have nothing to build on - we can't apply logic to nothing and arrive at something.
This leads to the point I made in my earlier comment: the reason we need definitions rather than intuitive explanations is that you can't logically reason about a concept unless you nail down the relevant details of what that concept is, exactly. Without them, we can't do the 'prove the theorem' part of the above example.
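To show what nailing down the details buys you, here is the same definition and theorem as a small Lean 4 sketch (my own formalization, assuming a recent toolchain where the `omega` tactic is available; the names `IsEven` and `even_add_even` are just illustrative):

~~~~~~~~~~~
-- Definition: an integer n is `even' if there exists an integer m with n = 2 * m.
def IsEven (n : Int) : Prop := ∃ m : Int, n = 2 * m

-- Theorem: for any two even integers n and a, the sum n + a is even.
theorem even_add_even (n a : Int) (hn : IsEven n) (ha : IsEven a) :
    IsEven (n + a) := by
  unfold IsEven at hn ha ⊢
  obtain ⟨m, hm⟩ := hn  -- n = 2 * m
  obtain ⟨b, hb⟩ := ha  -- a = 2 * b
  -- The witness is z = m + b, just as in the prose proof:
  -- n + a = 2 * m + 2 * b = 2 * (m + b), closed by linear arithmetic.
  exact ⟨m + b, by omega⟩
~~~~~~~~~~~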
So how does mathematics then fit into application?
Guy 1: In this basket I have as many stones as I have fingers and in that basket I have as many stones as I have toes. For each basket I can pair up the stones so that each has a partner. Will this still be the case if I combine the stones from each basket?
Mathematician: Well, let's represent the number of stones in each basket with the integer 10. Pairing stones corresponds to the integer being even, and combining the baskets corresponds to adding the two integers. I note that 10 is even since 10 = 2x5, and so I can apply my theorem to conclude that the sum 10+10 is even. Thus I conclude that when you combine the baskets you will still be able to pair each stone with a partner.
Guy 1: Wait, wait, wait! I don't understand this 'even' jargon. Do it again without the jargon.
Mathematician: The definition of 'even' was central to my whole process. Without it I can't even set up the problem, let alone apply the theorem used to justify the answer. I could perhaps just give you an answer, "MATHEMATICS SAYS YES", but then you wouldn't be able to repeat it yourself for different numbers of stones.
If the above is understood then I can quickly address the claims you have made.
> it's better to generalize upwards from reality and actual use cases instead of starting utterly removed from reality and trying to apply the generalizations downwards.
Mathematics is generalisation utterly removed from reality. This is why we have "Adding integers" and not "Adding together collections of dogs" and "Adding together collections of apples" and "Adding together collections of hats" and ...
> Sure, it's convenient for mathematicians to be able to use shorthand gibberish to talk to other mathematicians.
Mathematics is the practice of defining new gibberish and then reasoning about that gibberish. The gibberish isn't a shorthand for something, it is the thing.
> It doesn't justify pushing this jargon on other fields.
Mathematics is definitions/gibberish/jargon. Applying mathematics to a field thus means applying definitions/gibberish/jargon to that field.
> When it comes to a point where gibberish becomes the only way to explain mathematical abstractions, then you should step back and ask yourself "where the hell did this go wrong?".
At least since Euclid's formulation of geometry.
>> So by the intuitive explanation we can make this single line any dimension that we want.
> And if this makes sense in the given context - sure, why not?
The problem is that it doesn't. Your explanation of an n-dimensional space is more a description of the larger space in which our space of interest is embedded.
In all instances the space (the line) remains unchanged; the only thing that changes is how we are describing it. For the dimension of the space to be a property of the space, it can't depend on how we choose to describe it.
Thank you! I've had this same feeling for a long time. It strikes me as odd that probably a lot of the same people that would find it unacceptable to write cryptic C code with one-letter variable names, find typical math notation/jargon to be completely fine and legible. If code is so important that we go out of our way to make it comprehensible to future maintainers, why don't we have the same feelings about math? (which I would argue is much more important for people to comprehend, seeing as how it's the core foundation of all of science)
Personally, I'm really into physics, so I've always really wanted to like math so that I could delve deeper into the subject more easily. It's not that I'm terribly bad at it, but most math texts are so obscenely terse and cryptic, that it makes you wonder whether the authors are actually even trying to teach people about what they're talking about...
I was writing it mathematically because I thought you would appreciate seeing how your intuitive grasp of n dimensions is described mathematically.
The terminology (gibberish) I used is nothing more than a convenient but very precise shorthand for communicating abstractions. The reason these particular abstractions are given special names by mathematicians is that they occur over and over again. There is a lot of conceptual leverage to be gained if you are able to see the same patterns in seemingly unrelated problems.
As Poincaré put it, "Mathematics is the art of giving the same name to different things". I'd argue it's worth knowing some of those names.
That's totally the standard way that all mathematicians think about n-dimensional spaces.
I've never heard a mathematician claim math is the universal solution to all problems... sounds like you had a bad high school experience. Do you need a math-hug? :)
>sounds like you had a bad high school experience.
Not exactly high school. I'm a university student in Germany.
But yes, it's pretty much a bad experience. I've seen quite a lot of other students who are otherwise bright people struggle with math simply because they aren't good at learning to interpret arcane sequences of symbols that pretty much represent nothing relevant to reality. Math the way it is taught here is essentially useless for 95% of all CS students.
I think you are obsessing about minutiae. Perhaps mathematical instruction is very different in Germany than in the United States, or perhaps our brains are simply incompatible, but I've never found symbology to significantly affect my understanding.
Symbols are just convenient and useful abbreviations. In my experience, they facilitate both speed and precision in mathematical work, far and above what is possible in a natural language like English. There is no particular reason why certain concepts are associated with certain symbols apart from convention. Any decent mathematician should be able to operate with any set of symbols, although not necessarily with equal proficiency.
I won't deny that many mathematicians have a hard time expressing themselves in natural-language speech and writing. Perhaps for them the symbols have become a crutch. But I still think you have erected a mental block around an ultimately inconsequential aspect of mathematics, which is needlessly impeding your understanding.
That having been said, I think there are definitely bad ways to approach the use of symbols in instruction. Concepts and their related symbols should be introduced incrementally, and you should always have ample opportunity to practice with the ones you have just learned before having to learn new ones. I think that a well rounded mathematics curriculum should include a course with a significant focus on understanding and manipulating common mathematical and logical techniques, with mathematical language being a significant if not necessary part thereof.
>I won't deny that many mathematicians have a hard time expressing themselves in natural-language speech and writing.
Yes, that's exactly what I've been getting at. As I said in other comments, I respect math as a field. What I'm pointing out is that there's a heavy communication problem between math and other fields, and this is something that needs to be solved.
I studied math and CS in Germany. The mathematicians were usually adept at CS, but the computer scientists didn't--with some exceptions, of course--even get what math was about at all.
They should probably stop trying to dumb down the math for computer scientists. Math is hard, and pretending otherwise won't save one from reality.