Great ideas in theoretical computer science (cs251.com)
288 points by __rito__ 6 months ago | 93 comments



This class is great because it introduces a lot of concepts, but most importantly it really sharpens your problem-solving skills.

> The method we tend to use in 251 is throwing you deep into the water without a paddle (i.e., giving you very basic definitions for a new topic you haven't seen and expecting you to solve things that you might be given in a course dedicated to that topic in say the third week). We do this over and over, pretty much starting from scratch on a new topic every week. As a result, it can be very frustrating—which, of course, is the intention. If you're constantly trying to solve things that are just outside of your reach, you develop better strategies for thinking about problems you're given.


I'm not sure I necessarily agree with this strategy. In my CS undergrad, I had an algorithms class where the professor did this and expected us to complete weekly homework assignments, worth a significant portion of our grade, with basically no explanation of how to solve these extremely complicated optimization problems. When we compared our assignments to those of another student taking the same class with a different professor, theirs were much, much easier.

I don't think this teaching methodology made me gain any more from the course; if anything, I gained less, because I constantly had the stress of trying to solve these ridiculous problems on top of all of my other work every week.


Yeah, whenever we had this issue in a class, we just ended up focusing on doing WHATEVER IT TOOK (wink wink) to pass.


For algorithms, sure. For this class, it worked great when I took it 20 years ago. The class didn't expect us to solve the problems, just to understand them enough to apply them in our daily thinking about software.


Maybe I'm not reading that correctly, but it seems like you were able to do more difficult assignments than the students in the other class.


Honestly, this just sounds like a terrible way to teach.


Honestly, for a long time this was the standard way of learning in academia. Lecturers gave lectures, and everything else was left to the students to figure out (from books).

Right now we've reached the other end of the pendulum swing, where everything is hand-fed to the students.


I disagree. During my math degree I constantly sat in classes where I had no idea what I was being taught. Doing my own research helped me understand the topics a lot better.


this is my favourite way to learn


If the problems are interesting, why not? If they are purely abstract without purpose, I might not feel motivated at all.


Theoretical computer science can be fun. But it can also be really annoying!

Around the time most schools were starting the 2014-15 school year, I saw a post on Reddit, in either a math or CS group, from someone asking how to solve a particular theoretical CS problem. The poster didn't say where the problem was from or why they needed a solution, but others quickly figured out it was someone trying to cheat on the first homework set for COMS 331 (Theory of Computing) at Iowa State University.

No one helped and the poster deleted their post. I thought it was an interesting problem and gave it a try, expecting it to be a short but interesting diversion.

I didn't expect it to take too long because, although I was not a CS major (I was a math major), I did take all of the undergraduate theory courses at my school and got decent grades in them. That had been ~35 years earlier, so I had forgotten a lot, but the problem was from the first COMS 331 homework set, so it shouldn't require anything past the first week or so of that class, which should all be fairly basic stuff I would still remember.

I spent a couple days on it and got absolutely nowhere. Several times since then I've remembered it, thought about it for a few hours or a day and have continued to completely fail.

If anyone is curious, here is the problem:

Define a 2-coloring of {0, 1}∗ to be a function χ : {0, 1}∗ → {red, blue}. (For example, if χ(1101) = red, we say that 1101 is red in the coloring χ.)

Prove: For every 2-coloring χ of {0, 1}∗ and every (infinite) binary sequence s ∈ {0, 1}∞, there is a sequence

w₀,w₁,w₂,···

of strings wₙ ∈ {0, 1}∗ such that

(i) s = w₀w₁w₂ ···, and

(ii) w₁, w₂, w₃, · · · are all the same color. (The string w₀ may or may not be this color.)
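
If it helps to make the definitions concrete, here is a toy sketch (the coloring and the decomposition are my own examples, nothing from the homework) that checks a claimed decomposition against a finite prefix of s:

    # Example 2-coloring: colour a word by the parity of its length.
    def chi(w):
        return 'red' if len(w) % 2 == 0 else 'blue'

    # Check conditions (i) and (ii) on a finite prefix of s.
    def check(s_prefix, words):
        assert ''.join(words) == s_prefix                 # (i), up to this prefix
        return len({chi(w) for w in words[1:]}) == 1      # (ii); w0 is exempt

    print(check('110100110', ['1', '10', '10', '01', '10']))  # True: all red

Note that w₀ = '1' is blue there while the rest are red, which is exactly the exemption in (ii). Of course, the theorem is about infinite s, so no finite check proves anything.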


Let us say that an index i is bad if every finite substring of s starting at i is red (i.e., for every j ≥ i we have χ(s_i ... s_j) = red). Two cases:

Case 1: there are infinitely many bad indices. Here we go to the first bad index, then the second, and so on. The colour of w₀ does not matter, and since each subsequent word starts at a bad index, they will all be red.

Case 2: there are finitely many bad indices. Then there is some k which is larger than all bad indices. We start by going to k (again, the colour of w₀ does not matter). Since k is not bad, there must be some blue word starting at k. We take that one and move to a larger index. Again, that index is not bad. We repeat this process to find our sequence.
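
For what it's worth, here is Case 1 run on a toy coloring (my own example). For χ(w) = red iff w starts with a 1, an index is bad exactly when s has a 1 there, so badness is decidable here; in general it quantifies over infinitely many substrings, which is why the proof is non-constructive:

    def chi(w):
        return 'red' if w.startswith('1') else 'blue'

    # Case 1: cut at every bad index (for this chi: at every '1').
    def decompose(s_prefix):
        bad = [i for i, c in enumerate(s_prefix) if c == '1']
        cuts = [0] + bad + [len(s_prefix)]
        return [s_prefix[a:b] for a, b in zip(cuts, cuts[1:]) if a < b]

    words = decompose('0110100110010110')
    print(words)                                     # w0 = '0' is blue
    assert all(chi(w) == 'red' for w in words[1:])   # the rest are red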


Nice proof, similar to the one I posted just now but simpler -- you realised that we only need one special category ("bad" rather than "hard-red" + "hard-blue"). Gonna leave mine up though :)


Very nice!


I don't feel like doing a formal proof, but it looks like it should be a direct consequence of the pumping lemma: "Informally, it says that all sufficiently long strings in a regular language may be pumped—that is, have a middle section of the string repeated an arbitrary number of times—to produce a new string that is also part of the language." [1]

Needless to say, this exercise would be trivial if you had just covered the pumping lemma and its applications in class, and next to impossible if you had never heard of it.

[1] https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...

PS. I took 15-251 back when it was 15-299: a brand-new class without a regular number assignment. Honestly, I would have enjoyed it a lot more now than I did back then. But several assignments still stand out for me, in particular "Building from scratch" [2]. Trying to get some of that feeling now, working through Turing Complete [3] with my daughter.

[2] https://www.cs.cmu.edu/afs/cs/academic/class/15299/handouts/...

[3] https://turingcomplete.game/


The problem is that a 2-coloring corresponds not to a regular language but to an arbitrary language.


My first idea is to think of this as the following game. Initially we consider all words uncolored, and I start by picking an initial set of red words that can color any infinite binary sequence in red. Then you get to take away one of my words and make it blue. After that I get to pick replacement words from the remaining uncolored words to fix my set. Rinse and repeat.

I start with { 0, 1 }, which will obviously do. You take - I guess without loss of generality - my 0. I pick 00 and 01 as replacements and end up with { 00, 01, 1 }, which will work again. You take away my 01; I replace it with 010 and 011, and { 00, 010, 011, 1 } will still work, I guess, but it is certainly becoming less obvious. In general I will pick x0 and x1 as replacements if you take x away, which will allow me to still color x; I just have to choose between x0 and x1 depending on the following bit.

If I am not mistaken, you will have to take away an infinite set of words from me in order to prevent me from being able to color any infinite binary sequence in red; if you take only finitely many, I will be left with a set of red words that still works. My guess is that if you take away that infinite set in order to stop me, you will accidentally assemble a set of blue words that can color any infinite binary sequence blue. Unfortunately, I do not have the time right now to properly think this through.

I am also not too confident because I do not clearly see how being allowed to have the first word in the wrong color comes into play. On the other hand, maybe you need my 1 for sequences starting with a one, or something like that.

EDIT: Just had an additional thought; this might actually work. If you take my 0, then you cannot also take my 1 later on, or you will end up with { 0, 1 }. If you take 01, then you cannot also take 10 later. So you always have to take one of my two latest replacements and leave the other one to me forever. We will build complementary sets: you have 0, I have 1; you have 01, I have 10; you maybe 011, I 010, ... Now it seems quite plausible that this will work out and we end up with two complementary sets that can both color all infinite binary sequences in [mostly] a single color.
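
Here is the replacement move as a tiny sketch (toy code, my own naming), just to have something concrete to poke at:

    # When a word x is taken away (coloured blue), replace it with x0 and x1.
    def take(words, x):
        return (set(words) - {x}) | {x + '0', x + '1'}

    red = {'0', '1'}
    red = take(red, '0')     # -> {'00', '01', '1'}
    red = take(red, '01')    # -> {'00', '010', '011', '1'}
    print(sorted(red))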


The flaw with the final conclusion is this counterexample:

0, 00, 000, 0000, … are all mapped to red; 1, 11, 111, … are all mapped to blue.

Neither set can construct all infinite binary sequences, because red cannot construct infinite 1s and blue cannot construct infinite 0s.


I have thought a bit more about this since. Arrange all words in a rooted binary tree with the empty word Ɛ at the root and each vertex w having color χ(w) and children w0 and w1. Each cut [1] through the tree yields a set of words that can be used to cover all infinite binary sequences. Therefore, if there is a monochromatic cut, then we are done. To obstruct a monochromatic cut of color A, there must be a monochromatic obstructing path of color B from the root all the way down to infinity. To obstruct red and blue cuts, we need a red and a blue obstructing path, and you gave an example. So additional work is required to show that the task is also possible if the coloring contains a red and a blue cut-obstructing path.

[1] With cut I mean a set of vertices where no vertex in the cut is a descendant of any other vertex in the cut. Informally, repeatedly pick a vertex in the tree and remove the subtrees rooted at its children, the leaves of the remaining tree are the cut.
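
And a small brute-force search for a monochromatic cut (toy code, my own naming), with the caveat that it only explores words up to a finite length, so a failure at that depth is inconclusive:

    # Does a cut of the given colour exist among words of length <= depth?
    def has_cut(chi, colour, depth):
        def covered(w):              # every path through w hits the colour
            if chi(w) == colour:
                return True
            if len(w) >= depth:
                return False
            return covered(w + '0') and covered(w + '1')
        return covered('0') and covered('1')

    chi_len = lambda w: 'red' if len(w) == 2 else 'blue'
    print(has_cut(chi_len, 'red', 3))   # True: all four length-2 words
    chi_par = lambda w: 'red' if w.count('1') % 2 == 0 else 'blue'
    print(has_cut(chi_par, 'red', 6))   # False: the path 1000... obstructs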


Interesting problem! I've never formally studied computer science, and my math degree is far behind me, so I probably won't be the one to help you solve this, but I'm commenting to at least remind myself to check back here.

Some thoughts that immediately spring to mind (apologies if you've already thought of these - just getting the brain flowing!):

* We need only consider colorings in which 0 and 1 are coloured differently. If 0 and 1 are the same colour, then we're trivially done: let w1, w2, w3, etc. all have length 1.

* This smells like it'll be some sort of combination of induction and contradiction? E.g., "assume this property is true for w1w2w3...wn-1, and that wlog they are red. Assume there is no wn that can continue the sequence of red substrings; thus, all strings starting after wn-1 are blue. If that's the case _and_ we can convert the string [w1...wn-1] into blue substrings, then we're done. So show that there is some contradiction proving that impossible (maybe using the observation above that we only care about cases where 1 and 0 are different colours)."


I suspect that if you have a sequence w_0, ..., w_n with the property, but all possible remaining words following w_n are the wrong color, then you can set w_0' = concat(w_0, ..., w_n) and declare victory.


You can't induct for this problem because induction will only show a property for all finite strings. Arbitrary-length finite strings, but still finite.


The following proof is much tighter than the meandering process I followed to get to it. Let me know if the meandering would be interesting!

Call a position i in s "hard-red" (respectively, "hard-blue") if every positive-length substring of s beginning at i is red (respectively, blue); otherwise call i "soft". A position is "hard" if it is hard-red or hard-blue.

There are 3 possible cases:

1. s has no hard positions.

2. s has a positive but finite number of hard positions, with the last being i.

3. s has an infinite number of hard positions.

Case 1

Every position is soft, meaning that if we start a substring of s at that position and continue to grow it by appending digits from s, eventually (after a finite number of steps) the colour of the substring will change. So we can set w₀ arbitrarily to the first digit of s, then grow w₁ from position 2 of s until it has the same colour as w₀. Then we can grow w₂ from the next digit in s until it has the same colour, and so on. This results in all words having the same colour, including the first.

Case 2

Set w₀ to the first i-1 digits of s, and w₁ to the i-th digit of s. All positions > i are soft, meaning that, as for case 1, we can repeatedly grow substrings by appending digits from s until the substring turns the same colour as w₁.

Case 3

Since s has an infinite number of hard positions, it must have an infinite number of hard-red positions, an infinite number of hard-blue positions, or both. Suppose w.l.o.g. that it has an infinite number of hard-red positions (it may or may not also have an infinite number of hard-blue positions). Define p(k) to be the k-th hard-red position in s. Set w₀ to the first p(1)-1 digits of s, and for k >= 1 set word w_k to the substring of s beginning at p(k) and ending at p(k+1)-1. w₁, w₂, w₃, ... all begin at hard-red positions, so are all red. □
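
As a sanity check, here is Case 1 run on a toy coloring (my own example): with χ(w) = red iff w contains an even number of 1s, every position of a sequence with infinitely many 1s is soft, and the greedy growth loop finds a monochromatic decomposition of any finite prefix:

    def chi(w):
        return 'red' if w.count('1') % 2 == 0 else 'blue'

    def case1_greedy(s_prefix):
        words = [s_prefix[0]]          # w0: the first digit
        target = chi(words[0])
        pos = 1
        while pos < len(s_prefix):
            end = pos + 1
            while end <= len(s_prefix) and chi(s_prefix[pos:end]) != target:
                end += 1               # grow until the colour matches w0's
            if end > len(s_prefix):
                break                  # ran off the finite prefix; stop cleanly
            words.append(s_prefix[pos:end])
            pos = end
        return words

    words = case1_greedy('0110100110010110')
    print(words)                               # ['0', '11', '0', '1001', ...]
    assert len({chi(w) for w in words}) == 1   # all one colour, w0 included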


The cardinality of the set of possible infinite binary sequences is uncountable. My gut tells me there must be a way to assume the negation and construct a bijection with the naturals, leading to a contradiction.


I know that I have no chance in a real computer science department, but is this question for freshmen?


This would be an upper level undergraduate class, probably juniors and seniors.

You would have taken courses in discrete mathematics and data structures and algorithms as prerequisites, and have familiarity with proofs and some degree of mathematical maturity.

You build yourself up to a class like this by taking all the prereqs, same as any other high school graduate.


> I know that I have no chance in real computer science department.

Don't say/believe that and limit yourself. You just need to find the right books and slowly educate yourself (never mind what others say). I have spent a lot of time collecting and reading books, much of which I still don't fully grok, but what I do understand is intellectually very stimulating and gives me an edge over the competition (when needed in the industry).


thanks for your friendly response haha.


Life can be much broader once you discover one simple fact. And that is, everything around you that you call life was made up by people no smarter than you (Steve Jobs) - https://www.youtube.com/watch?v=kYfNvmF0Bqw

What one man can invent, another can discover. - Sir Arthur Conan Doyle (via Sherlock Holmes).

There is no prison as strong and unbreakable as the mental prison you choose to build and stay in - Me :-)


Can this be solved with a proof by contradiction? Let's assume that this is impossible for our sequence. There must then be at least one (sub)word of the wrong color. For this to be fatal, it must be impossible for any word containing it to be of the correct color while the rest of the sequence (before and after that word) continues to be the same color, no matter how far we extend the word to the right; if this weren't true, we could simply use that bigger word, and there would be a contradiction.

If this is true for more than one word, the problem can be solved by simply inverting the chosen color and using two larger words containing both subwords; these will always be of the same (originally wrong) color, which leads to a contradiction.

Otherwise, if this is true for only one subword, so that the sequence around it cannot be worded correctly while the subword is contained in a word of the correct color, but the rest of the sequence can be covered with words of the same color, we can simply include this problematic subword inside w₀.

In each of the three cases (0, 1, or 2+ "toxic" subwords), it is always possible to find words of the same color covering the sequence. Therefore, there can be no sequence for which it is impossible to find a suitable w₀w₁w₂···, and the original proposition is proven.

Please tell me if you find any issue with this approach!


Nice!

If folks would like to learn these ideas hands-on via programming, I highly recommend Tom Stuart's Understanding Computation: From Simple Machines to Impossible Programs - https://computationbook.com/


My favourite computing book.

Very highly recommended.


Do you have any other favorite recommendations?


Yeah, I've also greatly enjoyed the following:

* Computer Science: An Interdisciplinary Approach by Robert Sedgewick

* Code by Charles Petzold

* Good Math by Mark C. Chu-Carroll


Wow, thanks for sharing!


You might also like Algorithmics: The Spirit of Computing by the great David Harel (https://en.wikipedia.org/wiki/David_Harel). The book is accessible to the intelligent general reader (though it is not a popular science book) and covers its domain rather comprehensively.


I will definitely give it a try.

Genuine thanks again!



Thanks. To me, the first part linked above is more like "foundations of computer science". I remember this being the content of my first classes in CS (Turing machines, lambda calculus, finite automata, ...). (Actually, that course in 1999 at Uni Karlsruhe was unfortunately also the last time I implemented things in Haskell.)

The full list, I guess, goes beyond that and looks interesting.


I don't know where, but there is another version of this class that includes "The Probabilistic Method"[1]. While not exactly the same thing, I can't imagine doing modern existential (vs constructive) proofs of topological solution space obstructions without that style of thinking.

[1]: https://math.bme.hu/~gabor/oktatas/SztoM/AlonSpencer.ProbMet...

Bonus points for those interested -- I think this paper (sadly, not mine) has a concise collection/history of obstruction proofs in its background section (1.2/1.3; and since it is a paper, all of them are cited!): https://arxiv.org/abs/2309.09913
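
If you want a one-screen taste of the style, the classic opening example (Erdős's Ramsey lower bound) reduces to a union-bound check; the numbers below are mine:

    from math import comb

    # If C(n,k) * 2^(1 - C(k,2)) < 1, a uniformly random 2-coloring of
    # K_n's edges avoids a monochromatic K_k with positive probability,
    # so a good coloring exists without ever constructing one: R(k,k) > n.
    def random_coloring_works(n, k):
        return comb(n, k) * 2 ** (1 - comb(k, 2)) < 1

    print(random_coloring_works(100, 10))   # True, hence R(10,10) > 100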


You may be referring to the version of this course that was taught by Luis von Ahn (the creator of reCAPTCHA and the founder of Duolingo) when he was a professor at CMU.

https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15251-s...

He liked to call the course "SOME Theoretical Ideas FOR Computer Science", and it was known to be a very popular (and difficult) course.


I didn't know LVA taught at CMU, rad!

I did some digging and found the class: https://people.eecs.berkeley.edu/~venkatg/teaching/15252-sp2...

I enjoy that it is called "More Great Ideas in Theoretical Computer Science", like a sequel (...and 'some' being the prequel). I'm now eagerly awaiting the 'return of the jedi' installment.


Ah, great! Seems like there was a tradition of having fun with the course title.


Damn. Impressed to see something this fundamental that never made it into my computer science postgrad education (and judging by the depth and difficulty of it, might not for a while!). Seems like a powerful technique that scales to cover quite a lot of other problems.


Wow, I'm really surprised this didn't enter your postgrad! We covered this briefly in undergrad, but had a much meatier grad-level course on fundamentals of computation that included it, and it destroyed pretty much every student in the class, lol. Our professor ended the final at 4 hours to "spare you the misery".


Hahaha, okay, in retrospect / on closer inspection, I do think some teachers tried to instill these probability proofs from a different angle, though obviously it did not stick long past the testing, and I never thought of it as a fundamentally new approach. I would definitely need to reformulate all of this into much more intuitive/visual metaphors to get it to really stick these days. Oddly, I do a lot of thinking on the general fundamentals course content these days (FSAs, Turing machines, etc.), but I should really try to incorporate this probabilistic view.


If you dig this you might like https://cs121.boazbarak.org/, which also has a link to Barak’s textbook, available as a PDF.


What would versions of this course look like for other fields?

Great Ideas in Theoretical Physics?

Great Ideas in Experimental Physics?

Great Ideas in Economics?

etc

I did teach a course once, called Inventing the Information Age, in which we discussed all the inventions and ideas, starting from writing all the way to modern computing infrastructure, that a civilization would need to replicate our information age. This was not a single-field course, because the ideas/inventions were in language, physics, mathematics, and computer science. That made it more fun.


Sean Carroll has a series he's working on called "The Biggest Ideas in the Universe", which is on physics. Only the first one is out now (Time, Space, and Motion), but the second is due in May, I think (on Quanta and Fields). Its intended audience is anyone with a high school education. It introduces basic ideas in calculus and then does use equations to motivate understanding, but not at the level you'd need if you were actually studying physics. I have a bachelor's in physics but still deepened my understanding of general relativity from the first one. Highly looking forward to part 2. https://www.preposterousuniverse.com/biggestideas/


Nice, this was the push I needed to start reading his book. I really enjoy his podcast and was on the fence about taking it on (thinking I don't have the brain cells required to soak in the book), but the way you described it makes it seem like a fairly tractable project.


The CMU class is for freshmen, so students without advanced background can still comprehend most of it. But for physics I feel that without at least some good physics and math background, you can't really understand and appreciate the "great ideas".


If the students know the core concepts and mechanics of Calculus and Linear Algebra, which is entirely possible for high school students in certain parts of the world, it's possible to teach a lot of core physics ideas with simple models.

In fact, in my undergrad, the physics lab and theory courses were flipped: you took the lab course before the theory course. In the lab course you used simple math and simple experimental setups to probe phenomena in mechanics, thermodynamics, electromagnetism, optics, etc. Experiments/observations in astronomy and quantum mechanics are also possible at that level.


That sounds awesome and very creative. Do you have a link to an old class site, or a syllabus, or anything that would give a flavor?


I taught the course during the initial part of Covid, so the material ended up not being organized, split between different systems. But here are some of the books/papers that I sourced class readings from:

* Brian Winston - Media Technology and Society: A History, from the Telegraph to the Internet, Routledge (1998)

* James Gleick - The Information: A History, a Theory, a Flood, Pantheon (2011)

* Michael G. Raymer - The Silicon Web: Physics for the Internet Age, CRC Press (2009)

* Simon Singh - The Code Book: How to Make It, Break It, Hack It, Crack It (2003)

https://www.jstor.org/stable/43737424


Your course sounds awesome. Do you have the materials available somewhere? Would love to read more about it.



I TA'd for this course back in undergrad. Despite being in the CS department, it's structured like a math class, and all of the homework assignments are proof-based. I don't think I wrote code for that course a single time, except for pseudo-code for a Turing machine.
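
For anyone curious what that pseudo-code amounts to, a minimal Turing machine simulator fits in a dozen lines (my own toy encoding, not the course's):

    # Transition table: (state, symbol) -> (new state, symbol to write, move).
    def run_tm(delta, tape, state='q0', halting=('qA', 'qR'), max_steps=10_000):
        cells, head = dict(enumerate(tape)), 0
        for _ in range(max_steps):
            if state in halting:
                return state, cells
            state, write, move = delta[(state, cells.get(head, '_'))]
            cells[head] = write
            head += 1 if move == 'R' else -1
        raise RuntimeError('step limit hit (halting is undecidable, after all)')

    # Example machine: flip every bit, accept on the trailing blank.
    delta = {('q0', '0'): ('q0', '1', 'R'),
             ('q0', '1'): ('q0', '0', 'R'),
             ('q0', '_'): ('qA', '_', 'R')}
    state, cells = run_tm(delta, '0110')
    print(state, ''.join(cells[i] for i in sorted(cells) if cells[i] != '_'))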


Computation is a fundamental property of the universe? How is that? How do flora and fauna compute?


You have to have the right amounts of inputs to get outputs.

The plants I grow are a complex function of CO2, water, energy, and a bunch of other things. But I do know that if I don't water them enough, or water them too much, the plant function will return bad results.


You might find your answers in:

1) The Nature of Code by Daniel Shiffman - https://natureofcode.com/

2) The Computational Beauty of Nature by Gary William Flake - https://mitpress.mit.edu/9780262561273/the-computational-bea...


Source code for the second book:

https://github.com/gwf/CBofN

Use DieHard to avoid crashing newer window managers such as CWM:

https://github.com/emeryberger/DieHard


With ribosomes, hormones, spores, senses, brains etc., and by processing our genetic "code" into these things.


Uh, let's take an example: Mars gets a force from the gravity of the Earth. That force has magnitude and direction, so it is a vector. Mars also gets such a vector from each of the Sun, Jupiter, Andromeda, dark matter, .... Then, to determine the direction Mars will go, we have to add all those vectors to the current value of the Mars momentum vector. Soooo, the universe adds vectors, that is, does computation. Done?
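
In code, that "computation" is just a vector sum (toy 2-D positions, made up for illustration):

    # Net Newtonian force on Mars = vector sum of pulls from the other bodies.
    G = 6.674e-11

    def pull(pos, m, src_pos, src_m):
        dx, dy = src_pos[0] - pos[0], src_pos[1] - pos[1]
        r = (dx * dx + dy * dy) ** 0.5
        f = G * m * src_m / r ** 2
        return f * dx / r, f * dy / r            # force vector, toward src

    mars_pos, mars_m = (2.28e11, 0.0), 6.39e23
    bodies = [((0.0, 0.0), 1.99e30),             # Sun
              ((1.50e11, 2.0e10), 5.97e24),      # Earth (toy position)
              ((7.78e11, -1.0e11), 1.90e27)]     # Jupiter (toy position)

    forces = [pull(mars_pos, mars_m, p, m) for p, m in bodies]
    net = (sum(f[0] for f in forces), sum(f[1] for f in forces))
    print(net)   # the vector the universe "computes" for Mars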


This makes me wonder something. When I’m computing values in code I’ve written, the inputs and outputs are very quantifiable and the steps between them are too. There is a start and end to every part of the process.

In the universe, how are the inputs and outputs divided into discrete units and states in which computation can occur? I know the universe isn't running on a chip (wait… no I don't), but what I'm unsure of here is how, or even if, the state of things in the universe is ever determined in order to be "computed" such that it can transition to the next state, in which it has been acted upon again by the inputs around it.

Perhaps it's the wrong analogy. I don't understand, though, how computations can work with a fluid rather than the clearly sliceable and unitized parameters you find in current computing technologies. Or is the universe actually able to be broken down into frames of time?

I suppose if you don't think you need to compute in discrete steps, then the question doesn't make sense. It's taking the analogy too literally. I struggle not to, though. What is computation if it can't be proven or reversed at any stage?


A related concern/issue: write a program to simulate the whole universe in full and fine detail.

We would likely do this work in time steps, i.e., some version of discrete time.

So, in the simulation there would be a timing, time synchronization, etc. issue: in simple terms, we would have to update the whole universe, the whole state, FULLY at each single step before we could start the next step.

Or, if we don't write just one program, and instead have some string theory where everything is made of elementary strings, then each string has its own clock, supply of energy, momentum, charge, mass, etc. That's a lot of clocks, and it makes the strings not very elementary.


This system would also need to account for special relativity. Is that possible if using discrete time? Maybe it is, if you can eventually reconcile all frames of reference… But I'm not sure how you'd compute all frames of reference at once, since they… well, kind of happen on different timelines, so to speak. Though I could be misunderstanding how special relativity works in relation to some kind of universal state-tracking system that uses discrete time units as its timeline.


Inspired by the title, my top two ideas from theoretical compsci:

1. Move-to-front lists have search time at most 2x that of the optimal list order (and are often much better than any static list order) [1]. A similar conjecture exists for splay trees but is unproven, called the dynamic optimality conjecture.

2. Randomized algorithms (e.g., quicksort) often have the same worst-case time as their non-randomized variants, but are often much faster in practice.

1: https://www.dgp.toronto.edu/public_user/JamesStewart/378note...
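
Move-to-front itself is a few lines if you want to play with the behaviour (toy sketch, my own code):

    # Self-organizing list: each successful search moves the key to the
    # front, so frequently accessed keys drift toward the head and get cheap.
    class MTFList:
        def __init__(self, keys):
            self.keys = list(keys)

        def access(self, key):
            i = self.keys.index(key)          # search cost: i + 1 probes
            self.keys.insert(0, self.keys.pop(i))
            return i + 1

    lst = MTFList('abcde')
    print(sum(lst.access(k) for k in 'eeebee'), lst.keys)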


Ahhh, I remember my old college course on time complexity, which was effectively the professor picking on random kids and saying something like, "I have some 4-dimensional matrix multiplication algorithm which then uses a Bloom filter to match on random ids... what is the time complexity of this?", and some poor kid would stutter something like "aaaaahhhh.... O(n log n)", only to have the professor shout "No, wrong!" and go pick on someone else.


Interesting positioning as "great ideas" -- it seems like the topics are a pretty standard undergrad Theory of Computer Science course.


I wouldn't expect the theory of a field to be built upon Bad Ideas!

It is a strange course though. It is grad level (2xx at Harvard), but the syllabus is survey topics from a range of undergrad courses in CS. I would guess it's a smattering of advanced topics in those areas, but the description doesn't sell it as having all those topics as prereqs. Maybe it's meant as an extension school / professional master's class for people who do not have CS degrees? Refugees from math, physics, and autodidacts?


It's a second semester freshman course for CMU CS undergrads.


That’s my impression too, maybe the name just gets undergrads more excited.



And where are Category Theory and Lambda Calculus?


Lambda calculus is one way to describe computable functions. This course chose to focus on Turing machines.

Category theory is historically not important to the theory of computation, and it is of questionable use if you aren't familiar with abstract algebra.
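
For a taste of the lambda calculus side anyway, Church numerals fit in a few lines of Python (the standard encoding, as a toy demo):

    # Church numerals: n is "apply f n times". Arithmetic is composition.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    to_int = lambda n: n(lambda k: k + 1)(0)   # decode by counting applications
    two, three = succ(succ(zero)), succ(succ(succ(zero)))
    print(to_int(add(two)(three)))             # 5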


I have nothing to add other than saying that Physicists and Philosophers often make better programmers than Computer Scientists. :)


Sometimes you find the truth at the bottom of HN. It hurts; I spent my early adulthood trying to learn most of the topics on this page, thinking it would make me a great software engineer. These are good ideas, but I wouldn't call them great; they are often very dull and don't make one a great thinker.


Not really on the topic of this link at all. But I wouldn't be surprised, since you're likely comparing the few physicists and philosophers who switch fields against the average computer scientist.


> Garbage collection: This is a huge Yes now. Every language that's become successful since 1999 has GC and is designed for all normal users to use it to manage all memory. In five years, Rust or D might make that last sentence untrue, but even if that happens, GC will still be in the yes category.

Sorry, I had to guffaw when I read that last part about D. A dumpster fire of a language for garbage collection.


"string theory"

I found this stuff to be a lot of fun!


The most painful class I ever took. I know, I'm not a real nerd, or something. I'm okay with it.


Interesting to read your experiences. Looking over this, it seems to take a highly theoretical approach and work up, rather than a practical/applied approach and work down. I was surprised to see automata covered so early, which is more in the vein of computability theory than that of introductory concepts.


There are literally dozens of us making great careers solving problems with computers without CS degrees or any particular interest in the lowest levels of computing nor the mathematical backings / abstractions.

Horses for courses.


You might find this motivating : https://news.ycombinator.com/item?id=39721301



I think there should be a companion class on anti-patterns, or bad ideas in computer science. It's much harder to spot a horrible idea, and then you have to argue why not to do something.

Something like variable names being too long or too short, or how to figure out what to do with unrealistic timelines.


"Theoretical computer science" is garbage.

Of all the scientific fields, computer science is the one most closely related to fields where theory is most worthless. If you have a computer science theory, then write some damn code to prove it. Then share your code. Others can then run it and replicate those results. At the "theory" phase, the idea is worthless.


You are very wrong.

Many of the key results of theoretical CS are impossibility proofs. There is no code to be written when you show that something is not possible.

- Halting problem: the impossibility of a TM determining whether other TMs halt.

- Crypto: the impossibility of a Turing machine breaking a cryptosystem in polytime.

- Sorting lower bounds: the impossibility of sorting objects, given only a less_than operator on them, with fewer than O(n log n) comparisons.

and so on. There is no code to be written for these, because they are mathematical theorems.

> Computer science is about computers as much as astronomy is about telescopes. ~ Dijkstra
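
The halting result even fits in a few lines of (necessarily hypothetical) code, the classic diagonal argument; halts below is the oracle that cannot exist:

    # Suppose halts(f) were a real, total function. It cannot be; that is
    # exactly what the self-reference below shows.
    def halts(f):
        ...                  # hypothetical oracle; no correct body can exist

    def paradox():
        if halts(paradox):   # if the oracle answers "halts"...
            while True:      # ...loop forever, contradicting it; if it
                pass         # answers "loops", we return, contradicting it.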


Wow, just... wow.

Theoretical computer science is literally more solid and real than any code you could write. It's literally the mathematical foundation of how your favorite language works. It is all precise mathematical proofs, not just 'thoughts'.


What are mathematical proofs but mere 'thoughts'?

(I'm only being half rhetorical. I've been thinking about this deeply lately.)


The relationship between thoughts and proofs is analogous to the relationship between noise and music.

It’s a subset kind of relationship.

There is also the social aspect: people have to agree on it. Noise becomes music, and thoughts become proofs, only if several people agree.



You are either trolling or are absolutely clueless.

However in the event that you really want to learn this via programming see this book - https://news.ycombinator.com/item?id=39721301



