
When I was in college, one CS professor explained the difficulty of coding to me in terms of discreteness vs continuity. In the real world, things are continuous. If you accidentally build your structure with 9 supports instead of 10, then you only lose 10% of the strength of the structure, more or less. The strength varies continuously with the amount of support. But if you're writing a 10-line program and you forget one of the lines (or even one character), the program isn't 10% wrong, it's 100% wrong. (For example, instead of compiling and running correctly, it doesn't compile at all. Completely different results.)

Of course this logic doesn't hold up all the time. Sometimes you can remove a critical support and collapse a structure, and sometimes removing a line of code has little to no effect, but the point is that in programming, a small change can have an unboundedly large effect, which is the definition of discontinuity.

(I believe it was this professor, who was my teacher for discrete math: http://www.cs.virginia.edu/~jck/ )
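
To make that concrete, here's a throwaway Python sketch (purely illustrative): dropping a single character doesn't make the program 10% worse, it makes it refuse to run at all.

    # Two versions of the same tiny program; the second is missing one colon.
    good = "for i in range(3):\n    print(i)\n"
    bad  = "for i in range(3)\n    print(i)\n"

    compile(good, "<good>", "exec")      # parses fine
    try:
        compile(bad, "<bad>", "exec")    # one character short: refuses to parse
    except SyntaxError as err:
        print("100% broken, not 10%:", err)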




Wahoowa! This also explains the popularity of interactive platforms like Codecademy/CodeSchool/Treehouse, etc. Tons of hand-holding and pre-filled syntax. We describe them to our students as "coding on training wheels."

Cathy, a new frontend student, experienced similar struggles when starting her first real project - a simple HTML/CSS resume. She spent countless hours fixing minor typos and almost quit. It wasn't until she was reassured that this was _normal_, and that "real" programming was much different from Codecademy, that she felt she was truly learning. She wrote about her first month (with similar highs and lows as OP's visual) in this post: http://blog.thinkful.com/post/98829096308/my-first-month-cod....

Side note: Erik (OP) is an incredible guy; we had the pleasure of sharing our experiences in Edtech, and it's obvious that he truly cares about student outcomes.


> She spent countless hours fixing minor typos and almost quit.

It's an interesting anecdote!

When I was at uni, I noticed that a large number of "beginners" fell into this category too - frustrated by minor typos and language idiosyncrasies. Those who had the mental strength to endure it ended up passing the class, while those who just gave up would almost inevitably change their major (and thus, I guess, stop programming?).

But I think this is a result of poor educational methods, not of an inherent property of programming.


With respect to your CS prof at one of my favorite places (Go Cavaliers!), this is not an issue of discrete vs. continuous. If your structure can have either 9 or 10 supports, but not 9.001, it is discrete, not continuous, regardless of failure mode. And if you remove one of the 3 legs of your stool, you probably wouldn't have two-thirds of its support remaining, but whether you did or didn't would be an issue of proportionality, not continuity.

There are a lot of jumbled concepts here, most of which don't matter anyway, because what you are talking about is the phenomenon of graceful degradation. In the physical world, both natural and man-made, almost nothing at the macro scale is ever perfect, so the best designs tend to be those that remain good enough under the widest range of circumstances vs. a more common software goal of being perfect under perfectly controlled circumstances.

As software gradually moves out from the walled garden of a single mainframe to fill the world with interacting systems spanning diverse machines, sensors, communications channels, data types, etc., design for graceful degradation becomes more and more of a focus for professional software architects.

Coding in the gracefully degrading way is much harder than coding in the "if even one of your ten lines is wrong, you crash" tradition. The fact that even the latter is so hard for us humans means we will need more and more help from machines that learn what to do without being explicitly told by us.
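
For anyone who hasn't seen the term, a toy sketch of the gracefully degrading style (Python, hypothetical names): prefer the precise answer, but fall back to something serviceable instead of failing outright.

    CACHE = {"ACME": 9.99}   # last known-good values (hypothetical)

    def fetch_live_price(symbol):
        raise TimeoutError("pricing service unreachable")   # simulated fault

    def get_price(symbol):
        # Gracefully degrading: try the live source, fall back to stale data.
        try:
            return fetch_live_price(symbol)
        except (TimeoutError, ConnectionError):
            return CACHE.get(symbol)

    print(get_price("ACME"))   # prints 9.99 instead of a stack trace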


I agree that "discrete vs continuous" is not the perfect way of expressing the difference; it's just an analogy. (But the structural support example is continuous. You could have 9.5 supports by adding a 10th support with half the strength, etc. "Amount of support" is the continuous measure.)

But it's not just an issue of graceful degradation. The fact that tiny changes in a program can have very large effects is a feature, not a bug. We grade programming languages on their ability to concisely express complex operations, and that conciseness necessarily means that very different operations are going to have similar expressions (e.g. subsetting rows vs columns of a matrix typically differ only by a small transposition of characters, but the effect is completely different).
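
A concrete version of the matrix example (numpy, hypothetical values): only the position of one colon changes, but the two expressions select completely different data.

    import numpy as np

    m = np.arange(12).reshape(3, 4)   # a 3x4 matrix

    row = m[1, :]   # second row:    [4 5 6 7]
    col = m[:, 1]   # second column: [1 5 9]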

You can write software that degrades gracefully, but one syntax error (or other "off-by-one-character" problem) is still going to kill the program. You can talk about running your program on a large set of redundant servers with no single point of failure, so that you can update them one-by-one with no downtime, and that makes you robust against even syntax errors. But that's not helping you teach novices how to write code.


There are continuous programming languages out there - DNA is one such language, I guess. But I don't think the discrete vs continuous nature of a programming language is what makes it difficult. It's more that a person's mind may not conceptualize tasks algorithmically, and switching to that frame of mind is difficult for someone who isn't already in it.


That's a good point. DNA as a programming language has to be at least somewhat continuous, or else evolution has nothing to optimize because every change has a random effect.


DNA is discrete. It can be precisely represented symbolically.


DNA is a lot less discrete than you might think. There are epigenetic factors and population proportions, for example.

But even considering DNA as just a 4-letter language with discrete characters, my point is that many, even most, small sequence changes to a genome (e.g. single-nucleotide variants) have small effects or no effect at all, which gives evolution a smooth enough gradient to optimize things over time. That's what I mean by continuous in this context. The opposite would be, for example, a hash function, where any change, no matter how small, completely changes the output. Hence you couldn't "evolve" a string with a hash of all 7s by selecting for "larger proportion of 7s in the hash output", because hash functions are completely discontinuous by design. But you can evolve a bacterium that includes more of a given amino acid in its proteins by selecting for "larger proportion of that amino acid in protein extract".
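
You can see the hash side of that contrast in a couple of lines of Python (hashlib, arbitrary input strings): a one-character "mutation" scrambles the whole digest, so there is no gradient for selection to climb.

    import hashlib

    a = hashlib.sha256(b"evolve me").hexdigest()
    b = hashlib.sha256(b"evolve mf").hexdigest()   # one character changed

    print(a)
    print(b)   # shares essentially nothing with the digest above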


Unless you're doing Rails, in which case it'll be read as a magic method and guess what you meant :-P

Seriously, that was a major sticking point for me, having programmed for a long time: going from "if you have not declared that identifier, game over" to "magic happens".
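
For readers who haven't run into this flavour of magic, here's a rough Python analogue (not Rails itself, just a hypothetical sketch in the spirit of method_missing): calls to methods nobody ever declared get intercepted and answered on the fly.

    class MagicFinder:
        def __init__(self, rows):
            self.rows = rows   # list of dicts standing in for DB records

        def __getattr__(self, name):
            # Intercept undeclared find_by_<field> calls and "guess what you meant".
            if name.startswith("find_by_"):
                field = name[len("find_by_"):]
                return lambda value: [r for r in self.rows if r.get(field) == value]
            raise AttributeError(name)

    people = MagicFinder([{"name": "Ada"}, {"name": "Erik"}])
    print(people.find_by_name("Ada"))   # works, although it was never defined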


I've been teaching myself off of online resources and 'magic' was what I hated most along the way. I can't debug magic. I've ended up digging so deep to understand things that I'm covering assembly now. It's painful, but going so far has made everything else make a lot more sense. Data structures are easier to conceptualize and will be easier to work with (for example).

But most people I know don't get this far when they self teach or do a bootcamp. They just know that, given a framework, they can build things but not how anything was really built. Sure it's effective to push out a product, but it makes diving into real programming pretty difficult. That's just my perception though.


You need to keep in mind that "magic" is only sufficiently "advanced" (or rather obfuscated) technology, but someone somewhere said something along the lines of "Things that work as if by magic also break as if by magic"... I've been looking for the reference ever since.


Nicely put. Not enough people dive to the lower levels.


We should start at the lower levels. It is all easy to understand if you build up from first principles: binary, logic gates, CPUs and assembly, then it splits with compilers on one branch and LISP and Smalltalk on two others and a bunch of brush and shrubs and time to retire that metaphor.

Unlike climbing Everest or understanding how the human body works, the challenges in learning to program are entirely man-made! (Except for recursion, of course.)


I'm in a similar boat. Started with the training wheels of Codecademy and such after having a basic working knowledge of HTML/CSS/JS and wanting to build database-driven projects and grow my skillset.

Picked up RoR and was quickly overwhelmed with a million unfamiliar concepts, as pointed out in the excellent (and very similar) article "This is Why Learning Rails is Hard" [1]. That knowledge tree they show is one I didn't formally stumble upon until later, but throughout my progress I realized "hey, this concept is really part of the much broader topic of X." Then I'd go down a rabbit hole on X.

Before I found that tree though, I had already given up on Mike Hartl's tutorial once, and decided I really needed to have a functional grasp of Ruby and core programming concepts. From there I realized "Ruby/RoR on Windows is not ideal." Then I went down the whole devops path and learned about things like Vagrant/Chef/VirtualBox, etc.

I also started picking up books on much deeper computing concepts to understand the lower-level mechanics of the magic. Like you, I went down to first principles and even a bit of assembly. I couldn't write any to save my life and my knowledge is still fuzzy, but I now grasp how the concept of data structures came to be, and more importantly WHY.

I recently tackled Mike Hartl's Rails Tutorial again. His updated version is a great improvement, and this time I actually understand the concepts he goes through. When a new one is introduced, I have enough of the underlying knowledge to at least have a sense of what/why something is, or what I need to Google to learn more.

I wish more classes online provided "deep dive" resources/links on things. Like, if Codecademy has an exercise on Ruby covering types, an eager student might really benefit from a deep dive sidetracking into dynamically vs. statically typed languages, and a high-level overview of what they should know.

My biggest gripe with the tutorials that are out there these days is that they cater to either absolute beginners or competent users. Wish they did a better job of trying to bridge the gap from absolute beginner to intermediate.

Another great example of that is the concept of design patterns. I haven't found many great beginner/intermediate resources on this, but as I've started learning more, I found myself saying "hmm, seems like lots of people do this a similar way - I wonder why." Turns out some approaches to problems are largely solved issues for a majority of use cases, hence: design patterns. This got me down the whole path of software architecture and starting to grok some of the higher-level abstractions and ways of thinking in the abstract, which was tremendously helpful compared to just being given specific examples with no broader context.
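
As a tiny taste of what those patterns look like in practice, here's a hypothetical sketch of one classic (Strategy) in Python: the "lots of people do this a similar way" observation, written down.

    def by_price(item):
        return item["price"]

    def by_name(item):
        return item["name"]

    def sort_catalog(items, strategy):
        # The sorting code never changes; the interchangeable strategy does.
        return sorted(items, key=strategy)

    catalog = [{"name": "widget", "price": 3}, {"name": "gadget", "price": 2}]
    print(sort_catalog(catalog, by_price))
    print(sort_catalog(catalog, by_name))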

[1] https://www.codefellows.org/blog/this-is-why-learning-rails-...


That is absolutely something that irritates me. I've just inherited a large RoR application, and the amount of "magic" and things by convention is driving me crazy. There should be answers to questions like "why is this the way it is?!"

On a side note, if anyone has some great resources for RoR, I'd love to have them linked. I suspect my inexperience is the source of my problems, and I'd welcome any assistance anyone would like to give.


The guides on the RoR website are pretty good:

http://guides.rubyonrails.org/

Which bits did you find were magic? The bits of convention I can think of that you'd have to know about are:

DB naming conventions - these are used so that it can do joins etc. easily behind the scenes; they're pretty simple, so not a huge problem, I find.

Rendering at the end of controller actions - it'll render the template with the same path as your route - again relatively straightforward.

Class loading - lots of things are loaded at startup time, so that you don't have to include files - I have mixed feelings about this, it feels easy and simple at first, but could leave you unsure where code comes from or which methods you can use in which files (e.g. view helpers). Definitely more magic.

One other area which does lead to real problems is that Rails sites often use a lot of libraries in the form of gems - this leads to unknown, sometimes poorly maintained or inappropriate code being pulled in at runtime, and makes it far harder to reason about things like, say, authentication if you're using a gem. This is my biggest complaint with Rails - the lack of transparency of code paths when using gems like devise, paperclip, etc. - but it is unfortunately quite common in web frameworks.

They actually got rid of quite a few bits of method_missing madness I think recently so that magic is gone at least (all those magic find_by_ methods are deprecated or removed, not sure which as I never used them). I haven't found the conventions get in the way much as it's something you learn once and can apply anywhere, but completely understand why someone might object to some of the magic setup for helpers/rendering.


The routing system and associated view helpers can really get confusing.

For example:

    link_to @story.title, @story
You have to know that Rails has some automatic routing based on the class of an object. If @story is an instance of the Story class, Rails basically does this underneath:

    link_to @story.title, send("#{@story.class.name.downcase}_path".to_sym, @story.to_param)
There's implicit conversion of class names going on under the hood in a few places. It's all documented but it's not easy to find the documentation when you don't know what you are looking for.

The thing that really screws up people starting with Rails is not understanding the various layers (HTML, views, controllers, models, HTTP, etc.) and how Rails puts those together. If you don't know how to do web programming with basic HTML and PHP, Rails will eat you alive with its seemingly magical behaviors.


I have to agree - the path helpers are very opaque. It would probably do Rails well to generate an app/helpers/path_helper.rb file with the actual implementations in it.


`rake routes` will output the routes and paths. Appending _path or _url to the route name gives you the appropriate helper methods.


Sure. But that still doesn't tell me exactly which arguments they take. Or give me an opportunity to debug the code when it doesn't do as I expect. I realise that it just-works (tm). It's when it doesn't, it gets problematic.


It's actually pretty easy to tell which parameters they take if you look at the URL for the route.

For:

    story GET   /story/:id(.:format)  story#show
You get

    story_path(id)
    story_path(id, format)
    story_url(id)
    story_url(id, format)
In practice it doesn't cause as many problems as you think, even in large applications.


When learning Rails, I found the DB naming conventions confusing enough that I wrote a blog post summarizing how everything is supposed to be named when I figured it out, since nobody else seems to have:

https://shinynuggetsofcode.wordpress.com/2013/09/30/conventi...

Like a lot of the Rails stuff, it feels like amazing cool magic when things just work. But then when they don't work and do something weird instead of what you expected, it feels like it takes forever to figure out why, what was named wrong, and what it's supposed to be named.


Convention over configuration is awesome, if you know the conventions. If you don't, it's all magic. At least with configurations, you can read them and get some pointers.


Aside from the things others have mentioned, which are all really good, there are also some excellent books on the subject.

Jose Valim's Crafting Rails Applications[1] is a wonderful resource, since it deliberately sets out to peel back the layers of magic. A lot of the techniques are ones I probably would not use in practice (storing views in the database and rendering them!), but they serve to elucidate the operation of the entire view stack. Really good stuff.

Two other good books are Rails Antipatterns[2] and Objects on Rails[3]. Neither of them has been updated in a long time, but the general principles will still hold. The former is more practical, the latter more theoretical; prescriptive and fanciful food for thought, respectively. Both solid.

1. https://pragprog.com/book/jvrails2/crafting-rails-4-applicat...

2. http://railsantipatterns.com/

3. http://objectsonrails.com/


If you're not an experienced Rubyist, I'd recommend reading David Black's book The Well-Grounded Rubyist. Unlike many introductory books on programming languages that focus on making you productive in that language quickly, it focuses on building a deep understanding of the language. When I later read the book Metaprogramming Ruby, which uses parts of Rails for many of its examples, I already knew many of the techniques thanks to David.

http://www.manning.com/black2/


Ryan Bates' Railscasts are great. Unfortunately he stopped producing new ones, but they are still a great resource:

http://railscasts.com/


Just wondering whether a distinction should be made between learning a framework (RoR, jQuery) and a language (JavaScript, Ruby). Frameworks do magic, languages generally don't.


> But if you're writing a 10-line program and you forget one of the lines (or even one character), the program isn't 10% wrong, it's 100% wrong. (For example, instead of compiling and running correctly, it doesn't compile at all. Completely different results.)

This is where the beauty/simplicity of some programming languages, namely interpreted languages (e.g., Python), comes in: if a bad line of code never gets executed, then the program itself will run fine. In other words, if the line is never called in the program, then you'll never know that the functionality that line provided was broken. In this case, the analogy breaks down a bit - and it also shows why certain languages are easier to learn than others (e.g., Python vs. C++).
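
A minimal Python sketch of what I mean (hypothetical function name): the typo sits there quietly until someone actually takes that code path.

    def rarely_called():
        # This undefined name only blows up if the function is ever called.
        return nonexistent_variable + 1

    print("program starts and runs fine...")
    # rarely_called()   # uncomment it and the hidden error finally surfaces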


This is a risk that you should be aware of when using languages like this and thus use them appropriately. To continue with the building analogy, you don't want it to take an actual fire to learn that all your fire exits are dead-ends.


I'd rather know when something is incorrect, rather than pushing to production and finding out later because someone else took that code path.


If you rely on your compiler to tell you that your code is correct, there are whole classes of bugs that are waiting to surprise you in production.

I think that many years of developing large applications in Perl were really good for me. Perl is compiled when you run it, so you get the basic-syntax check that you get with other languages. But it's also very lenient, so you learn through experience to get your logic right, test return values, and do all of the things that help make sure that a program which executes is executing correctly.


There is a difference between RELYING on the compiler to catch 100% of bugs and having an awesome type system that can take whole CLASSES of bugs and make them impossible to get past a compile.

Is a statically typed language more likely than a dynamic language to work correctly in production, if both have 0 tests? Yes.

Is either ideal? No.

Can both be improved by adding a few tests? Yes.


> Is a statically typed language more likely than a dynamic language to work correctly in production, if both have 0 tests? Yes.

I'm not sure I agree. "Work correctly" does not just mean "compile correctly". I would want to see a lot of evidence to back up any assertion that programs written in statically typed languages are less likely to contain logic errors that compile and run just fine but don't do what the programmer (or his client) actually wanted.

I agree that neither is ideal and that adding testing can improve any code.


> I would want to see a lot of evidence to back up any assertion that programs written in statically typed languages are less likely to contain logic errors that compile and run just fine but don't do what the programmer (or his client) actually wanted.

As certain assertions related to logic can be encoded into static types (especially in a language with a type system more like Haskell's than, say, Go's), while static typing can't eliminate all logic errors, it can reduce the probability of logic errors escaping detection in the absence of testing, since compiling a statically typed program is, in effect, a form of testing (limited to those assertions about behavior which can be encoded into the type system.)
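
A lightweight illustration of "encoding an assertion into the types" (Python type hints plus a checker like mypy standing in for a real static type system; names are hypothetical):

    from typing import NewType

    UserId = NewType("UserId", int)
    OrderId = NewType("OrderId", int)

    def cancel_order(order: OrderId) -> None:
        print(f"cancelling order {order}")

    user = UserId(42)
    cancel_order(user)   # runs, but a static checker rejects it:
                         # a UserId is not an OrderId, by construction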


> compiling a statically typed program is, in effect, a form of testing (limited to those assertions about behavior which can be encoded into the type system.)

Fair point. (Especially if, as you say, you are using a language with a type system like Haskell's, which to me is more like a program analysis engine than just a type system.)


I agree with you on all points, but the parent sounded like he was relying on the compile-time checks to determine correctness. I was making the point that that is a bad idea.


> This is where the beauty/simplicity of some programming languages, namely interpreted languages (e.g., Python), comes in: if a bad line of code never gets executed, then the program itself will run fine. In other words, if the line is never called in the program, then you'll never know that the functionality that line provided was broken. In this case, the analogy breaks down a bit - and it also shows why certain languages are easier to learn than others (e.g., Python vs. C++).

Actually, that's one of the pitfalls of interpreted languages.

You want to Crash Early & Crash Often [1] or you'll move along, merrily ignorant of a serious problem just because it doesn't get executed.

I try to solve this shortcoming of languages like Python with proper unit testing. It gives me the confidence that there's a decent coverage of the different code paths so that I won't learn about the problem in production.

[1] - https://pragprog.com/the-pragmatic-programmer/extracts/tips
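
For instance, a pytest-style sketch (hypothetical code, with the bug left in on purpose): a test that merely exercises the rarely-taken branch is enough to surface the error in CI instead of production.

    # module under test (hypothetical)
    def classify(n):
        if n < 0:
            return negative_label()   # NameError lurking on this branch
        return "non-negative"

    # test_classify.py
    def test_classify_negative():
        # Forces the branch above to run, so the hidden error fails the build.
        assert classify(-1) == "negative"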


> This is where the beauty/simplicity of some programming languages, namely interpreted languages (e.g., Python), comes in: if a bad line of code never gets executed, then the program itself will run fine.

That's not beautiful; that's horrendous. A program that might contain syntactic (!) errors has no claim to being a sublime mathematical construct.

I'd say it's "beautifully simple" when I can tell you with 100% confidence that my program will never, ever, ever experience errors of a certain type. Even better if I can tell you with 100% confidence that my program contains no errors at all (which is possible with proof-based languages).

Saying a Python program is beautiful because it can have hidden failure conditions is like saying that a poorly maintained gun is beautiful because it can fire when rusty (but watch out for explosions!).

I wish, when learning to program, that I'd been taught to write universally correct code instead of "mostly correct" code.


Wouldn't it be better to catch an error at compile time than throw an exception or have the app completely fail when a bad piece of code is run?


Depends on if you're trying to engineer fault-tolerant, robust systems, or trying to learn how to program.


Quite right. I was only mentioning it w.r.t. actual learning, not a more general use case. :/


> This is where the beauty/simplicity of some programming languages

Simplicity? Yes, probably (at least as long as I am writing the code and not debugging it). But definitely not beauty. I find this particular behaviour the ugliest part of interpreted languages. I may make a small typo, incorrectly capitalize a variable name, or forget a quote, and nothing will tell me that my code is wrong or where exactly it is wrong - it will silently skip the error and happily show me wrong results.


Unfortunately, this is a dark hole of horrible bugs just waiting to happen, not to mention that many interpreted languages are very liberal with silent type conversion. It is a nightmare to deal with in a large system written by careless programmers.


That's closer to the definition of instability than it is discontinuity.


"In mathematics, a continuous function is, roughly speaking, a function for which small changes in the input result in small changes in the output." [1]

I could clarify and say that the change doesn't have to be unboundedly large in absolute terms, but rather relative to the change in input. (i.e. a jump discontinuity from 0 to 1 is not absolutely unbounded, but it is relative to an arbitrarily small change across the jump.)

[1] http://en.wikipedia.org/wiki/Continuous_function
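
For completeness, the formal version of that informal statement (the standard epsilon-delta definition, in LaTeX):

    % f is continuous at a if arbitrarily small output changes can be
    % guaranteed by taking sufficiently small input changes:
    \forall \varepsilon > 0 \;\; \exists \delta > 0 :\quad
    |x - a| < \delta \;\implies\; |f(x) - f(a)| < \varepsilon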


Then you might enjoy example 7 on page 18 of this document:

UNIQUE ETHICAL PROBLEMS IN INFORMATION TECHNOLOGY

By Walter Maner

http://faculty.usfsp.edu/gkearns/articles_fraud/computer_eth...


Well, that's probably where my CS prof got the example. Thanks for pointing me to the likely source!


I remember this one thing he pounded into our brains: "CS doesn't stand for Computer Science. It stands for Common Sense!" Heard ad nauseam in CS 340.


If you're lucky it won't compile. It's when it's 2% wrong that makes programming so hard.


In a general sense, it can be more difficult to reason about what effect adding or removing something will have because the skill is still being developed.


Yes, but that's true of any new skill that one might try to learn. I'm talking about specifically what makes programming harder than other things to learn. When things are continuous, you can at least experiment by making small changes and be confident that those changes will only have small effects.


I say it's "brittle"... those words, discrete and continuous, don't really apply cleanly, though I understand the idea somewhat intuitively.


Yes, "brittle" is how it's been described to me and how I describe it to others.



