Why learn Racket? A student's perspective (micahcantor.com)
129 points by hydroxideOH- on Feb 21, 2022 | 52 comments



The point about simple evaluation models, while true for basic Racket, is actually the furthest from the truth in idiomatic Racket.

The idiomatic way to solve a problem in Racket is to develop new syntax (and evaluation orders) which are suited to the problem.

In fact, Racket has pythonic list comprehension syntax too:

    (for/list ([x '(2 4 8)]) (sqr x))
It also has lazy-evaluation langs, langs with a static type-checking pass before evaluation, DFA compilers, OOP...

Basically the floodgates are open, even if the river is still a bit barren. Every confusing language feature in existence can be added to Racket. The only saving grace is that anyone, including you, can replace it with less confusing syntax / evaluation orders if they so desire.
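
To make that concrete, here is a minimal sketch (my own toy, nothing to do with the linked post) of how little it takes to bolt new syntax onto the language; `swap!` is a made-up form, defined on the spot:

    ;; swap! is hypothetical: a new form that expands into the usual let/set! dance
    (define-syntax-rule (swap! a b)
      (let ([tmp a])
        (set! a b)
        (set! b tmp)))

    (define x 1)
    (define y 2)
    (swap! x y)   ; x is now 2, y is now 1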

I wrote a Macro which adds identifiers to your program based on a SQLite database's column names. [0] If the database isn't found, your program does not compile. How is that for a confusing alternative evaluation order? If that's your nightmare, my point is made.

[0]: http://tech.perpetua.io/2022/01/generating-sqlite-bindings-w...


Developing domain-specific languages/minilanguages is claimed by many to be idiomatic for many Lisps.

And DSLs are a particular strength of Racket (especially with its syntax objects, pattern-based syntax transformers, module system and submodules, and `#lang`).

But, just to avoid giving non-Racketeers the wrong impression... I don't recall anyone ever calling a piece of Racket code non-idiomatic merely because it didn't introduce a DSL or syntax extension.

In my (abandoned) Racket book, I intended to get people commercial-grade productive the first day, and introducing syntax extension wouldn't come until calendar weeks/months later. (Related: there'd be an entire chapter at the end, entitled "Don't Use `eval`", starting out like: https://lists.racket-lang.org/users/archive/2014-July/063597... :)


I've spent some time looking at Racket open source projects, and I did not come away with the impression that one must start by creating one's own #lang. There are a lot of custom DSLs out there, but it seems to me that most of them come about, not because using Racket compelled the author to create a DSL, but because creating a DSL compelled the author to use Racket. No other platform I know of (not even MPS) seems to even come close in terms of ease of use and number of batteries included.

I could admittedly be projecting here. Wanting to create a DSL is what brought me back to Racket after a long hiatus. But I don't think so. Exhibit B is that, if you go to Racket's Discord or Slack, you'll see that people are mostly discussing how to solve concrete problems with standard language features. The volume of talk about macros and #langs is much, much smaller.


Another example... this Arc code doesn't finish evaluating until a couple of round trips with a web browser have finished. Absolutely cursed:

    (defop said req
      (aform [w/link (pr "you said: " (arg _ "foo"))
               (pr "click here")]
        (input "foo") 
        (submit)))
(If you didn't know, Arc is implemented in Racket, and powers this site...)

http://paulgraham.com/arcchallenge.html


One thing to note: the reason SICP uses Scheme is not that the resulting programs are simpler, but that the mental model of what is happening under the hood is simpler, and easy to refine as you go along. This makes it possible to come to the end of the class ready to create your own Scheme interpreter. I don't think that's possible with any of the alternatives, like Python.
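
To give a flavor of what "create your own Scheme interpreter" means, here is a toy sketch of my own (far smaller than SICP's metacircular evaluator, written in Racket): numbers evaluate to themselves, symbols are looked up in an environment, lambda builds a closure, and everything else is an application.

    ;; expr is quoted source; env is an association list of (symbol . value)
    (define (my-eval expr env)
      (cond
        [(number? expr) expr]
        [(symbol? expr) (cdr (assq expr env))]
        [(eq? (first expr) 'lambda)
         (lambda args
           (my-eval (third expr)
                    (append (map cons (second expr) args) env)))]
        [else
         (apply (my-eval (first expr) env)
                (map (lambda (e) (my-eval e env)) (rest expr)))]))

    (my-eval '((lambda (x) (add1 x)) 41)
             (list (cons 'add1 add1)))   ; => 42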

I am sure this outcome was rare, of course, even when it was the introductory course at MIT. And as you point out, Racket has changed a lot as well. I think it has been some years since Racket stopped calling itself a Scheme, feeling that the changes they had made deviated too far.


In my (abandoned) Racket book (which had a very specific positioning), I was going to introduce some of the non-Scheme iterators like `for/list` at the start, to help people do commercial work on the first day... and then later teach some more old-school Scheme-idiomatic ways to do things, with named-`let` and without shoehorning in mutation. (Then, once people were comfortable with the less-familiar way, they could decide when to use which.)


> I wrote a Macro which adds identifiers to your program based on a SQLite database's column names.

Cool! F# has "type providers" which seem to be similar to this. Do you have any experience with them? I haven't used them but this sounds like an extremely powerful idea that more languages should adopt.


Thanks! I found some type provider docs [0], which are at some level an ML version of my Racket blog post. I had no idea type providers existed. The closest inspiration I had was probably Kotlin SQLDelight or Hibernate Java codegen, which (probably unnecessarily) requires in-code annotations. Anyway, that's very cool!

The F# type provider looks so stateful ;) Although they are pretty similar, I think the Racket pattern syntax is very beautiful, once you grok pattern variables vs. syntax objects vs. quoted Lisp data anyway.

Racket actually generates code. It could generate Typed Racket type expressions too. It makes me wonder how LINQ is implemented--my lisp hubris makes me guess it's not as simple as the equivalent Racket macro, but I could be wrong.

I guess Protocol Buffers are a mainstream system similar to all this--take CSV file input instead of proto file input. Of course that requires an external compiler for most languages, whereas the Racket protocol buffer plugin basically just converts the .proto file to S-expressions and all the codegen happens within Racket ;p

[0]: https://docs.microsoft.com/en-us/dotnet/fsharp/tutorials/typ...


F# has this kind of stuff too, where it will derive types for data from dynamic sources like databases or REST calls, etc. For an example: https://fsprojects.github.io/FSharp.Data/


It's not really dynamic typing, because it's happening at compile time. Really type providers are just a specific, restricted form of code generation.


You're a lot more pessimistic here than in your blog post!


Not really. I'm just saying the same stuff in a different #lang-- I mean in different words :p


That is awesome.


One aspect of Racket that I would expect to appeal to students, but which does not appear in this blog post, is its cross-platform (and widget-native!) GUI framework:

https://docs.racket-lang.org/gui/
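
For a rough idea of what that looks like in code, here is a minimal hello-world window (my own sketch, not an example from the docs page above):

    #lang racket/gui
    ;; A frame with a single button; clicking it pops up a native message box.
    (define frame (new frame% [label "Hello"] [width 200] [height 100]))
    (new button%
         [parent frame]
         [label "Click me"]
         [callback (lambda (button event)
                     (message-box "Hi" "Hello from racket/gui!"))])
    (send frame show #t)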

The first "side-project" I ever did was a tic-tac-toe program in Java AWT during my first year of programming in high school. AWT wasn't even part of the curriculum, it was just what I gravitated towards as a 13-year-old whose experience with computers consisted entirely of graphical applications.

Maybe the kids these days would be more interested in building a web app or something, but frankly I don't think it's surprising that writing code that primarily consumes and emits text at a terminal is not interesting to students who have never had any need for a terminal before.


I love racket/gui and recently shipped a product using it for a diagnostics tool! The killer features are, as you said, that it uses native widgets, but also that it doesn't require any sort of dynamic linking to heavyweight C libraries (other than the ones Racket itself comes with) and that you can easily compile it into a redistributable form with basically no dependencies for all platforms. I can't recommend it enough.


Can confirm, I very quickly picked up Swing when I started Java programming just because I wanted to make little games and cellular automata. Before that, we'd had Haskell forced upon us and the GUI stuff had been very difficult for a beginner to pick up.


I tried (not very hard) to see some screenshots but failed. Do you know of a gallery with example code?


https://alex-hhh.github.io/2021/09/screenshots.html is a nice gallery of one person's apps.


Great writing and I agree with these points. I learned Racket by working through HTDP [1] and although I've never used Racket for anything other than those exercises it was totally worth it. I think it (the language and HTDP) massively improved the way I think through, organize, and write everything else.

[1]: https://htdp.org/


Nice write up.

You mention SICP, but I’m wondering if you are using HTDP http://htdp.org/ ?

I’ve always been struck by the vision the authors state in the preface:

> everyone can design programs

> and

> everyone can experience the satisfaction that comes with creative design.

> Indeed, we go even further and argue that program design—but not programming—deserves the same role in a liberal-arts education as mathematics and language skills.

> A student of design who never touches a program again will still pick up universally useful problem-solving skills, experience a deeply creative activity, and learn to appreciate a new form of aesthetic.

I’m not a good programmer - I’m not even an average one - but the book and learning Racket have made me better.


We actually use a collection of notes written by faculty rather than a book. But the course is much more in the style of HTDP than SICP; we don't cover anything about compilers/interpreters, for instance, and the class focuses on problem solving and decomposition.


I actually learned Racket at the University of Tübingen last fall too and had the exact same experience as the author. I think it's a great language for learning recursive and functional thinking and for getting better at finding elegant and compact solutions to complex problems. Many students in my class were also unhappy with the language choice at my university, and I really can't understand it, because I had so much fun with such a different approach to programming.


I always say my appreciation for functional programming comes from having spent enough time in imperative hell.

That being said I think learning programming in an "expressions only" environment can enable the student to deal with more complex problems earlier, merely by making certain types of errors impossible.

I think the Elm programming language is the sweet spot for that.

- it does have elegant (Haskell like) syntax

- it still has a simple syntax because it deliberately omits certain features (namely typeclasses and do-notation)

- it has a self contained build system (compiler, package manager, repl, dev server) with a rich ecosystem of libraries

- it is comparatively easy to ship something tangible because it compiles to JS for the browser


It is interesting that recursion is still considered so novel on blogs and forums. From what I have seen of current CS curricula, recursion is usually taught in the first year, immediately after introductory CS. It is almost impossible not to use recursion when dealing with tree structures (using an explicit stack takes a lot more code; most coding interviews don't ask for an iterative solution unless absolutely necessary).


After 20 years of software development in a variety of languages and projects, I've come to the opinion that recursion really should be relegated to being a novelty.

Few languages offer tail-call optimization. If we look at the popularity of languages, then the likelihood any particular solution is going to end up in such a language is vanishingly small. For the remaining languages, fitting into the stack depth is easy to get wrong. It's easy to end up in an infinite loop that ends in an obscure error with a confusing, verbose, needle-in-a-haystack stack trace that overcomplicates debugging. And frankly, lots of programmers don't have a good handle on it, even the ones who studied CS in college, so asking them to maintain recursive code written by someone else usually ends pretty poorly.

So when we consider that, actually, no, it's not true that using an explicit stack takes "a lot more code" (it's like, three extra lines), recursion starts to look more and more like a code smell, a smell indicating the originating programmer is more a temporarily-embarrassed mathematician than an engineer.

To be clear, I have no problems conceiving of and implementing recursive solutions to problems. But every single one of my recursive solutions has eventually failed to survive contact with real life data. It might be next week when the code goes to production or it might be a year from now when the data processing needs grow, but I've eventually replaced them all.

I'm not the only one with this opinion. Many safety regulations covering vehicles from cars to spacecraft explicitly ban recursion because of these issues. We need to just stop with the fetishising of math in software development.


It's great to have support for tail recursion when implementing recursive solutions, but it is a tall order to add to more traditional languages. However, tail recursion is not the only trick Racket has up its sleeve.

In Racket you can't get a stack overflow.

At the time a potential stack overflow is detected, the oldest parts of the current stack are copied to the heap and a fresh stack is introduced. When the stack underflows, the saved stack is reinstated. [At least this is a rough idea of how it works - implementation details differ to make the process efficient.]

This means that you can rely on recursive solutions even when the recursive call occurs in a non-tail context.

This idea (no stack overflows) could be used in more traditional languages too. As you write, in traditional languages there is a real risk of bumping into the stack limit.
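
A quick illustration (my own toy example): this deliberately non-tail-recursive sum recurses a million levels deep, which would overflow the stack in many runtimes, but runs fine under Racket:

    ;; Non-tail-recursive: the + happens after the recursive call returns.
    (define (sum-to n)
      (if (zero? n)
          0
          (+ n (sum-to (sub1 n)))))

    (sum-to 1000000)   ; => 500000500000, no stack overflow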


Sounds very good, but you make it sound like there are no considerable downsides. Why aren't more languages doing it?


Beats me.


In web/gui programming it’s a good fit. Your data is a tree that you render on a screen, so it’s by default relatively shallow and not very big. It’s never going to blow the stack and the benefit of using recursion, closures and functional composition is you get clear, dense code and tend to make fatter data structures and more general code.


Yep. Same thing with filesystems. Since they're meant for humans to traverse, finding a directory structure with deep enough nesting to blow up the process stack or hit a runtime recursion limit is sufficiently rare that you shouldn't even worry about it. I'm sure there are plenty of other examples where the practical limit you'll see is much lower than what an address-space stack can fit. We had a task at an old job to create dependency and build trees of every service in the overall project and then operate on them. Even assuming a language runtime with a very small limit, say Python with its 997 or so default maximum simultaneous stack frames, we didn't have 997 individual services, so even the maximally malicious dependency tree couldn't have exceeded that limit.


> And frankly, lots of programmers don't have a good handle on it, even the ones who studied CS in college, so asking them to maintain recursive code written by someone else usually ends pretty poorly.

Honestly that speaks to the caliber of programmers at play here. Sure, the average bootcamp grad probably can’t use a recursive function, but a real software engineer should have mastered it during any serious intro class. Without recursion I’m not sure how we would even be programming (think of all the parsers and grammars relying on it!).

> So when we consider that, actually, no, it's not true that using an explicit stack takes "a lot more code" (it's like, three extra lines), recursion starts to look more and more like a code smell, a smell indicating the originating programmer is more a temporarily-embarrassed mathematician than an engineer.

It sure is a code smell if the average programmer in a given org can’t understand it.


>it's not true that using an explicit stack takes "a lot more code" (it's like, three extra lines)

I don't know if you mean that literally or just as a rhetorical exaggeration, but it's absolutely false. The first example that came to mind is quicksort: search for the iterative version of the algorithm and compare it with the traditional recursive version; easily a 10x factor. The increase in complexity isn't linear in code size either: the code that replaces the simple recursive calls is full to the brim with loops, state, and conditional mutations, i.e. Free Bugs, yummy yummy.

>Few languages offer tail-call optimization.

I was hacking on a (non-tail-) recursive solution to a problem a couple of days ago in a repl.it container, and a buggy solution exploded after consuming 15000 stack frames. The repl had 1024 MB RAM as a max limit. The point is, absence of tail call optimization is rarely if ever a real problem in a language that has loops. It's a big, big problem if the language doesn't have loops, but as long as the language has loops and relegates recursion to the "side role" it's so good at, it's just not that big of a deal.

This also fails to consider why recursion is such an attractive algorithmic construct. Tail recursion is absolutely trivial: it can be transformed into loops mechanically, and it rarely adds anything of value to the readability and expressiveness of code.

The vast majority of interesting and useful uses of recursion are of the non-tail-recursive variety, where you have to use a stack anyway. At this point the whole argument reduces to whether you can maintain a stack more efficiently and readably than the machine, which in my view ends with you losing to the machine 9 times out of 10.
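
To make the size comparison concrete, a small sketch of my own in Racket (the thread's language): summing the leaves of a nested list, once recursively and once with a hand-managed stack. Neither is anyone's production code; it just shows where the bookkeeping ends up.

    ;; Recursive: the code mirrors the shape of the data.
    (define (sum-tree t)
      (cond [(null? t) 0]
            [(pair? t) (+ (sum-tree (car t)) (sum-tree (cdr t)))]
            [else t]))

    ;; Explicit stack: same algorithm, but the bookkeeping is now ours.
    (define (sum-tree/stack t)
      (let loop ([stack (list t)] [acc 0])
        (cond [(null? stack) acc]
              [(null? (car stack)) (loop (cdr stack) acc)]
              [(pair? (car stack))
               (loop (cons (caar stack) (cons (cdar stack) (cdr stack))) acc)]
              [else (loop (cdr stack) (+ acc (car stack)))])))

    (sum-tree '(1 (2 3) (4 (5))))         ; => 15
    (sum-tree/stack '(1 (2 3) (4 (5))))   ; => 15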

>And frankly, lots of programmers don't have a good handle on it, even the ones who studied CS in college, so asking them to maintain recursive code written by someone else usually ends pretty poorly.

If those programmers were asked to instead maintain the explicitly iterative version of the code, I bet it would end even more badly. Recursion, after the initial cost of watching a few tutorials, is less confusing than loops. It's about hiding state: how on earth is explicitly managing all that state yourself better? This is like saying that "2+4*6/(3+1)*4" is best calculated as a long sequence of 2-argument assembly instructions because it's more explicit that way. It's more explicit, true, but - paradoxically - way more obscure.

>Many safety regulations covering vehicles from cars to spacecraft explicitly ban recursion because of these issues.

Embedded Systems safety regulations and conventions ban a lot of things, recursion is not special at all. One thing is dynamic allocation, another thing is multiple returns out of a subroutine. Are you ready to give up those 2 things?

It makes no sense to take the conventions of a very specific and idiosyncratic industry like that and try to derive from it universal rules that should govern all software.


> Embedded Systems safety regulations and conventions ban a lot of things, recursion is not special at all. One thing is dynamic allocation, another thing is multiple returns out of a subroutine. Are you ready to give up those 2 things?

Indeed. On one automotive project, we could not use C++ strings, due to the hidden dynamic allocation.


Do you have a source for the embedded systems safety regulations bit? I was under the impression that governments had neither insight nor competence to decide such matters.


The regulation I was thinking of is MISRA [1], a very popular set of conventions that is used and referenced in, among other things, the JSF program (the F-35 fighter airplane), the Jet Propulsion Laboratory (NASA), and AUTOSAR (another standard for embedded systems in automotive).

From [2] (2012 version) (warning: annoying ads and a misleading download button that takes you to registration, but it's the only free source I could find with detailed rationale for each rule):

Rule 15.5 : A function shall have a single point of exit at the end

Rule 21.3 : Memory allocation and deallocation functions of <stdlib.h> shall not be used.

From [3] (2004 version) (pdf) :

Rule 14.7 : [Same as Rule 15.5 of [2]]

Rule 20.4 : Dynamic heap allocations shall not be used

[3] is a pdf containing the 2004 version of the conventions, but no rationale is given for each rule.

>governments had neither insight nor competence to decide such matters.

On the contrary, things like banks, medical devices, aerospace, and defense are heavily regulated by the government. Source code is just another component in the whole system; why shouldn't it be regulated like hardware?

As for competence, the government is not all bureaucrats and officials, they can hire specialists to do their inspections.

[1] https://en.m.wikipedia.org/wiki/MISRA_C

[2] https://www.academia.edu/40301277/MISRA_C_2_012_Guidelines_f...

[3] https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...


And your non-recursive solution will usually be faster too.

It will usually offer simpler opportunities to parallelise, too.


So... unless Grinnell does things very differently, the author did not learn Racket; they learned the (Beginning/Intermediate/Advanced) Student Language, and that's the point.

Racket is a complicated language, designed primarily to support the easy creation of other languages: it would be as bad a choice (perhaps worse) for a first language as any other. All of the simplicity that comes out of HtDP (the focus on learning to program, on structuring programs based on the data) is enabled by the restricted language.

It's too bad that this naming confusion persists, as I think it hurts the effort to focus on teaching _programming_ in intro classes, vs. teaching X language (I don't want to teach people Racket any more than I want to teach them Python or Java. They can learn those on their own -- they'll probably learn at least a half dozen other languages over their career, all on their own, if they stick with it). This curriculum is about figuring out how to most effectively teach people to program: the language was created, after the fact, to support that.


Fair point, although I feel like Racket deserves credit for the way it builds upon itself. By design it allows you to have the Student Languages, each an iteration on the previous one, and all of them valid Racket programs.

As a counter-example you can't do much of anything in Java without introducing class and public-static-void-main-string-args.
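
For a trivial illustration of that continuity (my own example, not from HtDP): a definition written in BSL design-recipe style runs unchanged under #lang racket, with no class or main ceremony around it.

    ;; celsius->fahrenheit : Number -> Number
    (define (celsius->fahrenheit c)
      (+ 32 (* 9/5 c)))

    (celsius->fahrenheit 100)   ; => 212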


I think this still, somewhat, misses the point.

Sure, Racket has facilities to make implementing languages easier (that's the whole point), but that simply makes it _easier_ to implement BSL, etc. I could certainly, with more effort, implement BSL in Java or in any other language: the language that students are using really has nothing to do with the _host_ language. Things are somewhat confused by DrRacket (or, made simpler, depending on your perspective); the IDE is somewhat mixed up with everything else, but again, this functionality can and does exist in other editors: I could implement a mode for my BSL implemented in Java in, say, VS Code, with comparable features to DrRacket.

Or, to take a completely different tack, implement the IDE via the web a la WeScheme or, if you want an actual concrete example of something that isn't Racket hosted, Pyret.


What would be the point of implementing BSL in Java? You'd still have to teach the basics of object-oriented programming and all the details of Java's implementation... unless you abstract all that away to the point where you aren't teaching Java anymore. BSL being easy to implement isn't as important as BSL being a useful subset of Racket that's easy for students to understand.


I think it's important to show students how to think about languages as a set of tools. OP's post stabs at this quite well, with the "Languages are less important than you think" section, and as someone who has already jumped across 5 or 6 languages over the course of his just over 10 year pro career this definitely rings true.

I think it's fine to bag on languages like Racket, but a good engineer should be able to hop across any language and at least be able to read and understand what they're doing.


I think it's a good choice if you're majoring in CS, or anything closely related where you are very likely to learn another general-purpose language anyway (or have done so); then it's an awesome way to see different paradigms.

I think it's a bad choice if this is your only programming course during your studies; then it would be much more helpful to learn something that is a safer bet you'll be using in your life going forward. Yes, even JavaScript. If you're interested, you'll pick up another language and apply what you learned here - but if not, then I think there's a better chance you'll recall what you learned with the mainstream model and work from there.

This is my unscientific opinion, but it's based on all my experience with non-programmers, where you can be happy if they learned _something_ at all. They'll most often see themselves as having "learned some Python", for example, and are willing to expand from there, but they're more reluctant to transfer that knowledge to learn a new language. I don't think it's completely off, either, even drawing a parallel to learning natural languages (to a small degree, not full working proficiency).


Comment unrelated to the article: I knew I recognized that blog theme from somewhere; it turns out to be my own https://github.com/oltdaniel/dose theme. Thanks for using it. Hope you like it.

Comment related to the article: I've always wanted to dive deeper into functional programming languages, especially because I run into more and more stuff that is just simpler to model with a recursive pattern. Maybe I'll give it a new try within the next few weeks and see how far I get.


Thanks! I do like that theme and I only made a few tweaks to it with some of the code styling and the header.

And I'd definitely recommend it, programming in FP languages has been a bit of an addiction for me.


I would've put my money on this coming from a Northeastern student; nice write-up either way.


My money was on the University of Waterloo (biased) or MIT.


Nice writeup, keep it up!


I agree, keep writing.

I have spent a lot of my professional life using Lisp languages. Not as popular as Python, Java, etc., but Lisp (and other languages like Haskell) have a good effect on how we think about computation.


Thank you!


Ah, debates about programming language choice for CS1, without evidence. Gotta love the CS1 language wars. Someday some folks will actually gather evidence that really lets us know if these interesting claims about these languages hold any water. I don't think we've made much progress on that as a community. Fortunately, the people involved in these projects are doing more valuable things with their time.


> Someday some folks will actually gather evidence that really lets us know if these interesting claims about these languages hold any water. I don't think we've made much progress on that as a community.

Ironic that you should post this, having not done any due diligence on your own part.

Shriram Krishnamurthi, who, incidentally, was one of the original members of the Racket team, is at the forefront of CS education research, and one thing he thinks about is how to introduce people to computer science. This, of course, includes the choice of language. He, as well as his wife and two former students, wrote and recently published the book "A Data-Centric Introduction to Computing" (DCIC) [1], which is meant to be a new kind of CS 101 textbook.

In section 1.8, "Our Programming Language Choice" [2], the authors write about how Python is now a common introductory language choice that also enjoys industry use, but it can lead to frustrations for new students of computing, so they didn't want to use Python. Instead, they developed their own language, Pyret, for teaching. (Pyret is also used in another initiative of Shriram's: Bootstrap [3].) It's worth pointing out that although Pyret is very similar to Python in many regards, its spiritual heritage certainly includes Racket. Racket was designed at the outset as a language for teaching programming, and this desire to invest effort in tools for beginners has stuck with SK for the duration of his decades-spanning career. I'd suggest looking through his publications, blog posts, and Twitter feed for works/notes about CS education with regard to language choice. He's certainly not quiet about it.

All this is to say: people are working on getting to a solid answer about what language is best for introductory material. We just haven't come to a definitive conclusion yet.

However, I think we should leave all that aside and focus on something else: the original blog post for this HN thread is not a "debate", as you've suggested. It's one student's perspective. They are absolutely allowed to think aloud to the internet and share their resulting perspectives. They didn't claim to have all the answers. I think their post was rather well-written, exploring things that this student felt made learning programming easier. If that's not worth discussing, I'm not sure what is.

[1] https://dcic-world.org

[2] https://dcic-world.org/2022-01-25/part_intro.html#%28part._....

[3] https://bootstrapworld.org


I know Shriram very well; he and Kathi do excellent work, and as I said, they are focused on more valuable projects than trying to prove one language is superior to another for CS1. They are building tools and curricula, and have done some research that sheds light on the tradeoffs in using certain languages like Racket. Kathi's "Rainfall Accumulates" paper is one of the few papers that gets close to the space I am talking about. Shriram's analyses often leave me stunned at just how sharp that guy is, and help remind me that CS Ed really needs to level up its game.

That said, the HtDP curriculum that they were involved in, and led to Bootstrap and a host of their other projects, was not grounded in any kind of data collection or treated as an empirical research project. Smart people with good intentions built something and threw it into a classroom. Intentionally so, by design [1].

Their subsequent work with Pyret is much better grounded and well-informed, and I've been absolutely fascinated by it. Their data science curriculum delights me, and I have often thought that if I had more time I'd translate it to Python. I think their more recent proposal about a Table Abstract Data Type is just inspired. But we are still VERY far from ever proving things like, "Racket is easier for students to learn". I'm fairly sure it's not even worth doing.

I appreciate the original article was just one student's perspective. It's a perspective I don't entirely disagree with. Nonetheless, they make interesting claims not backed by any kind of published, empirical data. They hypothesize that Racket is virtuous to learn because of how it presents recursion, and because of its simplistic syntax. Cool theory, but not proven using Randomized Controlled Studies. I mean, most things in CS Ed aren't, so I don't think it's a surprising thing to point out. But that doesn't mean we should just sit back and accept these claims.

Personally, I have seen the HtDP curriculum do damage. I have also seen it firsthand do a lot of good. My Racket relationship is complicated [2]. The author makes a great point of the potential value of these models, and also mentions how this whole debate is sort of pointless. I believe that at the end of the day, the CS1 language is simultaneously very important and yet somehow unimportant - it's all just the psychology of the people involved. That, too, is an unsubstantiated claim.

But please, tell me more about how I have not done my due diligence. Would you like to cite some actual RCT papers that back up the claims founding all this stuff? Or do you want to keep citing textbooks like that somehow proves something?

[1] https://felleisen.org/matthias/Thoughts/Measuring_education....

[2] https://github.com/acbart/myracketrelationshipiscomplicated....



