Pyret Programming Language (pyret.org)
210 points by kristianp on Aug 22, 2021 | 134 comments



Some past related threads:

Pyret – A language exploring scripting and functional programming - https://news.ycombinator.com/item?id=13185759 - Dec 2016 (267 comments)

A Case for the Pyret Programming Language - https://news.ycombinator.com/item?id=11986977 - June 2016 (37 comments)

Start Coding in Pyret - https://news.ycombinator.com/item?id=9070834 - Feb 2015 (22 comments)

Pyret: A new programming language from the creators of Racket - https://news.ycombinator.com/item?id=6701688 - Nov 2013 (283 comments)


I love this. I think people often underestimate the value of dynamic runtime checks. For certain applications, e.g. many types of scientific/exploratory software, dynamic checks and static checks have nearly identical utility. But static checks can get really tricky to work with as a developer.

I built a library for doing modular runtime verification in Python (https://github.com/mwshinn/paranoidscientist) and evaluated it for scientific software. In the end, it was pretty effective, but not perfect, and there are some major changes I would make if I were to do this again. One problem was that some of the most important cases to check were important because they were difficult to check. (E.g. some function arguments change the meaning of other arguments - this is extremely common in major frameworks like numpy/scipy.) By contrast, the flashiest feature in my package was runtime checking of hyperproperties (i.e. checking properties like monotonicity or concavity that depend on relationships between multiple executions of the function), but this was rarely used in practice.

The two most common criticisms I hear about runtime checking are (a) it is just an assert statement under the hood, and (b) the performance hit is unacceptable and the only solution is static checks. Regarding (a), sure, they may reduce to assert statements, but most idioms in programming also "reduce to something under the hood". The question is whether dynamically-checked (refinement) types/predicates are a useful abstraction, and in my experience, yes they are. Regarding (b), you probably don't want to use runtime checks on software intended to be run primarily by people other than the developers. But for many classes of problems, software is written as a means of discovery rather than as a tool for someone else to accomplish a particular task. For these problems, runtime checks and static checks are approximately equally useful. Static checks are nice to avoid because they can get you into deep water really quickly, so their scope can be quite limited. Also, people tend to overestimate the performance penalty of dynamic checks. Even complex checks often incur no more than a 10% performance penalty.
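(Pyret itself, since that's the topic here, bakes this idea in as refinement annotations that are checked dynamically. A rough sketch, assuming I'm remembering the %(...) refinement syntax correctly; is-probability and odds are made-up names:)

    fun is-probability(n :: Number) -> Boolean:
      (n >= 0) and (n <= 1)
    end

    # The %(...) refinement is checked dynamically, when odds is called.
    fun odds(p :: Number%(is-probability)) -> Number:
      p / (1 - p)
    where:
      odds(1 / 2) is 1
    end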


hey, I like your work on Paranoid Scientist. I've never come across hyper-properties: is this something you invented?

Regarding hyper-properties: I assume they only work on immutable data values, otherwise it would be hard to manage historical objects so that they can be part of any of the predicates.

I'm working on a similar project that you may find interesting: https://odipar.github.io/manikin/. I may want to include hyper-properties in future releases.

edit: found this on hyperproperties: https://lamport.azurewebsites.net/pubs/hyper2.pdf


Glad you like Paranoid Scientist, and thanks for the link to your project!

Yes, you're right that it only works on immutable data types. At first I implemented something that copied the object, but it used way too much RAM and ended up being quite buggy, with a lot of edge cases.

I didn't invent the term hyperproperties (sadly). If you do end up implementing hyperproperties in runtime checks, you'll definitely want to look into taking a statistical approach. The key to making this work is reservoir sampling (https://en.wikipedia.org/wiki/Reservoir_sampling) - there are some more details of how I did this in the Paranoid Scientist paper (https://arxiv.org/abs/1909.00427).
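(For the curious: the single-element version of reservoir sampling is tiny. Here is a rough sketch in Pyret, since that's the topic of the thread; reservoir-one is a made-up name, and I'm assuming num-random and the standard some/none Option constructors are available in the default environment:)

    fun reservoir-one(items):
      doc: "Pick one element uniformly at random from a list in a single pass"
      # Keep the i-th element with probability 1/i; after the pass, every
      # element has had an equal chance of being the one that is kept.
      for fold(acc from {chosen: none, seen: 0}, item from items):
        if num-random(acc.seen + 1) == 0:
          {chosen: some(item), seen: acc.seen + 1}
        else:
          {chosen: acc.chosen, seen: acc.seen + 1}
        end
      end
    where:
      reservoir-one([list: ]).chosen is none
      reservoir-one([list: 42]).chosen is some(42)
    end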


> https://lamport.azurewebsites.net/pubs/hyper2.pdf

What the parent post talks about is not even close to what is described in that PDF above.

"Proving" (or better verifying) stuff at runtime is trivial as it's only some asserts at the end.

Analyzing the properties of programs before you run them is in an entirely different league, OTOH.


I like the idea of attaching tests to the function itself. Oftentimes I find myself combing through the unit tests for various components to understand the expectations around how they are meant to be used (especially when they are not documented well enough). I feel this approach could help in that respect (and also with adding new test cases that could have been missed).


And in Python you can use doctests (tests inside the docstrings of functions).


That's also pretty nice, although elevating those tests to be first-class citizens makes more sense to me. My approach is that "code is truth, despite what the comment says", and docstrings feel too much like comments; but I haven't used Python professionally that much, so I may just not be used to them.


I haven't used doctests in Python, but I have in Rust. It really helps that any code block in a docstring is a doctest by default, so doctests ensure that examples in your documentation stay evergreen. It's nice to know that if there's a code example in the documentation, it will compile and run properly and hasn't been broken.


Practically speaking how useful are the Rust doctests?

I thought they looked like a genius idea, but I don't get how they work 95% of the time, when examples don't come with all the plumbing required to set up valid state for a function to actually work on.


There's a feature for that: you prefix the plumbing with `#`, and it runs in the doctest but doesn't render on the doc page. https://doc.rust-lang.org/rustdoc/documentation-tests.html#h...


That would help. I’ll have to experiment to better understand if it all works well enough.

I'm thinking of how Python's unittest has a setUp function for preparing every test without repeating myself. Maybe Rust has a similar way to define setup code for every test.


I'm looking to do something in this general area better than doctests. Looking for collaborators.

http://adsharma.github.io/pysmt


Elixir (Rust too, IIRC) checks code in doc comments for correctness.


I was excited that Pyret, which I was previously familiar with as a Racket #lang, had evidently gained enough independence to warrant its own website that does not even mention Racket. What a vindication of the #lang system! Then I read this:

"Ultimately, Racket’s #lang facilities, though designed to create new languages—and a great prototyping ground for Pyret—proved to not be quite enough to support a language creation process of the scale of Pyret"

I find this rather sad. The PLT group, dogfooding their flagship product's unique selling point, was not able to achieve their goals and switched to Javascript. I hope that one day it may once again be a Racket #lang.


Is #lang really Racket's USP?

I vaguely get that the idea is an extension of the Unix "shebang" convention to files which define modules. It might be preferable to a system based on file extensions or file associations (it's certainly less brittle). I guess the question is whether it's better than directives of the form "import(lang, file)", in other words, explicitly naming the interpretation language for each file that's imported. Naturally one could set a default so that a plain "import(file)" would do the right thing. A project full of files in extremely heterogeneous languages would make #lang more appealing, but it would also be possible to build some kind of file association system within Racket—personally I would prefer to put the #lang info in the filename or the path rather than stick it ("intrusively") into the file.


I think they were using "#lang system" as a synecdoche[0] for "Racket's language support". Racket's USP is definitely that it's a platform for language design and implementation[1].

On the other hand, I don't think it's a big deal they moved off of Racket. I actually think it's a success story that they were able to build on the platform and then move off.

[0]https://en.wikipedia.org/wiki/Synecdoche [1]https://en.wikipedia.org/wiki/Racket_(programming_language)


I like your positive attitude towards it - when you phrase it like that, it does indeed sound more like a success story. You may have changed my mind!


#lang is much more than shebangs. Racket #langs are implemented on top of Racket - they can be nothing more than alternative syntaxes, or they can have different semantics (written in Racket). This has the effect of rendering different Racket #langs interoperable within a project. This is wildly powerful - someone reimplemented Python entirely as a #lang, for instance, which meant that Racket could use (pure) Python libraries directly, and vice versa.

It's an entirely different philosophy, which aims to allow using "the best tool for the job" even within a project. Need a DSL? Write one!


I understand all that. You don't seem to have understood my comment, which was asking questions about the suitability/fitness of the "#lang" mechanism, rather than questioning the idea of polyglot code on top of Racket.

I guess it seems to me like it hasn't been established whether the "#lang" mechanism is a brilliant invention, or something cheesy. My initial reaction is that it seems cheesy.


I'm not sure I'm following, then. You think the basic idea of polyglot programming on top of Racket is sound; but that's what #lang does, and is the interesting bit of what we're talking about. The fact that it's specified with a line at the top of the file is a boring implementation detail - albeit the correct decision in my view, since the information about what language a file is written in clearly belongs with that file - exactly the same reason we have shebangs. But it seems like a minor thing to pick on to deem it "not Racket's USP".


If I understand correctly, the main problem was that the Pyret project wanted Pyret to run directly in the browser, so they needed a JavaScript backend. Since Racket doesn't have a JavaScript backend, they wrote a compiler in Pyret that compiles Pyret to JavaScript.

I don't see the lack of a JavaScript backend as a failure of the language-building facilities of Racket. In fact, the ability to quickly build a new language in Racket made it possible to try out different features rapidly. Also, it made bootstrapping possible.


However, they don't have to turn around and slag it in their documentation, which seems like a dick move.

Switching to Javascript has nothing to do with "scaling". They wanted to go to JS, and simply jettisoned their bootstrapping booster rocket once they got there, because they didn't want Pyret users (including maintainers) to have a dependency on a Racket installation that is otherwise irrelevant to the running code.


> # this is true

> ((1 / 3) * 3) == 1

This is a slippery slope. Early versions of Dart tried to use rational numbers by default, but that was dropped because denominator size can grow exponentially.

Also, I seriously doubt you can make a language that will correctly resolve checks like this one:

sin(pi / 6) == 1 / 2


I've used rationals by default in Racket/Scheme for literally 3+ decades now. It's fine. Dart is trying to be an industrial-strength language. We're trying to create an awesome initial programming experience. To the beginner, those denominators are nowhere near as much of a problem as "numbers" that don't make sense.

But we draw limits. Did you try the `sin(pi / 6)` example? You'll find that Pyret produces a rather interesting result that is also instructive. (Hint: it will produce neither true nor false.)


I'm a bit dubious that magic stuff like this really makes anything easier. It's like JavaScript's truthiness and type coercion. It's trying to be friendly and work most of the time, but in reality it is more complex because now you have to remember a whole set of rules about how exactly the magic behaves.


My guess is that you didn't try @skrishnamurthi's suggestion:

> Did you try the `sin(pi / 6)` example? You'll find that Pyret produces a rather interesting result that is also instructive.

The result of this doesn't feel like magic and doesn't resemble "JavaScript's truthiness and type coercion" at all IMO.

Here's a live code editor where you can try it out: https://code.pyret.org/editor

I found it to be a very refreshing and intuitively-guiding experience :)

---

Spoiler alert!

So, first I had to find how to access pi and sin(), which was simple enough by googling the documentation. Typing `num-sin(PI / 6)` in the editor already gives an interesting result:

  ~0.49999999999999994
That ~ there looks suspicious!

Then I tried `num-sin(PI / 6) == 1 / 2`, which seems to be syntactically invalid and yields an awesome error message:

  Reading this expression errored:
  
  num-sin(PI / 6) == 1 / 2
  
  The == and / operations are at the same grouping level. Add parentheses to group the operations, and make the order of operations clear.
Kudos to the Pyret team for such a nice error message!

So finally, adding those missing parens, `num-sin(PI / 6) == (1 / 2)`, yields another informative and useful error message:

  Attempted to compare a Roughnum to an Exactnum for equality, which is not allowed:
  
  The left side was:
  
  ~0.49999999999999994
  
  The right side was:
  
  0.5
  
  Consider using the within function to compare them instead.
All in all, I'm very impressed with the clarity and approachability of the language. Thanks @skrishnamurthi for the suggestion to try this out! :D
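For completeness, the `within` route that the error message suggests looks roughly like this in a check block (going by my reading of the testing docs, so treat it as a sketch):

    check:
      num-sin(PI / 6) is%(within(0.001)) (1 / 2)
    end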


What's the point of an infix syntax that demands obvious, unnecessary parentheses, as a matter of correctness?

"grouping level" is a cringe-inducing dumbing down of proper terminology. It suggests associativity, but associativity does not have levels. It must be referring to "precedence".

= and / are not at the same level of precedence in basic mathematics!

In elementary school you learn

  y = x / 2
without requiring parentheses.

It makes sense to require parentheses when exotic, unfamiliar operators are involved for which the precedence and associativity would be hard to remember.

Not for operators drilled into your head by sixth grade.


Waaaay less magic than Javascript. Looks like the rules are described here. Two kinds of numbers: exact and rough. Unless you use trig functions it will all be exact. https://www.pyret.org/docs/latest/numbers.html

Fun fact about Javascript: there are string and number constants x, y, and z, such that `x < y` and `y < z` and `z < x`. No NaN shenanigans, just regular small strings and numbers. Hint: it involves a string containing digits that starts with "0", thus sometimes getting implicitly coerced to octal.

Think about what this means for sorting!


The Wolfram language successfully resolves it:

https://www.wolframalpha.com/input/?i=sin%28pi+%2F+6%29+%3D%...


Wolfram works with the full mathematical expression tree. It works very well in practice but the equality test is undecidable in general.


Yes, I should have written "practical programming language". It is definitely possible to resolve many equations like this (though not all of them), but it is quite a slow process, which is OK for a math system like Mathematica but not for a real-life programming language.


"Real-life programming language" or "practical programming language" assumes that there's only one acceptable set of trade-offs for all "real-life" or "practical" problems. I suspect that what you actually meant was "a programming language designed to create industrial-strength consumer or business software".

As one of the maintainers has noted repeatedly on this thread, Pyret is a "real-life programming language" whose purpose is educational. Mathematica is a "real-life programming language" whose purpose is solving math equations. Performance isn't much of an issue in either case, so it's a perfectly acceptable trade-off.

That these languages weren't designed with your use case in mind doesn't make them illegitimate or inferior languages.


I ran into this issue once while trying to render the Mandelbrot set while learning Haskell. It defaulted to a rational type based on two bignum integers with unlimited precision.

With every Mandelbrot iteration* the integers doubled in size, producing exponential complexity. With 100 iterations this essentially means that the program never completes and even runs out of memory at some point.

The newbie-friendly feature turned out to be not so friendly.

* complex z_{n+1} = z_n^2 + c


> Early versions of Dart tried to use rational numbers by default, but then it was cancelled because denominator size can grow exponentially.

While I'm generally in favor of rational numbers as a default and believe that their "danger" is no different from that of big integers, there is some truth in this, because Guido van Rossum of Python fame has said the same thing [1]. It seems that you need some adjustment if you are already used to languages where rational numbers are not the default.

[1] https://python-history.blogspot.com/2009/03/problem-with-int...


The number stack goes like this: naturals -> integers -> rationals -> reals.

Trying to simplify this by removing any stage doesn't really work. sin() over a rational should be a compile-time error.


PI/6 is not rational.


That's exactly what I said.


Really? Even our Logo interpreter successfully resolves that one (although you have to use radsin instead of sin because our sin works in degrees)


A new book that serves as an introduction to CS at Brown University was released just yesterday. It uses Pyret.

https://dcic-world.org/2021-08-21/index.html

PAPL - https://papl.cs.brown.edu/2020/ is an older version of the current book.


Thanks for including this!

Tweet thread explainer: https://twitter.com/ShriramKMurthi/status/142918126348784435...


The Ruby comparison is unfair:

    o = Object.new
    def o.my_method(x)
      self.y + x
    end
    def o.y
      10
    end
    o.my_method(5) == 15 # true
    method_as_fun = o.my_method
    # Wrong number of arguments, 0 for 1
The last line is actually doing: method_as_fun = o.my_method()

Which should make the error message obvious. Ruby makes this syntax optional for readability purposes, which becomes obvious once you start going through real-world Ruby code.


Yes, that would be:

    method = o.method(:my_method)
    method.call(5)
    # or method[5], or method.(5)
If you really want a "function" that you can call with parens:

   Kernel.define_method(:method_as_fun, &method)
(Definitely don't do that last bit in a real project...)


And before someone criticizes Ruby for being inconsistent in how the method is called, Ruby isn't a callable oriented language [1]. The common case is calling methods, so the fact that you need to say `.call(5)` both makes it more consistent and makes real code more concise, especially when making DSLs where chaining is really useful.

1: https://yehudakatz.com/2010/02/21/ruby-is-not-a-callable-ori...


I think if you put more newlines in your code example (at least two per code-line) it might come through the HN formatting thing better?


The way to do the last bit in Ruby is:

  method_as_fun = o.method(:my_method)
  method_as_fun.call(5) # 15


Never heard about it before; it looks well done.

Very interesting language.

First, a great point about the homepage: it gets directly to business, no marketing buzzwords, no stupid mascot; one glance at that page and any developer gets a very good idea of what the language does and doesn't do.

Second, yes, those are very interesting choices and functions.

However:

https://github.com/brownplt/pyret-lang

It's implemented in JavaScript? That's kind of a non-starter for the kind of apps I build, personally. But as a standalone language whose interpreter was written in C/C++ and could be dropped into an existing native app, it would be interesting.


Well, actually, it's implemented in Pyret itself (-:.

But it currently compiles down to JavaScript. Our goal is to create an awesome language for certain educational settings, and many of our users (e.g., schools) are on restricted machines that can't install desktop software. The browser is the one platform they can easily access, and it also saves teachers from turning into sysadmins (to install, upgrade, etc.).

Hence our browser-based delivery. There is also a CLI. And in principle, someone could write a back-end that targets native code, etc. We don't have the resources to do that right now (but it's all Open Source).


Overall this looks quite nice. I like the way types can be defined, but if they are not, they can be anything.

Are these types checked anywhere? I think they should be checked at compile-time wherever possible, and optionally at run time too (I say optionally as the checks might make it slow, so they could be checked in development but perhaps not in production).


IIRC, they're checked at run-time but not compile-time. And to avoid the speed issue (you're right: it really can get really slow), it only checks the top-level type. E.g. `List<Number>` just checks that it's a list.


> they're checked at run-time but not compile-time

Do they intend to add this? I think it would make a big difference. It would make the language more like Haskell, where lots of bugs get caught at compile time.

> it only checks the top-level type. E.g. `List<Number>` just checks that it's a list.

That's fair enough, because otherwise it might have to go through potentially very big data structures.

Maybe there could be a command `strict_check(aDataStructure)` which would only get executed when type checking is on and would recurse into a data structure checking everything.


It is used to teach children. Spoken aloud, the name sounds like "pirate". No snark intended.


I'm not sure if this was intended as a reply to my comment.

In any case, I would suggest that a language be powerful and simple enough that it can do a wide range of tasks including teaching children, GUI apps, web apps, AI research, scripting, etc. Python does this and Pyret should aspire to do it too.


It totally was. Type checking is being worked on; they were not happy with the performance of their first implementation.


They are indeed checked at compile-time if you use the optional type checker. It's not on by default because there are still a few odd corner-case bugs in it. But the Run button remembers which state you are in, so once you switch to using the type-checker, you'll always use it unless you stop using it.

What it should do is a decision that should be made by the individual curriculum/instructor.


What pain point of Python as an education language is Pyret trying to address? Union type might be vaguely useful, although some of these use cases can be modelled as inheritance or typed enums.


IMHO, union types are one of the most important features that should have been in programming languages since C, and I believe they would have made our software much more bug-free than it generally is now. Enums in all languages should be Rust-like enums with exhaustive checking, and they should allow recursive definitions.


This is a good question! We addressed this explicitly in our (new) book that starts in Pyret and transitions to Python:

https://dcic-world.org/2021-08-21/part_appendix.html#%28part...


I considered different constructs like "where:" to be placed in code blocks (such as functions) when working on Next Generation Shell. I decided against it because these constructs would not be executed at runtime (when the function is called), and therefore I assume it would be at least somewhat confusing.

For tests, NGS has a "TEST" keyword which can be placed at the beginning of any line. The test is everything up to the end of that line. I typically place these below the functions they test.


I really like the in-function documentation and the unit-test-like checks in the where clause.

> # this is true

> ((1 / 3) * 3) == 1

This worries me. I know it is true mathematically, but experience tells me trusting floats is a recipe for disaster. An approximate-equality operator is safer.

Floats are such a leaky abstraction that it is, in my opinion, better that students learn this early on.


Many languages treat the construction

   1/3
as a rational type, and do not convert to float unless you cast. Common Lisp comes immediately to mind. Evidently Pyret is another.


Haskell supports rationals with module Data.Ratio, but uses a slightly different syntax: 1 % 3


`1 / 3` doesn't produce a float in Pyret.
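A quick sketch you can paste into code.pyret.org to see the exactness for yourself:

    check:
      ((1 / 3) * 3) is 1
      ((1 / 10) + (2 / 10)) is (3 / 10)  # no 0.30000000000000004 here
    end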


You should try it and see what it does when you force it to use floats. You may find it interesting in light of your "leaky abstraction" comment. Pyret is designed precisely to make that point clear.


It's a Lisp descendant; it most probably has a full numerical tower.


Partial; not as full as Scheme's.


The article doesn't really say what's going on under the hood but I'd guess that division of integers makes rationals...

> Pyret has *numbers*, because we believe an 8GB machine should not limit students to using just 32 bits.

Emphasis mine. WTF, author.

But the Java sample below saying "this is false" implies that floats are a problem that is being avoided. The only reasonable solution is rationals... right?


Yep, this is a project out of the same world as Racket which does a similar thing.

If you go into the editor/REPL (https://code.pyret.org/editor) and enter 1/3, you'll see that it gets rendered as 0.3 repeating (with a bar, not some arbitrary rounding) or if you click on it, directly as 1/3.


Floats are really a poor representation for reals. Float intervals, like Frink uses, are much cleaner, but our FPUs/SIMD units are bad for that kind of thing; just think about having to change the rounding mode that often.


+1 for the "lambda bones" logo.



It seems like a bad idea to use the same “where” syntax for definitions and test cases, especially since a major aim of this is education.


The `where:` syntax ought to only be for test cases attached to functions. Maybe this comment is a useful answer: https://news.ycombinator.com/item?id=28265210
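A minimal example of the shape (made-up names, along the lines of the examples on the homepage):

    fun double(n :: Number) -> Number:
      n * 2
    where:
      double(0) is 0
      double(21) is 42
    end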


The syntax is pretty much like Python and Ruby: "end" to close block statements and a colon to start a block statement. But why use "fun" to declare a function when "def" could be used?

The same in other languages: function, func, fun, fn, etc.


Because `def` suggests there is only one kind of thing we're defining (functions), when we could be defining any number of things: functions, type aliases, algebraic datatypes, tests, etc. So we use `fun` for functions, and different keywords (e.g., `examples` for examples) for other things. You should really be questioning why `def` was used for functions.


Maybe because it should be fun to use? NSI. I prefer fun to fn and func, because it aligns nicely with four-spaces-per-tab formatting.


Python doesn't use end. To be honest, as a Python programmer, the syntax looks quite alien.


Lol, the syntax is basically OCaml with a tad less noise.


I like the idea of the where clause to include tests.


I don't care for the name "where" since it makes me think that it is a place to define bindings, like Haskell. I would prefer a term like "expect" or "tests".


It doesn't read fluidly. We care a lot about how our programs are verbalized.

We were well aware of the Haskell use, but most of our users have not seen or even heard of Haskell before. So that's not a real problem here.

People who don't have Haskell experience find it utterly unsurprising that what follows `where` are the examples/tests. So I think your expectation is being set overly by your Haskell experience.


Am I the only one who came to this thread looking for good code pirate jokes?



I wish there were an Emacs mode published on MELPA, to work with Pyret



It's not on MELPA though


It seems pretty bad to have kebab-case identifiers in a language that also has an infix subtraction operator -. The only languages I know that allow kebab-case idents are lisps, where subtraction is (- prefix notation). From the examples:

    lam(actual): num-abs(actual - target) < delta end
This is a new, even worse kind of "whitespace significance" than indentation. Is it a function called `num-abs` or a variable `num` minus `abs(actual - target)`? I can't tell if the language would allow `actual-target` as subtraction.


Always requiring whitespace around infix operators is, in my opinion, easier to explain to people than "you can do whatever you want with the whitespace." It's consistent and regular. I don't think it's as much of an issue as you're making it out to be.

Pyret is the result of decades of research in computer science education, helmed by Shriram Krishnamurthi, who was one of the original members of the Racket project (itself a language designed for CS education in the '90s, a descendant of Scheme with kebab case and prefix notation). The full list of authors of Pyret is lengthy, but includes a number of well-established researchers in the PL and CSed communities. Knowing who they are, I would happily assume that they spent plenty of time debating this exact issue of mixing kebab case with infix subtraction, and either decided the benefits were worth the cost or else decided that the cost was practically nonexistent.

In any case, I trust their decisions better than those of someone who glanced at the website only long enough to inform a condescending (and highly superficial) comment on Hacker News.


> Knowing who they are, I would happily assume that they spent plenty of time debating this exact issue of mixing kebab case with infix subtraction, and either decided the benefits were worth the cost or else decided that the cost was practically nonexistent.

We don't need to assume when we can search. I did a few minutes of searching and found the following:

https://groups.google.com/g/pyret-discuss/c/rPe7gYBLdPs/m/3O...

Shriram K. wrote in 2013:

> I have tried just about every possible experiment to stay closer to Lisp, including even the one you have in Wart. None of them scaled well for me. Ultimately, also, I am totally unconvinced about the idea of having an identifier named e^ipi-1 for anything other than cute illustration purposes. Since surface* syntax is designed for humans, I think a compromise between expressiveness, readability, and predictability is a good way to go.

(This is just one comment; I'm not saying that it captures the full thinking and discussion around these syntax decisions.)


It only makes more sense when you do look at the background, so thanks for the link. They're lisp folks. They use kebab-case. I completely get it. It may be the product of decades of research, but it is also the product of default choices and comfort for the people who wrote it. ALL programs have an element of this: I write Rust and you can tell without looking when some tool is written in it -- TOML config? Apache-2.0/MIT dual-licensed? Few such decisions have no downsides, and yet exceptions are rare.

It takes a lot to make a choice for reasons other than your own familiarity. From the linked thread, they clearly agonized over this. Apparently they talked themselves down from the usual Lisp ultra-permissiveness on idents, so good on them -- but it was still explicitly a compromise between their comfort and the aims of the project.

There is an obvious and very good reason why the number of languages that do this can be counted on one hand. The authors know that. It is a problem they decided to accept because it was the default for them, and they "just couldn't give up on how pleasing hyphens in identifiers are to the eye and the shift-finger", and then mitigated it via other whitespace changes. They were content when "Anecdotally, no one in our courses ha[d] complained". I disagree with it. I find it really hard to read. I guess I am used to other languages, but so is everyone who has dabbled in python, likely writing primarily numeric code for their stats/bio/physics courses.

Consider me a student reporting it as a problem. Consider the 2013 thread a student reporting it as a problem as well. How about that?

I'll finish with a quote from Krishnamurthi himself: "Saying 'add spaces around binops' is easy to learn, recognize, and implement."

Does this not speak for itself?

    fun subtract(a, b):
      a-b
    end
    
    >>> The identifier a-b is unbound:        definitions://:3:2-3:5
    
    4 | a-b
    
    It is used but not previously defined.
Edit: I dug up an old thread from the guy @estebank who makes all the amazing Rust error messages, responding to a paper of Krishnamurthi's that argues, true to form, that you shouldn't ever actually "Say 'add spaces around binops'" (https://twitter.com/ekuber/status/1140791186858266624). The paper itself is an interesting read, particularly the interviews where beginners don't have the vocabulary to decipher error messages. But I think Esteban is right.


I understand where you're coming from. Feedback and engagement noted and appreciated.

But I stand by my remarks from 2013 in 2021. I've now had well over 1000 students go through Pyret (in addition to tens of thousands elsewhere). We've also spent hours and hours literally watching new learners work with the language. I can assure you that of the many, many, many issues that have percolated up to us, spaces-around-binops has literally not a single time been one. If anything, when people write that and we say "just put spaces around the `-`", the response is, "Oh, okay", and people move on.

So, we feel very good about this decision. And I, personally, actually really like how it makes code read.


Hey, thanks for the reply. The discussion has been centred on explaining why you have to type the spaces, but I never really cared about that -- my problem was with reading. That's my fault, the usual complaint about whitespace sensitivity is about having to type it, and this is different. I'm talking mixed kebab-idents and really any math operators in the same expression. Unrealistic example but: `(n-1 + n-2) / (n-1 - n-2)`. We may naturally differ on that through familiarity, but it really does get my brain stuck. It doesn't surprise me at all that the mechanics of adding spaces were so easy to learn, but I'm not totally convinced you would have received explicit feedback on readability from students who don't have much to compare to. On the flipside, if you don't get feedback on reading tricky numeric expressions, then they don't care, it doesn't come up for beginners as they're not optimising sqrt routines, and it doesn't matter to a language made for them. I think that's the end of the road.

On the binops, it may be that forcing whitespace around them is a good idea even for a language without kebab-idents, and perhaps the risk you took there was worth it to find that out. I could get around a world in which C's `a & b` and `&ptr` were completely incompatible. (I'm sure you could find a CVE or two for that one.) Heck, every code formatter out there does it. Compared to some suggestions around here and apparently back in 2013 as well to ditch BODMAS, the one bit of math that everyone on the planet learns in school, in favour of whitespace-sensitive precedence... goodness me. Give me forced spaces around binops any day.


> Does this not speak for itself?

If not, this does:

   $ cp a-b c-d # copy a-b file to c-d
   $ expr 1-1
   1-1
   $ expr 1 - 1
   0
You already know some language with whitespace delimiting such as, oh, the shell.


That could be fixed by checking the identifier for valid operators and offering: "Did you mean a - b?"


It seems obvious now, but that's not low-hanging fruit, and even when you think to do it, it's not easy to do. It took, and is still taking, real ingenuity and hard work from the Rust folks. Their syntax trees are insane; they carry so much information around to make these kinds of messages possible. Every source of ambiguity in intent adds to the problem. Making binops work only with whitespace is an easy change to the parser; telling people as you look over their shoulder is also easy. Producing this error message when it is valid syntax and means something else is a different beast entirely. For a general-purpose language I just wouldn't recommend it when you have an easy choice available to avoid the ambiguity in intent. It will save so much work if you aspire to have better errors.


> Pyret is the result of decades of research in computer science education, helmed by Shriram Krishnamurthi, who was one of the original members of the Racket project (itself a language designed for CS education in the '90s, a descendant of Scheme with kebab case and prefix notation). The full list of authors of Pyret is lengthy, but includes a number of well-established researchers in the PL and CSed communities.

On one hand, this historical context is relevant. Thanks for sharing it.

On the other hand, when this section is taken in context with the following sentence...

> Knowing who they are, I would happily assume that they spent plenty of time debating this exact issue of mixing kebab case with infix subtraction, and either decided the benefits were worth the cost or else decided that the cost was practically nonexistent.

...it looks like the fallacy of appealing to authority.

Many people like to debate the ideas and tradeoffs, not the credentials of people involved.


Yes, not requiring spaces around infix operators is terrible, even when there is no ambiguity. Frankly, everything about programs-as-text is terrible for beginners, but this softens the blow.


I disagree that programs-as-text is a bad idea for beginners. Anyone with a bit of abstract thinking can understand a program as text, just like they can understand a multi-step mathematical operation.


I'm no expert on education, but having been a TA and helped out a few people since, I repeatedly see text getting in the way of everything else.

Beginners often find both of the things you are saying somewhat hard.

First, note that there is reading vs editing. Perhaps in grade school people learn to parse large expressions (though I find that people generally are not prepared well, because grade school expressions are too small). But editing code and keeping track of what changes you made --- essential for debugging your first programs --- is much newer to people who are used to just rewriting a few short algebra expressions with pencils. It's then, when the beginner is most mentally taxed, that the syntax errors creep in --- and further interrupt their thinking process.

Hopefully we can agree that the second challenge is completely fundamental to the field, while the first is just an artifact of the way things are implemented today. Well, based on the above scenario and others, I repeatedly see the cognitive burden of the first interrupting the second, so students waste effort and lose focus over "easy stupid text syntax", and their mastery of trees, term rewriting, and substitution in particular is long delayed.


To be clear, I am no education expert either, and this is my personal opinion. Maybe I have only been around people naturally good at math, but everyone found Scratch clumsy and irritating compared to Python. Python was pretty natural for them.


> In any case, I trust their decisions better than those of someone who glanced at the website only long enough to inform a condescending (and highly superficial) comment on Hacker News.

I didn't interpret it as condescending.

> (and highly superficial)

I didn't interpret it as highly superficial. Comments about the appearance (including whitespace, kebab-case, and infix operators) of a language are fair game, particularly if it is intended for teaching.


> Always requiring whitespace around infix operators is, in my opinion, easier to explain to people than "you can do whatever you want with the whitespace."

It might be easier to explain, but it also makes the code look ugly. Consider the expressions:

    a := b + c*d
    if e > f-g then ...
Here the * and - are more tightly bound than the + or >, and I'm using the whitespace to make that obvious.


But if you write instead

  a := b+c * d
you aren't going to get an error. You're just going to get awfully surprising behavior, because you may think you were expressing one precedence with the spaces but the language has its own mind and doesn't care.

In contrast, Pyret doesn't bind anything more tightly than anything else. You parenthesize to make your intent clear.

If your expression gets too large, you should consider breaking it down with names for the intermediates. That will improve its readability by others anyway.


Your argument is that a language shouldn't have a feature because that feature can be misused to make the program hard to understand.

But this is a bad argument, as it could apply to any language feature. E.g. I could write a program:

    def add(a, b):
        return a - b
Here I've called it "add" but it does subtraction. By your argument, we should ban functions having meaningful names, as people could use a misleading one.

> Pyret doesn't bind anything more tightly than anything else

So all dyadic operators have the same precedence, like in Smalltalk? How horrible.

> You parenthesize to make your intent clear.

More likely I write my code in something other than Pyret, to make my intent clear.


A space-aware syntax can allow optional spacing around operators but make it an error if it does not respect the precedence hierarchy or is deemed inconsistent by other rules.


I believe Fortress did that.


Beauty lies in the eye of the beholder, i.e. aesthetics are subjective. For me, using proximity instead of parentheses to signify grouping would raise an eyebrow. I'd make an exception for cases of base*exponent; I think Python has it?


Need to reply to myself with a correction since it is too late to edit.

It should say base**exponent.

I think other languages use base^exponent, which would be fine with me too.


> Beauty lies in the eye of the beholder, ie aesthetics are subjective.

Then allow everyone to do things according to their own aesthetics.


I personally appreciate when languages have a consistent style. I think on the whole it helps readability more than it hurts.

We're enforcing whitespace around infix operators in Vale, and it's working pretty well. It's also enabling us to use <> for generics with no ambiguity =)


Sure. In math education I learned to signify grouping with parens, and in this case I would favour function/consistency over form/typing efficiency.


Raku has kebab-case as well as infix subtraction. Sigils disambiguate when the operands are variables ($foo-$bar), and callables must be made explicit:

    sub foo-bar { 10 }
    sub bar-foo { 7 }
    say foo-bar-bar-foo; # Undeclared routine error
    say foo-bar()-bar-foo; # 3


Or just do:

    say foo-bar - bar-foo;
which would improve readability as well!


In mathup[1] I actually use whitespace around infix operators to group expressions. E.g. `1 + 2/3 + 4` is not the same as `1+2 / 3+4`. I find this leads to much cleaner expressions.

When you are designing your own language, you can make these choices. The author of Pyret obviously thought that clearly named identifiers were worth it.

1. https://runarberg.github.io/mathup/


Dylan had kebab-case idents and infix operators. It worked okay because you have to separate infix operators with a space anyway; omitting it is a syntax error.

In practice I doubt this would trip me up because every coding standard I've worked with has mandated spaces around infix operators. I suppose it might be a lot more confusing if you're used to coding standards that allow you to omit them, but my understanding is that's a minority.


Same thing in Pyret, it's a syntax error to not put a space around infix operators.


Maybe it would be okay with syntax highlighting and a careerful of code formatting muscle memory, but how would that fare in an educational context?

> "Pyret is a programming language designed to serve as an outstanding choice for programming education"


Only having one correct way to do something with very simple rules (always put whitespace around operators) is easy for students. It's when the rules are complicated with exceptions and multiple correct ways that things get hard to learn.


It's been working great, actually.

https://news.ycombinator.com/item?id=28265117


> It seems pretty bad to have kebab-case identifiers in a language that also has an infix subtraction operator -.

Indeed. What's wrong with using _ in identifiers?


> Indeed. What's wrong with using _ in identifiers?

Nothing wrong with it. It is usually easier to type '-' though. I use it when naming scripts and files for the same reason. Ergonomics matter, IMHO.


    fun to-celsius(f):
      (f - 32) * (5 / 9)
    end
It looks like abbreviated Python.


It's more functional than Python. The data construct is similar to variant types in OCaml.


Yep, to me it looks like OCaml made prettier (scriptier?) with ideas from Racket, Python and Ruby.


Guess I'm more used to lisp-y FP.

    (* (- f 32) (/ 5 9))


My criticism is that this language is very similar to Python and might very well confuse students who ultimately have to switch to Python or other languages.


Students switching from this to a "real-world language" would be disappointed if it doesn't have algebraic data types; I'm pretty sure about this. Once you've learned about them in SML, OCaml, Haskell, or one of their descendants (which include Pyret, Elm, ReasonML/ReScript, Rust, and PureScript), you feel lucky if you can use Kotlin and approximate them with sealed classes. We will see; maybe C++, Java, and C# add them in the next 5 years.
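For anyone who hasn't met them, here's roughly what they look like in Pyret; a small sketch with made-up names, echoing the to-celsius example elsewhere in the thread:

    data Temp:
      | celsius(degrees :: Number)
      | fahrenheit(degrees :: Number)
    end

    fun to-celsius-temp(t :: Temp) -> Number:
      doc: "cases must handle every variant of Temp"
      cases (Temp) t:
        | celsius(d) => d
        | fahrenheit(d) => (d - 32) * (5 / 9)
      end
    where:
      to-celsius-temp(celsius(20)) is 20
      to-celsius-temp(fahrenheit(212)) is 100
    end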


C++17 added std::variant (https://en.cppreference.com/w/cpp/utility/variant) and other types so algebraic data types are now officially part of the standard. Of course, it's not as compact to write as in Pyret, especially since there's no pattern matching switch statement (yet).


They do have the godforsaken visit function, which requires you to write lambda syntax for each variant type; I end up writing chained ifs 99% of the time because it's clearer.


There's another way using switch if I recall properly.

That said, I don't like C++'s library additions made just to stay modern. They are not expressive, they increase build times, they make debug builds slower, and they can defeat compiler optimizers.


If it helps to think about the issue, we have a (free, online) book that manages this transition, so we've thought about this quite a bit and one co-author has been teaching a Pyret -> Python flow for several years. [https://dcic-world.org/2021-08-21/part_intro.html]


To me it looks a lot more like Haskell than anything, e.g. :: for type signatures, sum types, pattern matches, etc.

And some Ruby with the end keyword to close blocks.


Honestly, this looks much closer to Ruby than Python: implicit returns etc. are very unpythonic, it's not whitespace-sensitive, etc.


If you only know Python...



