There are benefits to unambiguously marking the end of each statement with a semicolon rather than just using a newline. Is there a good algorithm for determining whether or not a newline actually represents the end of a statement?
JavaScript gets this wrong in both directions, sometimes unexpectedly ending a statement (e.g. in `return \n 0`) and sometimes unexpectedly not ending one (e.g. when a new line begins with an open-parenthesis).
Python's method (newline always ends a statement unless it's inside (), [] or {}) is straightforward, but makes the language syntax strictly line-based. This matches Python's significant indentation, but can it work in a language without it?
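Python's rule can be sketched as a thin layer over the lexer: track bracket nesting depth and treat a newline as a statement terminator only at depth zero. A minimal illustration (not Python's actual tokenizer, which must also ignore brackets inside strings and comments):

```python
def split_statements(source):
    """Split source into statements, treating a newline as a
    terminator only when not nested inside (), [] or {}.
    Simplified: ignores strings, comments, and backslash continuations."""
    statements, current, depth = [], [], 0
    for ch in source:
        if ch in "([{":
            depth += 1
        elif ch in ")]}":
            depth -= 1
        if ch == "\n" and depth == 0:
            if current:
                statements.append("".join(current))
            current = []
        else:
            current.append(ch)
    if current:
        statements.append("".join(current))
    return statements
```

With this rule, `split_statements("foo = bar\nbaz = qux(1,\n2)")` yields two statements, the second spanning two source lines because the newline sits inside parentheses.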
Another option I've seen is for all newlines to end statements, unless they follow a token that cannot end a statement. Unfortunately that means that the following two assignments have different behavior:
```
foo = bar +
baz

foo = bar
+ baz
```
The first is a sum, the second only assigns `bar`, followed by a standalone unary `+` expression. Go works this way, but considers the second form to be an error ("+baz evaluated but not used"). Python considers both of these to be errors.
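The "token that cannot end a statement" rule can be sketched by classifying the last token on each physical line. This is a simplified illustration of the idea (the helper names and the token set are mine, not Go's actual scanner, which handles many more token kinds):

```python
import re

# Crude tokenizer: numbers, identifiers, and single-character operators.
# Illustrative only -- a real scanner handles strings, comments, and
# multi-character operators like ++ and --.
TOKEN = re.compile(r"\d+|\w+|[+\-*/=(){}\[\]]")

def can_end_statement(token):
    """A newline terminates a statement only after a token that can
    legally end one: an identifier, a literal, or a closing bracket."""
    return (token.isidentifier() or token.isdigit()
            or token in (")", "]", "}"))

def newline_ends_statement(line):
    """Return True if a newline after this line terminates the statement."""
    tokens = TOKEN.findall(line)
    return bool(tokens) and can_end_statement(tokens[-1])
```

Under this rule `foo = bar +` continues onto the next line (a trailing `+` cannot end a statement), while `foo = bar` terminates, which is exactly why the second form above leaves `+ baz` stranded as its own statement.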
True, and these kinds of bikeshedding discussions about tiny details are infuriating because they're so irrelevant. I wish we could all rise above them to discuss the next level of expressibility.
We let the little stuff suck up so much of our time.