Haskell's syntax without Haskell's type system sounds like a nightmare. The type system is what makes me at least vaguely confident that the incomprehensible string of symbols that make up a good chunk of Haskell expressions are at least close to being correct.
(And to be clear: I use Haskell professionally and love it - but the syntax is _not_ why most people use Haskell).
I can’t identify with this description (“incomprehensible string of symbols”) at all, unless maybe you’re talking about Lens-based code or something.
I think most people who complain about the readability of a language like Haskell are confusing familiarity with clarity. We learned Java or C++ or whatever in school, and despite the fact that those objectively have more complex syntaxes and more meaningless symbols, they are “easier to read” because that’s what you started with. Haskell has a higher semantic density with fewer boilerplate symbols - surely easier to read from a first principles standpoint. If you started from a math background and are comfortable thinking in terms of equations, it’s almost certainly easier to read.
You can write Haskell defensively on the syntax level.
But it's much more common, at least for me, to e.g. just guess at operator precedences and leave out parens, because for the most part the type system will yell at you if you get it wrong.
This is in stark contrast to the likes of C or C++, where the rule of thumb is 'when in doubt, add parens': even if you get the precedence right when writing the code after looking it up, you'll most likely have forgotten it by the time you or someone else reads the code again; so you save everyone time with the extra parens.
The main thing I'd want from a better Python syntax would be to make more constructs in the language expressions with values instead of statements. For example, if-then-else. And introduce pattern matching.
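To make the contrast concrete, here's a small sketch of what Python already treats as an expression versus what it doesn't (the names are illustrative):

```python
# Python's if-then-else already exists in expression form
# (the conditional expression), so it can produce a value:
sign = "neg" if -3 < 0 else "nonneg"
assert sign == "neg"

# But most other constructs are statements, so they can't appear
# where a value is expected. This is a syntax error:
#   result = (for x in xs: x * 2)
# You fall back on comprehensions or named functions instead:
doubled = [x * 2 for x in [1, 2, 3]]
assert doubled == [2, 4, 6]
```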
The surface syntax might want to take more inspiration from Lisp instead of Haskell. Lisps are traditionally dynamically typed, so you can look at quite a few already thought through solutions.
I dabbled a bit in dogelang, and the "great" thing is that it almost is Haskell - but not quite. You can deconstruct tuples, which looks like pattern matching but isn't. You can use generator laziness, which looks like non-strict evaluation, but the generator has internal state.
It's amazing where it actually breaks once you really try to write something Haskell-like.
This looks amazing! I am also working on a statically typed language that aims to be fully inferred, with no type annotations; I will definitely be gleaning insights from Ikko's design and your blog.
One problem with Python's syntax, though, is that you can't have anonymous lambdas which span more than one line (unless perhaps you start using escapes, but that would miss the point).
I'm not sure what you mean by escapes, but you can use parentheses as block delimiters, "and" and "or" as a semicolon, and the conditional expression for if/else:
>>> f = lambda x: (
... (
... print('a') or
... print('b')
... ) if x % 2
... else (
... print('c') or
... print('d')
... )
... )
>>> f(42)
c
d
>>> f(43)
a
b
Oh please. You can "do" functional programming just fine in Python. If it's longer than a line, give it a name. It's not hard and it makes the code more readable and easier to maintain.
I do most of my work in multi-paradigm languages–so it's not like they're specifically designed around the ergonomics of functional programming–but Python is probably the worst language I have used that could reasonably claim to support this style of work. It's like it actively tries to make things strange and annoying to convince you to use other constructs (such as list comprehensions). Why can't I have a multiline lambda (…some things have no need for a name)? Why are all the functional methods free functions that take the list as the second parameter? Why was reduce arbitrarily excised from the built-in functions and put inside functools? Why can't I use a member function or an operator as a parameter without hacks like operator.__{operator_thats_a_reserved_word}__? Why do all the Python 3 functions return generators by default, when Python never really cared much about performance in the first place?

It's just a death by a thousand cuts that always makes me regret even trying to do functional programming, and deep down I actually think the Python authors just don't want you to be doing it, because they think it would lead to what they believe is difficult-to-understand code. Honestly, I think even C++ has better functional programming constructs than Python, even with its ugly-looking lambdas and lack of chaining, because <algorithm>'s extensiveness makes up for some of the language's warts.
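For anyone following along, a small sketch of the specific warts mentioned above, as they look in current CPython (not a complete catalogue):

```python
from functools import reduce  # reduce was moved out of the builtins in Python 3
import operator               # named stand-ins for operators you can't pass directly

nums = [1, 2, 3, 4]

# map/filter are free functions taking the callable first and the
# iterable second, and in Python 3 they return lazy iterators, not lists:
evens = filter(lambda n: n % 2 == 0, nums)
assert list(evens) == [2, 4]

# The operator module is the usual workaround for passing an operator around:
total = reduce(operator.add, nums, 0)
assert total == 10
```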
In defense of Python, at least it supports closures that easily refer to variables higher up in the stack, whereas in C++ you always have to think about the lifetime of variables. Your other points I agree with.
You can still only have a single expression as a lambda's body - no statements at all. And lots of Python's constructs are unfortunately statements, not expressions.
The walrus operator moves Python closer to the expressions-camp.
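A quick sketch of what the walrus operator buys you (Python 3.8+; the variable names are illustrative):

```python
# := lets an assignment appear inside an expression,
# binding a value and testing it in one place:
data = [1, 4, 9, 16]
if (n := len(data)) > 3:
    msg = f"{n} elements"
assert msg == "4 elements"

# It also avoids recomputing a value inside a comprehension:
roots = [y for x in data if (y := x ** 0.5) == int(y)]
assert roots == [1.0, 2.0, 3.0, 4.0]
```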
In practice, the bigger problem with a more functional Python is that the ubiquitous dicts don't have nice and concise operators for persistent operations. Eg you can concatenate two lists via +, but dicts are more cumbersome.
Though that improved a bit: {**dictA, **dictB} mostly solves that problem. But more operators would be useful.
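For reference, a sketch of the non-mutating dict-merge options in current Python:

```python
a = {"x": 1, "y": 2}
b = {"y": 20, "z": 30}

# Unpacking into a new literal (Python 3.5+); the right side wins on conflicts:
merged = {**a, **b}
assert merged == {"x": 1, "y": 20, "z": 30}

# Python 3.9 added a dedicated merge operator, analogous to + for lists:
assert a | b == {"x": 1, "y": 20, "z": 30}

# Neither form mutates the originals, which matters for a persistent style:
assert a == {"x": 1, "y": 2}
```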
And, of course, lack of tail call elimination puts a damper on things. You can mostly work around that problem, but eg implementing state machines as mutually recursive functions is going to be a pain.
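The usual workaround is a trampoline: each mutually recursive "state" returns the next call as a thunk instead of calling it directly, and a driver loop keeps the stack flat. A minimal sketch, with illustrative names:

```python
def trampoline(thunk):
    """Keep calling until a non-callable (the final result) comes back."""
    while callable(thunk):
        thunk = thunk()
    return thunk

# Two mutually recursive "states" that would blow the stack
# if they called each other directly for large n:
def even_state(n):
    return True if n == 0 else (lambda: odd_state(n - 1))

def odd_state(n):
    return False if n == 0 else (lambda: even_state(n - 1))

assert trampoline(even_state(100_000)) is True
```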
Most of Python's syntax (the partial if-statement, return, break, for-loops, etc.) doesn't make sense as first-class terms in a pure language. You can't represent the semantics of a Haskell program with something that looks anything like idiomatic Python.
I think you might be interested in looking at multi-paradigm languages that use static/inferred typing and combine the functional and imperative paradigms. Some worth taking a look at might be OCaml, Nim, and F#. Nim leans more towards Python; OCaml and F# lean more towards Haskell.
OCaml is really good, with nice tooling like Merlin. It compiles to native code as well. There's ReasonML for people who favour JS syntax, but I have come to like the OCaml syntax.
On a serious note, I've long wondered what python would feel like with haskell-style "whitespace" to call/partially call functions. It feels like it wouldn't be super difficult to get working; as far as I can think of "identifier ws identifier" is always currently a syntax error? Maybe a toy python project someday...
Python has enough vararg functions to make this not doable without coming up with some special-case syntax. There's a reason you don't have varargs in the Haskell world--currying isn't a special case, but rather happens any time you have multiple arguments. A three-argument function with type (A -> B -> C -> D) is actually syntactic sugar for a function that takes an A and returns...
a function that takes a B and returns...
a function that takes a C and returns...
a value of type D.
This is determinable by just looking at the number of arguments provided. But a variadic function, in a world with currying, would be a function that takes an argument, and returns...well, it might be a function to take another argument, and it might be the actual computation.
Even if we didn't have currying, we can't just use whitespace in a Python-like language. We don't know if (print) is actually a function call, or just referencing the function object (perhaps to pass it to 'map' or something). Your syntax needs to distinguish between those two cases.
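The arity story can be mimicked in Python with nested single-argument functions, which also shows why whitespace application needs a fixed arity (a sketch; `add3` is a made-up example):

```python
# A "three-argument" function as nested single-argument functions,
# mirroring Haskell's A -> B -> C -> D:
add3 = lambda a: lambda b: lambda c: a + b + c

# Each application peels off exactly one argument:
assert add3(1)(2)(3) == 6

step = add3(1)(2)   # still a function, awaiting the last argument
assert callable(step)
assert step(40) == 43

# With a variadic function there is no fixed point at which
# application should stop returning functions and yield a value:
variadic = lambda *args: sum(args)
assert variadic(1, 2, 3) == 6
```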
There are varargs in Haskell, but they aren't as natural as in Python, and they usually rely on various amounts of type-level trickery.
Implementing a vararg printf in Haskell is a popular enough exercise.
A relatively common example in practice is the testing library QuickCheck: your properties (i.e. tests) can take any number of arguments, which the library fills with sample inputs.
i thought i disagreed with you, but then i realised you're referring to the possibility of a unary procedure call in python.
but there are strict impure languages with white space function application. i suppose they handle the issue by using an object of a type with only one value.
in principle, you could handle the issue by contrasting (print) with (print ()), with () a pseudo value only existing at syntax level.
i'm not sure if whitespace function application carries its own weight once we abandon partial application, so this feels like a contrivance to save a burst balloon.
I believe that's what it's called in non-English languages, but all my instructors and PL/semantics textbooks referred to this process as "currying" [0]. Which, if that's what you meant, is not what your parent comment was talking about. I believe they were talking just about syntax, not semantics, i.e., they wish in Python you could do `foo x y` instead of `foo(x, y)`.
Actually, that's a common misconception. If you call a function with only some of its arguments, that's "partial application." If you turn a multi-argument function into a single-argument function that returns another single-argument function, that's currying. Currying doesn't apply a function; it just transforms it.
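In Python terms the distinction looks like this (a sketch; `curry2` is an illustrative helper, while `functools.partial` is the stdlib tool for partial application):

```python
from functools import partial

def mul(a, b):
    return a * b

# Partial application: supply SOME arguments now, get a function of the rest.
double = partial(mul, 2)
assert double(21) == 42

# Currying: transform the two-argument function itself into a chain of
# one-argument functions; nothing is applied during the transformation.
def curry2(f):
    return lambda a: lambda b: f(a, b)

curried_mul = curry2(mul)
assert curried_mul(2)(21) == 42
```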
Fun fact: you can make Haskell support implicit string concatenation, but it's a bit ugly and not advised.
The route is via the OverloadedStrings extension, and then implementing an instance for IsString (String -> String) and IsString (String -> String -> String) etc.
The learning curve is pretty intense, but I do want to say that the operators you use here are actually pretty great for readability after you learn them.
I don’t know if that makes them worthwhile, but I always find them a breath of fresh air.
A lot of Haskell's more controversial constructs are about being able to hide plumbing.
Applicative plumbing is especially simple in structure, so hiding it doesn't conceal any complexity from the experienced programmer. It's easy to reconstruct the plumbing on the fly in your mind if you want to reason about it.
Other tools like lenses or arrows take a lot longer for a lot of people to internalize like that.
I've been wanting this for a while, mainly because I often find myself writing long function composition chains that end up being a mess of parentheses. Python is good at encouraging functional style but sometimes it ends up being too much to be readable with the normal syntax.
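A small sketch of the parenthesis pile-up and one workaround (the `compose` helper is illustrative, not stdlib):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g, h)(x) == f(g(h(x)))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

inc = lambda x: x + 1
dbl = lambda x: x * 2

# Nested-call style, where the parens pile up:
assert inc(dbl(inc(3))) == 9

# The same pipeline, flattened:
pipeline = compose(inc, dbl, inc)
assert pipeline(3) == 9
```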
I'm a big fan/proponent of Hy (http://hylang.org), a Lisp built onto Python. Even going to the somewhat irresponsible extent of using it in production!
Another language that is similar, but with more of an ML syntax, is MLite (https://www.t3x.org/mlite/index.html). Like this one, it's an interpreter with dynamic typing, and probably not that useful, but it is fun for playing around with functional programming.
> A programming language that compiles to CPython bytecode, much like Scala compiles to JVM's.
I'm wondering if the CPython bytecode is stable, though. Is the Python bytecode spec managed independently from the language version (like the JVM's), so this kind of approach won't break when a new Python is released?
I'm not sure what's funnier: that it exists, or that some people still haven't heard of Poe's law. Or did their sense of humor roll over on its back, cross its arms and legs, and replace its eyes with X's?
"Solutions at the time included various attempts to compile Haskell to JavaScript while preserving its semantics (Fay, Haste, GHCJS), but I was interested to see how successful I could be by approaching the problem from the other side - attempting to keep the semantics of JavaScript, while enjoying the syntax and type system of a language like Haskell."
It's a good reason for the author of the language, and for anyone who's interested in designing programming languages. However, I often wonder why people in the industry promote it as a great choice for developers who are looking for a "Haskell for frontend", instead of recommending GHCJS. I do understand why Elm is recommended instead of Haskell for simplicity reasons, but PureScript is neither simpler nor (presumably, if I'm not missing something) conceptually more valuable (i.e. bringing refreshingly new ideas) than GHC Haskell.
Purescript does have algebraic effects and row polymorphism generally, which Haskell does not AFAIK (I don’t do as much Haskell as I would like so I may be wrong.)
Algebraic effects are not part of the language implementation, but there are libraries like fused-effects [1] that implement them. Row polymorphism was proposed a few years ago [2], but as with any community effort, the timeline and likelihood of an actual implementation depend on the amount of attention it gets relative to other (probably also important) work.