The Flix Programming Language (flix.dev)
222 points by sivakon 9 months ago | 121 comments



The website is infrequently updated, so let me provide some information about what we are currently working on:

- We are trying to make the entire compiler resilient (error-tolerant), incremental, and parallel. We have managed to make every single compiler phase (of which there are 28) parallel. This has already led to significant speed-ups. We are now trying to increase the degree of parallelism within each phase. We are also working on error resilience to provide LSP support no matter what errors a program contains (syntax, naming, type). For example, it should be possible to rename a variable even if the program currently has multiple errors.

- We are adding support for algebraic effects and handlers. This will allow users to define and handle their own effects.

- We are also exploring a novel way to combine type classes ("traits") and effects.

- We have recently added support for package management and integration with Maven.

In summary, we are already in a great spot, and useful programs can be written in Flix today. I encourage you to check out the documentation: https://doc.flix.dev/ and to try Flix!

(I am one of the developers of Flix).


> recently added support for package management

Are there any [plans for] supply chain attack mitigations?

Naively searching, I find https://github.com/flix/flix/issues/4380#issuecomment-123641... (Proposed Principle: A package can be declared as "safe") and https://github.com/flix/flix/issues/2837 (Add capability-safety to polymorphic effects?), the latter closed, with related work continuing in https://github.com/flix/flix/issues/3000 (The Road to Algebraic Effects).


In short yes. We plan to leverage the effect system for this. Stay tuned.


That's great, staying tuned!

Plea to all language/language-ecosystem designers in 2023+ to design ahead for supply chain attack mitigations. Austral is one new language that appears to be doing so https://borretti.me/article/how-capabilities-work-austral ... Kudos to those retrofitting to existing ecosystems. I guess Flix is somewhere in the middle: not new (2015?), but still being designed, without huge legacy constraints. Anyway, thanks!


Is the compiler written in Flix?


The Flix compiler is written in Scala. The Flix Standard Library and runtime, which include a Datalog JIT, are written in Flix. GitHub does not yet recognize .flix files, so the reported numbers are an inaccurate representation of the actual code in the repo.


Looking at the GitHub repo, it’s written in Scala.


Ok, then I'll wait until they start dogfooding ;)

Looks incredibly cool though.


It doesn't make much sense to self-host a compiler unless there is a significant community (resources) or you don't want to write another compiler in it.


looks very cool! has there been any talk of native codegen/backends and integration with newfangled cool kid ecosystems (rust, go, etc)?


We may consider WASM once support for GC, tail calls, and multi-threading arrives.


> Region-based Local Mutation

> Flix supports region-based local mutation, which makes it possible to implement pure functions that internally use mutable state and destructive operations, as long as these operations are confined to the region.

> We can use local mutation when it is more natural to write a function using mutable data in a familiar imperative style while still remaining pure to the outside world.

> We can also use local mutation when it is more efficient to use mutable data structures, e.g. when implementing a sorting algorithm.

Another language with a feature like this is F* (or FStar), but I think it uses a different kind of compile-time analysis.

Haskell actually kind of lets you do it through the ST monad (there's a function called runST that turns mutable code inside the ST monad into pure code). But F* (and Flix) can do it implicitly.
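
For reference, a minimal Haskell sketch of what the parent describes: runST keeps the mutation local, so the function stays pure from the outside even though its body uses a mutable reference.

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Pure signature, mutable accumulator on the inside.
    sumList :: [Int] -> Int
    sumList xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc

    main :: IO ()
    main = print (sumList [1 .. 10])  -- prints 55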


I wish Erlang/Elixir would allow this.

An Erlang/Elixir process is single-threaded by definition, so there is no contention and in-place mutation could be allowed.


Related. Others?

Flix – Safe, reliable, concise, and functional-first programming language - https://news.ycombinator.com/item?id=31448889 - May 2022 (42 comments)

Flix – Next-generation reliable, concise, functional-first programming language - https://news.ycombinator.com/item?id=25513397 - Dec 2020 (84 comments)

The Flix Programming Language - https://news.ycombinator.com/item?id=19928153 - May 2019 (2 comments)


I absolutely love the idea of embedding Prolog-esque rules inside a full traditional programming language for solving specific problems. The region-based local mutation is also an incredible idea that would really solve one of my core pain points with immutable languages like Haskell and Elixir.

Also this part made me let out an audible "wow":

> We can exploit purity reflection to selectively use lazy or parallel evaluation inside a library without changing the semantics from the point-of-view of the clients.

Will definitely keep an eye on this language! The syntax seems a little odd (inferring type parameters for functions?) but I'm sure I could get used to it.


> The following design choices may be considered controversial by some:

> Dividing by zero yields zero. https://www.hillelwayne.com/post/divide-by-zero/

That's one interesting point I would love to read any opinions on.

Edit: Here's a HN discussion about the `divide by zero` article - https://news.ycombinator.com/item?id=17736046


There's a brief description of the same choice[0] made by the Pony language.

[0] https://tutorial.ponylang.io/gotchas/divide-by-zero.html


FWIW, I tend to write a lot of error checking code that checks for division by zero ahead of time and just returns zero instead. Found an example:

        public double magnitude
        {
            get
            {
                double c = max_magnitude;
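                // Guard: only divide when c > 0; otherwise return 0 to avoid dividing by zero.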
                return c > 0 ? Math.Max(c, c * (this / c)._internal_magnitude) : 0;
            }
        }
I could delete quite a few if statements in pretty hot codepaths if division by zero just returned zero.


This feels like an area where extended/vector instructions could be added to make this fast too. Probably not news to you, but for example, NEON (and likely AVX, I’m just a lot less familiar with it) has saturating addition and subtraction.


This is notable too:

In the year since I wrote this post, Pony added partial division. I still don’t know anything about the language but they’ve been getting grief over this post so I wanted to clear that up.

https://tutorial.ponylang.io/expressions/arithmetic.html#par...

I think Hillel's post was just saying that 1/0 == 0 doesn't lead to any mathematical contradictions.

Another way to think about it is that I believe 1/0 == 42 and 1/0 == 43 are just as valid. They don't lead to contradictions.

[In ZF set theory] Since 0 is not in the domain of recip, we know nothing about the value of 1 / 0; it might equal √2, it might equal R, or it might equal anything else.

But I don't think you want to use a language where 1/0 == 42, and likewise for 0.
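
To spell out the "no contradiction" point (a standard field-axiom framing, not taken from the linked article): the multiplicative-inverse axiom only requires x * inv(x) = 1 for x != 0, and identities such as (a / b) * b = a carry the side condition b != 0, so no axiom says anything about dividing by zero. Extending division with 1/0 == 0 (or 42) therefore contradicts nothing; the choice is a convention, not a theorem.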


One problem with Flix has been the split of the world into "normal" and "graded" type classes, because of effects as types. Is this solved now?

For an example, see: https://github.com/Release-Candidate/flix-test


Top marks for the website - passes all the tests:

* Example at the top

* Link to playground

* Explanation of all the features, with examples!

* Says which features are unique

The only thing I found myself wondering was "why Datalog"?

Language looks pretty good.


Datalog is highly undervalued IMHO. You can describe quite complex relationships in your data, without telling the computer _how_ to realise them (as in imperatively). In fact, there are different strategies (forward vs backward chaining) where you may never realise the data at all.


Woah, the effect system looks really neat at first glance. Also, “region-based local mutation” so your pure functions can use mutation under the hood for performance? Sweet!


I'd like to see an experimental language that leans hard into the concept of controlled mutation.

I always say that purely in terms of design my ideal language is high-level Haskell, low-level C. Conceptually, purely functional design is how programming "should" (note the quotes) be, but doing so down to the level of functions is both not very practical (some algorithms are just easier to express in terms of pointers moving around rather than folds, reduces and the like), and makes it hard to reason about performance (especially memory, and most especially if you throw laziness into the mix).

But then again, I write my own stuff in python because I'm a lazy fuck, so probably it's not meant to be :(


That does exist, it's called Koka: https://koka-lang.github.io/koka/doc/book.html


To elaborate on the controlled mutation theory, here is a paper they wrote about FIP (Fully In-Place) programming https://www.microsoft.com/en-us/research/uploads/prod/2023/0...


Thanks! I'm not familiar with that, I'll try to look it up.


> I always say that purely in terms of design my ideal language is high-level Haskell, low-level C.

That's similar to what I used to say. I was devastated to see the BitC project implode, but then Rust appeared and took up the mantle. It's not perfect by any means but it's influential enough to drag the whole field of PL development kicking and screaming in that general direction.


I agree in re Rust.

In spite of all the retarded monkeys in its fanbase, I always thought it was an interesting project. Popular languages today are much more similar than they used to be; they are basically "converging", so Rust's willingness to try something new must be applauded.

Still many pain points, still a bit of a "puzzle language", too much of a scatterbrain approach in the governance and design direction, but even if it had no other influence than to force people to think about the problems of low-level programming, that would be still a massive plus in my book.


Another alternative to this that seems quite promising is mutable value semantics (e.g. https://research.google/pubs/pub51426/).


It's Rust.


Rust sucks for functional programming because it has no GC. While you get controlled mutation, you don't get anywhere near the ergonomics that it requires.

That's the reason why Haskell and other popular FP languages all require a GC.


The two have very little to do with each other. What one could say is that higher-level functional-like patterns are less idiomatic in Rust, because they abstract away from the low-level control that Rust also provides - which is why Rust chooses to provide, e.g., GATs as a language feature as opposed to HKTs.

But functional programming is fully supported and lack of GC is not an issue, since you can use Rc<> and Arc<> where needed. (Support for 'pluggable' tracing Gc<> will probably be added at some point, but it will need local allocators, which are not a stable feature yet.)


Oh yes, some basic form of FP is supported in Rust, but it's not the real thing. You can't write idiomatic FP because you really need a GC for that as the reference patterns created by closures are always cyclic. There's a reason why Haskell needs a full-fledged GC.


Have you tried F#? I can recommend you give it a try.


Do functional languages mainly appeal to computer language enthusiasts/researchers? I'm just not seeing the benefit personally.


Functional programming was, for a long time, talked about as yet another solution to the issue of complexity in larger codebases, primarily the complexity of controlling state getting out of hand.

Similar to OOP, which promised to do this by encapsulating state, FP promised to do this via purity, aka getting rid of as much state as possible and only allowing stateful transitions at certain well-defined sections of the program.

The "market advantage" of OOP was that, via Java, it was already so well established, and so many coders had been trained in OOP languages, that it remained alive. FP on the other hand, coming out of academia and requiring all these industry people to suddenly do things in syntactically and conceptually different ways, never really gained traction. OOP simply came first; it is as simple as that.

Whether FP would have actually solved the problem is anyone's guess, since it never gained the traction of OOP and procedural languages. My best guess is that it wouldn't, because I don't believe in silver bullets.

It should be noted that both approaches contributed valuable things to contemporary languages. E.g. first-class functions being the norm comes from FP.


> OOP simply came first

Not really. Lisp is a functional programming language and has existed since at least 1960. Some claim there were many other proto-functional languages since the early 60s, and the FP language [1] (a clearly functional programming language and the result of the famous paper "Can Programming Be Liberated From the von Neumann Style?") appeared in 1977 and was inspired by much earlier efforts like APL.

OOP really only became a thing with Simula in 1967, but was not popular until the 1980s, when Smalltalk and the Common Lisp Object System (CLOS) came about (so yes, there was an OOP/FP hybrid already decades ago), and then C++ and finally Java much later... at which time functional programming languages already included Miranda (1985), which later evolved into Haskell, and Erlang (1986). That is, FP languages were at least as common as OOP languages by the 80s.

As far as I know, however, pure functional languages were not really very efficient until Haskell came about, while OOP languages were nearly on par with procedural style, which mattered a lot on 1980s machines.

[1] https://en.wikipedia.org/wiki/FP_(programming_language)


Lisp is not an FP language. It's a "list processing language" with some features based on the lambda calculus.

Real FP means referential transparency and that started with languages like Miranda and later Haskell.

What many people consider "FP" today is in fact just procedural programming with higher order procedures and lexical closure.


Lisp is not even a single language; it's a family of languages. It started as a "LISt Processor", but it has long since grown beyond that.

A lot of early FP teaching was done in Lisp. Some kind of "Pure Lisp" was used, which are imaginary subsets of Lisp, restricted to side-effect-free, non-destructive functions.

FP started quite a bit before Miranda.


> Not really.

*sigh*

Okay, let's dot the i's and cross the t's then, shall we, and include languages used long long before software development became the industry behemoth it is today.

You're right. There, I said it. You are absolutely right.

Many of the concepts of functional programming were indeed pioneered in LISP, and it was indeed specified in 1960, 7 years before Simula.

But hey, let's dot another "i" and correct my statement. Because FP isn't in contrast to OOP so much as it is in contrast to imperative programming, and all its mutable state, right?

So, is there an imperative programming language, older than LISP, and still in use today? And the answer is: Of course there is ;-)

https://en.wikipedia.org/wiki/Fortran


This was not a pedantic comment. Saying OOP is more popular because it just came first is a complete misunderstanding of history. In fact, Lisp was much more popular than any OOP language for a long time; OOP only took the world by storm with C++ and Java, much, much later.


> Saying OOP is more popular because it just came first is a complete misunderstanding of history.

No it isn't, because the history relevant for this question doesn't begin at the very start of programming. Most programmers active today didn't go to university in the 60s. In the timeframe relevant to answer this question, the people were indeed confronted with OOP languages, and FP languages were already a niche topic, end of story.

This also coincides with programming, as an activity, massively gaining in popularity across many industries.


There's a difference between saying "for most active developers, they learned OOP first" and "OOP came first"


I'm going to guess that most programmers active today didn't go to university in the 1980s, either (that would mean they were born in the 1960s, which would put their current age somewhere in their 60s). So there's no point in ignoring the 60s and starting history during the 1980s when most programmers didn't start programming until later decades.


Lisp is not a functional programming language.


People are now trying to redefine what functional programming means. Lisp is as close to the lambda calculus as you can get, and functional programming is based on the lambda calculus model of computation (vs. the von Neumann procedural model).

When you write Lisp, you're mostly composing functions. It's truly very functional, just not purely functional, which is probably the motivation behind your refusal to include Lisp in the functional programming language family. That would horrify the founders of the field: it's a simple attempt at redefining a widely used, well-understood term to mean only a small subset of it, for reasons of gatekeeping what we should, according to you, include under the term, without any real technical reason behind it.


Functional programming “won” in terms of getting many of its features implemented in mainstream programming languages. This is a paradigm; it doesn’t require 100% pure usage. It allows for additional safety in parallel contexts.


You need a pragmatic combination of functional and non-functional.

Many functional zealots aim for a purity beyond all reason and comprehension, when what you really want is something like Erlang/Elixir or even C#.


What you want is something like Kotlin.


I would say F# or OCaml, or Flix as we see here--this language may not be for you.


Unpopular opinion: the functional programming style got a big push because of the inability of languages and compilers to deal with state, especially mutable state, so they brought out the purity big gun and banned all mutable state. However, state and mutable state are natural and useful for programmers to work with, and thus functional programming never went mainstream.

With the advance of lifetime analysis, mutable value semantics, and local mutability, there's no need to ban mutable state outright. The push for functional programming purity will wane.


Do you worry about side-effects that leak beyond the scope of the routines you're authoring?

Why / why not?


I do. Does functional programming make this a non-issue? Maybe I'm just not smart enough to wrap my head around it.


Generally, yes. Some languages are really hardcore about it (notably Haskell), while others just strongly encourage you to write functions without side effects. Ed: that's the "purity" they're talking about in OP.

The other major neat thing about functional languages is how they let you treat functions like objects, but most modern programming languages have adopted that as well, so it's not a big differentiator anymore.


Yes and no.

In theory, writing in a functional language that allows only "pure" functions (aka functions without side effects) makes it easier to control state.

In practice, side effects exist and are required for programs to do anything useful.

In my opinion, one mistake of many purely functional languages was to be so focused on this purity that it made it needlessly hard to write useful code in them, especially for people coming from an imperative/procedural/OOP style of doing things. And you need these people if you want your method to gain traction, because the vast majority of code written is imperative.

The irony is that FP could probably have had a lot more success if it didn't clamour on about pure functions so much, and was less focused on implementations (aka languages) than on methodology (aka coding style), because it is perfectly possible to write pure functions in most languages, including OOP languages, even if those functions are not "pure" internally, or are not "pure" all the time and under all circumstances.

And yes, doing so has really nice advantages. I have refactored quite a lot of codebases into using a more functional approach, and what I found was that this makes it harder to introduce bugs, easier to track bugs, and easier to reason about my code.

So yeah, functional programming, used if and where it makes sense, does work, and is useful.


> The irony is, that FP could probably have had a lot more success if it didn't clamour on about pure functions so much

Except functional languages like the Lisps and the MLs never were pure; the only pure one used by a significant number of people has been Miranda/Haskell (ignoring Coq and the other proof assistants). Or, to put it in other words: ML (no, not that ML) turned 50 this year, Scheme is older than 50 too, and Miranda/Haskell is ~36. There has never been a shortage of "impure" functional languages since OOP existed.


Yes, there have. And not a single one of them was able to even gain a sizeable fraction of the mindspace that imperative languages have, let alone replace or obsolete even a single one of them.

So maybe it's time for FP as a whole to accept the fact that there seems to be something fundamental about the way its paraded implementations look and feel that puts off a lot of programmers.

Maybe it's time for FP to accept that the paradigm as a whole has a lot to contribute that is useful to everyone, but doesn't need a new language with largely different syntactic constructs to do so.


Strict/constrained things are generally less appealing. That does not mean they are not the right approach. I don't agree with the popularity contest approach though.

Many languages are in use for different things without the need for a language to win. It is a bit anthropomorphic to approach tools like that IMO.

Instead, there has been a healthy influence of more research FP languages into mainstream languages (as you mention), more interesting experimental languages, etc. Aka everything working as intended.


> I don't agree with the popularity contest approach though.

Insofar as it didn't prevent the good parts of FP to become mainstream features, I agree.

But the popularity contest IS important for language emergence. I still believe there would be a place for a functional language that strives to adhere to the principles more strictly, and that programming could benefit from such a language gaining a lot of traction. But, as mentioned before, if such a language requires people to deal with a very different syntax, or applies its principles too rigidly, it will likely fail the popularity contest.

It also is important for paradigms to become pervasive. As I mentioned, FP has a lot to teach even to OOP people. Writing functional code is a great way to organise a program.

But tell that to a young software developer who has only ever been served enterprise spaghetti OOP pasta with extra abstraction sauce garnished with pseudo-encapsulation cheese, and for whom "functional programming" is something he only saw ridiculed as a meme on some subreddits.

No, popularity isn't the only important thing. But it can help things to reach their potential.


Lisp is not a functional programming language.


> FP could probably have had a lot more success if it didn't clamour on about pure functions so much

Note that there are many functional languages which don’t care about purity at all: for instance, Scheme, OCaml, SML and Elixir. I get annoyed when people confuse Haskell for the broader paradigm of functional programming. (Even though Haskell is my main programming language!)


An "impure FP" is a procedural programming language, because "impure functions" are generally called "procedures".

But people tend to avoid the name "procedural" at all cost - which is bad, because procedural programming really has its advantages and should be clearly separated from FP, which also has certain advantages.


Haskell makes it a compile-time issue.

If you declare a function effect-free, and try to perform effects inside it, then you'll get a compiler error.

You are still allowed (and it's common practice!) to declare your functions as effectful.

Unfortunately the 'signal' of this message is often lost in the noise: for every Haskeller happily writing effects, there are 9 non-Haskellers writing that 'Haskell's big mistake was being pure and not allowing effects - you should use X instead'.
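
A minimal sketch of that compile-time distinction in standard Haskell (nothing Flix-specific):

    -- Pure: the type promises no effects, and the compiler holds the body to it.
    double :: Int -> Int
    double x = x * 2

    -- Effectful: declaring IO in the type is what permits the print.
    doubleAndLog :: Int -> IO Int
    doubleAndLog x = do
      print x
      pure (x * 2)

    -- This version would be rejected at compile time, because the pure type forbids IO:
    --   bad :: Int -> Int
    --   bad x = do { print x; pure (x * 2) }

    main :: IO ()
    main = doubleAndLog 21 >>= print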


Functional languages that require calling out side effects (e.g. "\ IO" in Flix) allow you to write functions that are "pure" (i.e. they cannot mutate data or perform IO) and will always give you the same result for a given input. Writing the majority of your code as pure functions makes the code much easier to reason about, especially as the system scales. IMHO, it also makes testing simpler.

It's definitely a different way of thinking and it has a lot of benefits. However, some of the downsides (which Flix seems to fix with region-based local mutation) might be implementing a sort that keeps two copies of a list in memory just to return the second list and discard the first. Depending on your list, this may not be an issue.


I don't worry about side effects because in practice they don't cause me many problems (although I do focus on isolating mutable state). But if I thought about it a lot then I might start to worry about the theoretical possibilities. So I think the question is well phrased. "Worry" is mainly psychological.

Other advantages of pure FP might be thread safety but queues take care of most of this (and often locks are not hard to use). Or low-level compositionality. But as Alan Kay said, what we need are bigger objects. John Ousterhout seems to agree. Re-use in the small increases complexity.


> I don't worry about side effects because in practice they don't cause me many problems

You've selected for the work that you already do.

If you wanted to use transactions, you'd find them impossible, give up and go back to not having transactions.

And once you give up on the impossible dream of transactions, you're back to no problems in practice.


I think for me the appeal is that just that all the big brain programming language research seems to happen in weird new functional languages. I assume they're just a better petri dish for experimenting with weird shit, or maybe they come up with all the weird shit to be able to get things done in the Haskell-du-jour, or maybe weird shit just gets branded as functional programming by default. (These days the fp people always talk about linear types, or effects, or other things that don't seem to involve functions at all, idk what's up with that...)

I assume that all of this gets funding because ten years later it makes C# programmers more productive, not because of mass appeal.


Oh, Datalog within the language? Exciting!

I've been looking for such a feature for some time. Do I understand correctly that the facts can be established dynamically from, say, IO functions?

Also, are you going to present at FOSDEM?


Datalog! I especially like that in the smorgasbord of features. I'm still not convinced having a sentient AI encoded in the type system is what the doctor ordered for industry at large.


Great, but \ is ugly.


That's just what I was thinking too haha. Honestly I really like the rest of the syntax though. (And of course, the type system features look neat but it's easier to have an opinion on syntax.)


Your username matches your comment!

Also, I agree but I'm not sure what I would propose as a better token. Maybe another colon?

def printAndInc(x: Int32): Int32 \ IO =

becomes:

def printAndInc(x: Int32): Int32 : IO =

Or maybe, since functions need something after the \, even for pure functions, we just drop the \ and use the last argument?

def printAndInc(x: Int32): Int32 IO =


Why not something human readable? pure vs mut?

Other languages already have readable keywords in the function definition; extends, raises, where, having, Optional, and so on. They don’t feel unnecessarily verbose.


Because they’re working on algebraic effects so obviously IO won’t be the only effect out there. Also because “mut” is even more misleading as it doesn’t capture everything an “IO function” can do, in comparison to a pure function.


”mutates IO”? Anything is better than /{}. I don’t know, maybe one gets accustomed to it after a while? It would be rather verbose to write “returns Foo” everywhere. I’m just looking at this with fresh eyes and this aspect of the language is a token-soup. Other parts look neat though.


And so are forA and forM!


This looks interesting, with lots of features packed in. Looks like a promising language.

One thing that's not clear is the reference vs. value model, lifetimes, and the mutable vs. immutable model.


Just a friendly UBC piggyback on Waterloo’s programming language ;)

Over summer, I built my own little functional language, Crumb (https://github.com/liam-ilan/crumb). Unlike Flix, the scope is tiny, but some pretty awesome stuff has been done with it. (Check out this pixel art editor in your terminal, 100% Crumb: https://github.com/ronilan/crumbicon).

There’s a template (https://github.com/liam-ilan/crumb-template) and vscode highlighter (https://github.com/liam-ilan/crumb-vscode) for anyone who wants to mess around with it. Any feedback super appreciated :D


> Unused definitions, type declarations, etc. are compile-time errors.

This is disappointing. One of my main grievances with Go is that it's very difficult to prototype code because it won't compile with unused variables. It massively slows down development. It annoyed me so much that I eventually forked the Go compiler and made a version where unused variables are only a warning.


You had me until JVM


It boggles my mind that any developer of a new language wouldn't use Python's significant whitespace.

You're able to fit more code on a screen, and it's less visual noise. It's basically Pareto-better. I really don't get why others don't copy it.


Playing around with ML and being forced to use Python, I find that having no block-termination character, and selecting which context a line of code is in by how much you un-indent it, is the worst.

Give me braces and rip a formatter across the whole codebase and be done with it.

Furthermore, I think that the trend of compilers/interpreters caring about whitespace formatting rules should really end. You can't ever force everyone in the world to write aesthetically pleasing code in your language; someone out there will always be able to write a complete trainwreck, and everyone disagrees on what is or is not aesthetically pleasing. Leave the problem up to configurable code linters, where it belongs. Let people make "mistakes" (in your eyes) if they want to.


Flix seems to be implemented in Scala 2.13. Should we expect Flix to work anywhere Scala 2.13 works? ie: if Scala 2.13 supports JDK 8, should we expect that Flix will not take advantage of later JVM features?


The implementation language does not affect the target "machine". Also Flix is implemented in both Scala and Flix (but GitHub does not yet recognize Flix code). Flix targets Java 11, but is moving to target Java 21, and will take advantage of all features available there, including Project Loom. We are just waiting for Java 21 to become a bit more widespread.

(I am one of the developers of Flix).


This feels like a nice language to write a spreadsheet engine on...


Literally started playing with it yesterday!

Rough edges, but I'm really liking it so far. Let's see how it will handle the upcoming Advent of Code.


Been keeping an eye on Flix. Looks like a wonderful language. Wish I could use it at work.


Sounds extremely amazing.


I'm disappointed this is not from Netflix.


Lots of interesting bits in the FAQ: https://flix.dev/faq/

Particularly in the sections titled "What features are not supported" (no exceptions or panics, so e.g. indexing has to return an Option in case it's out of bounds) and "What controversial design choices are made". Some pithy remarks towards the end as well.

To follow HN tradition and find the most controversial topic to discuss, my guess is it's either not allowing name shadowing, or divide by zero equalling zero. Or the site not working without JavaScript (see the section on that near the end, but you'll need to turn on JavaScript to read it, I guess).


I did like seeing the FAQ become more and more deranged the further I scrolled!

I'll perhaps be the first to jump on the 1/0 != 0 hate train though; they mention they designed the stdlib to avoid the partial-function pitfalls of Haskell's, but the article they linked to support their design decision of 1/0 being 0 mentions that it boils down to division being a partial function - x/0 is not a case division can handle. Would it not then be reasonable to make division return an Option?
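
For what it's worth, the Option-returning alternative is easy to sketch; here it is in Haskell with Maybe, just to show the shape (not what Flix actually does):

    -- Total division: the zero case is pushed into the return type.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    main :: IO ()
    main = do
      print (safeDiv 10 2)  -- Just 5
      print (safeDiv 1 0)   -- Nothing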


It would probably be reasonable but a lot of people would complain because they don't want to think about the zero case all the time, so maybe it's also reasonable to not do that.


I also don't like "Unused definitions, type declarations, etc. are compile-time errors" as I often want to test the validity of a statement, like a type declaration, by compiling before using it. Much prefer warnings.

I don't mind the disallowing of name shadowing, but I really, really hate (I mean that) websites that are just good ol' text and the odd picture yet require JavaScript. The excuse of "we used React, it's easy" seems odd given that HTML is much easier.


The situation here is not unlike the problem with null: If you allow null in your language, you will have NullPointerExceptions at runtime. Unused local variables (and other unused constructs) are known to be correlated with bugs (see e.g., Xie and Engler, 2002), so if you allow them, you will allow these bugs. So, by enforcing the absence of unused constructs, a large class of bugs is eliminated.

But I agree, this can be cumbersome. So, Flix allows any unused construct to be prefixed with an underscore to mark it as unused. (But underscored things cannot be referenced.) This seems like a pragmatic trade-off.

(I am one of the developers of Flix).


I still think it is a bad decision: you still have to recursively go back a random depth and make a random number of modifications on each level for no benefit (it could be an error in the release version, but it is just annoying for development).

What benefit does it have in development-mode?


> Unused definitions, type declarations, etc. are compile-time errors

Any language that does this is beyond unusable to me (notably Go and Zig). Like, I would literally fork the compiler before using them with that “feature” on. It's a completely braindead thing to do — like, okay, have a separate production release mode and make it an error there. But when quickly testing out stuff, you will inevitably comment something out, which makes a variable unused. Commenting that out will also make something else unused, so you literally have a recursive problem of some random depth, and depending on how deep it goes, you will lose all of the context on what you wanted to debug in the first place.

There is literally zero advantage to this idiotic bullshit; my proposed solution of doing it only in prod releases gives all the benefits with none of the negatives.


What you describe is exactly what the V compiler does.

It produces warnings for unused variables, and warnings become errors when you want to compile a production executable with -prod, so you can get the benefit of prototyping without losing your flow, while still being required to clean up, when you are finished, before release.


Zig and Go both let you silence the error with `_ = myvar`. It can still be annoying, but it avoids the recursive problem you mentioned. Language design is hard, and it's best to not just assume that people are "idiotic" for not thinking the way you do. Turn the volume down a bit, and maybe find a synonym for "literally" for the sake of variety.


So now you just silenced a warning and transformed it into a semantically correct form you have no way of recognizing from afar, making the original problem 10-fold worse.

It is a brain-dead feature, verbatim if you prefer that word.


You know, I used to think that design was a nuanced grappling of complex tradeoffs, but you convinced me: those you disagree with are just "brain-dead" "idiots". I think there is a relevant quote by Charles Bukowski about confidence, but it escapes me...


There often is nuance. I haven’t seen any positives for this feature, neither here nor in the relevant GitHub issues for the languages in question, which were quite argumentative.

But feel free to give me the nuance to this design issue that you so miss.


I can't imagine why they would be argumentative with a gentle diplomat like yourself.


I was only a spectator.


I think Go made the same silly choice.

Make those warnings in debug/test builds, and errors in prod builds.


> We use JavaScript for the online code editor.


Doesn't mean the whole site needs it though, just one page.


I think the most controversial feature is actually its effect system. Unlike division by zero, which is typically a rare occurrence, the effect system permeates the language and must be learned before one can write usable programs. This requires a new and refreshing mindset: I write pure functions, but inside each function, I can use mutable data structures to get the job done!

(I am one of the developers of Flix)


From the website "Flix is inspired by OCaml and Haskell with ideas from Rust and Scala".

Inspirations and ideas from four of arguably the most complex languages of the modern world; all the best with that.


What a sad take…

These languages put forth ideas that are extremely powerful. They might be unconventional from an imperative programming lens, but to call them all “complex” as a justification for discarding them entirely is pretty shallow.


Sorry to be pessimistic, as they always said the fruit does not fall far from the tree, but I'm hoping that Flix can prove me wrong.


In today's world, I think that English is the only programming language that people should focus on.

With the rapid rise of AI, most tasks will soon involve the management of AI models rather than writing code. However, it is still important to have a basic understanding of coding.

Introducing a new programming language at this point seems silly to me.


This reads like a take from a mid-level manager rationalizing that they never learned to code, and convincing themselves that they'll never need to.


I've been a dev for 25 years. I write code every day.


Neat!


Security-critical tasks and programming language research will involve humans coding for the foreseeable future. I don't think you'd like your surgical robot to be programmed in English.


Yes, because English is a cut-put language. So instead of cutting some organ (open) (to treat it), the robot might put it in the dustbin instead, due to a speech recognition bug.

;)

https://news.ycombinator.com/item?id=38157851


Lol. The random bullshit generators can write basic code, or make auto-complete smarter, but good luck making one solve complex problems, solve obscure problems without generating bullshit (aka hallucinations) or reason about large codebases. Random bullshit generators see far too much hype and will never replace programming languages for most things, where determinism is needed.


People have had tales about malevolent genies for an eternity. Being able to program “in English” doesn’t make the task considerably easier. The actual hard part is dealing with all the edge cases.


I thought that jokes were not allowed here



