The Unison language – a new approach to Distributed programming (unison-lang.org)
280 points by guytv on Nov 17, 2022 | 113 comments



I think they are making a mistake that's common in this sort of project: trying too many new things at once!

They already have a very innovative way of managing source code, with a database of definitions that keeps the hash of the syntax tree instead of actual source. That's a very neat idea that solves many problems (read their docs to understand why).

But instead of developing that well enough so that it works with source control tools and IDEs, and can be deployed easily and painlessly on existing infrastructure... no, they decided to ALSO solve distributed computing, a really, really complex domain with a pretty crowded space of solutions... and they seem to be focusing on that now instead of the "original" ideas. Looks like a huge issue with scope creep to me... unless they are kind of pivoting to distributed computing now only because the original ideas were not attractive enough for people to embrace, but I have not heard anything like that; everyone seems to be pretty vibed by those things.


The two areas of managing source code and distributed computing are not as disjoint as you make them out to be in the context of Unison. They follow from the underlying principle of addressing functions not by their name but by a hash of their normalized syntax tree (ie their code).

There are a bunch of cool implications for distributed computing, namely that you can easily distribute fine grained parts of your application across servers and that you can cache the result of expensive calculations.
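
To make the caching point concrete: because a computation's identity is the hash of its normalized syntax tree, that hash is a stable cache key across machines. A purely illustrative sketch; `hashOf`, `Cache.lookup`, and `Cache.insert` are hypothetical names, not a real Unison API:

  cached : '{IO} a ->{IO} a
  cached comp =
    key = hashOf comp            -- hypothetical: the hash of the closure's syntax tree
    match Cache.lookup key with  -- hypothetical shared cache
      Some result -> result      -- this exact computation already ran somewhere
      None ->
        result = !comp           -- force the delayed computation
        Cache.insert key result
        result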


> They follow from the underlying principle of addressing functions not by their name but by a hash of their normalized syntax tree (ie their code).

The first time the language or compiler changes such that the same code generates a different syntax tree they'd have to do something pretty fancy to avoid rebuilding the world. (That, plus all the usual caveats about what happens when old hash algorithms meet malicious actors from the future.)


(Unison Developer here.) Yes, we've done two such migrations this year. It has typically meant an "are you ready to upgrade?" message when you start up, and the migration took less than a minute on my pretty large codebase. It's not a big deal.


It's a solved problem technically, certainly, but assuming all of the relevant source code will always be available ignores some social and legal issues.


The relevant source code is not available; we don't store any source code. All the relevant dependencies MUST be available in AST form, though.

I don't know what the social and legal issues might possibly be, though I might be missing something. What do you have in mind there?


One issue I was thinking of was companies that distribute code in binary form only, not ASTs or anything which could be used to steal their thoughts. The other issue was in reverse, however: A binary-only package is unmaintained and lists as dependencies hashes that no longer exist because they're the hashed versions of ASTs that, for one reason or another, the compiler won't generate, even if the source still exists. Versioning and archiving would help this case, at least.


It's a common approach in language design to require this though. Rust has made very similar design choices due to its (current) lack of stable ABIs. IIRC some hash of the compiler version is included in built libraries and prevents accidental linking. You need to rebuild the world for every toolchain upgrade.


Also if you change one very low level function (maybe something in the runtime, Unicode handling etc.) you'd also have to recompile the world. In some ways it's nice to reference things by a name, and let the implementation change without needing to care about the details. It's semver's raison d'etre


To be fair, for a while they were ALSO working on their own graphical source editor that allowed for type-correct transformations and assisted refactorings. They put that on the back burner specifically because they are trying to focus on fewer things :)

I think the distributed computing problem is pretty related once you have "content-addressable" source code. Agreed that it's a lot of work but I hope it pans out!


I disagree, there's no real relationship between 'content addressable' source code and distributed computing.

Also I don't think that you need to create a new language to have 'content addressable' source code distribution.

Creating yet another language ensures that this will get nowhere. Too bad.


Well, there is a relationship. The relationship is specifically that Unison nodes can communicate code with one another unambiguously by exchanging hashes.


Would this not work just as well with a lisp or even JS?


(I'm a Unison employee who works mostly on distributed computing.)

There are a few things about Unison that make this easier:

* content-addressed code

* Unison can serialize any closure

I can ask for a serialized version of any closure, and get back something that is portable to another runtime. So in a function, I can create a lambda that closes over some local variables in my function, ask the runtime for the code for this closure and send it to a remote node for execution. It won't be a serialized version of my entire program and all of its dependencies, it will be a hash of a tree, which is a tree of other hashes.
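
A minimal sketch of what the sending side can look like, using Unison's `Value` builtins (I'm going by the description above, so treat the exact signatures as assumptions):

  shipSummand : Nat -> Bytes
  shipSummand n =
    addN = x -> x + n  -- a lambda closing over the local variable n
    -- ask the runtime for a portable representation of the closure:
    -- a tree of hashes, not the whole program
    Value.serialize (Value.value addN)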

The remote node can inspect the tree and ask for, or gossip to find out, the definitions for whatever hashes in that tree it doesn't already know about. It can inspect the tree for any forbidden hashes (for example, you can't do arbitrary IO), then the remote node can evaluate the closure and return a result.

In other languages, perhaps you can dynamically ship code around to be dynamically executed, but you aren't going to also get "and of course ship all the transitive dependencies of this closure as needed" as easily as we are able to.


In a simple Lisp machine, such as something resembling PicoLisp, I can't see why not - iff, instead of car and cdr being just linear addresses in memory, they themselves were hashes.

Since everything is made up from car and cdr, it's easy going from there. There is no difference between running locally, or anywhere. Just look up the data by request or gossip, as you said.

(For performance reasons, one might want to let a cons which represents "source" smaller than the size of a hash be a literal representation instead of hashed. No need to do a lookup from a hash when you can have the code/data in the cons itself. Analogously: don't zip a file when the resulting zip would be larger.)


I'm not totally up to date on lisp and don't know anything about PicoLisp, so forgive me if there is stuff I'm missing :) but lemme try:

Let's say you wrote an imaginary program to sum a column in a CSV:

  (defun my-program ()
    (let* ((raw-data (s3-load-file "https://..."))
           (parsed (csv-parse raw-data))
           (column1 (csv-column parsed 1)))
      (mean column1)))

In this pretend program we are using some 3rd-party S3 library to fetch some data, and some other 3rd-party CSV library to extract the data. Now I want to run this on some other remote node. I know I can just send that sexp to the remote node and have it eval it, but that is only going to work if the right versions of the S3 and CSV libraries are already in that runtime.

I want to be able to write a function like:

(defun remote-run (prog) (....))

that can take ANY program, ship it off to some remote node, and have that remote node calculate the result and ship the answer back. I don't know of a way in Lisp to ask the runtime to give you a program which includes your calculation and the S3 functions and the CSV functions.

In Unison, I can just say `(serialize my-program)` and get back a byte array which represents exactly the functions we need to evaluate this closure. When the remote site tries to load those bytes into its runtime, it will either succeed or fail with "more information needed" plus a list of additional hashes we need code for, and the two runtimes can recurse through this until the remote side has everything needed to load that value into the runtime so that it can be evaluated to a result.
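
The receiving side's loop might look roughly like this (a sketch: `fetchDefinitions` is hypothetical, and I'm assuming `Value.load` reports missing dependencies as a `Left` of term links):

  loadFully : Value ->{IO} a
  loadFully v =
    match Value.load v with
      Left missing ->
        fetchDefinitions missing  -- hypothetical: gossip/request code for these hashes
        loadFully v               -- retry until everything resolves
      Right result -> result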

Then, of course, this opens us up to the ability for us to be smart in places and say "before you try running this, check the cache to see if anyone ran this closure recently and already knows the answer"


In Lisp there are symbols and lists. The symbols could be pointers to content-addressable hashes, or inlined content. So in the namespace you could require the library by its content-addressable hash and give it a symbol definition, or just refer to the hash itself.


I'm not saying there exists a Lisp with a turn-key distributed cloud runtime. Like the sibling answer, I'm saying it's not super complicated. Instead of loading an .so file the regular way, load it via hash.

The car/cdr nature of Lisp makes it almost uniquely suited to distributed runtime IMHO.

As for your last sentence, this touches on compilation. Solve this, and you also solve dependency detection and compilation. I feel there are so many things which could/should converge at some point in the future: IDE / version control, distributed compute, storage.

An AST (or source) program could have a hash, which corresponds in a cache to a compiled or JIT-ed version of that compilation unit, various eval results of that unit, etc. So many things collapse into one once you start to treat everything as key/value.


Is Unison committed to pure functions in order to support this?


Yes, Unison is a purely functional language.


Well, it could be done in JS or Lisp. You'd have to replace all the references to dependencies in every function with a hash of the implementation of the referenced function and use some kind of global lookup table (which would need to be a distributed hash table or something). But this would be slow, so you'd need a compiler or some kind of processor to inline stuff and do CPS transforms, and by that time you've basically implemented a janky version of Unison.


Or a beautiful, conceptually simple version of Unison. In any case, not more convoluted than for instance Javascript V8 already is to get the performance it has.


The key idea is that Unison code is in a sort of "normal form", such that e.g. what names you give things doesn't matter at all.
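
For example (a small sketch): these two definitions differ only in the names they use, so after normalization they have the same syntax tree, hash to the same thing, and are stored as one definition with two names pointing at it:

  increment : Nat -> Nat
  increment n = n + 1

  addOne : Nat -> Nat
  addOne x = x + 1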


Yes, it does work with JavaScript. At Yazz that is exactly how we store our code: by its IPFS hash.


or Erlang.


How about WASM? Interop deluxe.


An issue with exchanging WASM is you'll have to do either dynamic or static linking of dependencies. Unison works around this by having all definitions share a global address space (and the address of any given code in that space is the hash of its syntax tree), so there is no linking step. Or rather, the linking is done trivially by the hashing.


> Creating yet another language ensure that this will get nowhere, too bad.

What has happened before, consistently, is that research or proof of concept- style languages pave the way for bigger players to take the ideas and incorporate them into existing or future mainstream languages.


My impression is that the original use case was distributed computing, and the content-addressable stuff took on a life of its own after that.


This is completely correct. Unison was always a distributed computing project, even when it was just an idea.


I thought that was sort of Paul's goal, to explore a bunch of new ideas and paradigms. I doubt he's under any illusions that this language, qua this language, is going to see wide adoption or need to have all of its features really ironed out with a fixed/stable API. But I could be wrong!


Then that flips it back to being very cool that he is putting his energy into carrying a few batons far enough for others to pick them up.


Agreed, I think it's great to have really cutting-edge, highly experimental sort of research languages that explore new paradigms. Doesn't mean you have to like the language, but I think it's good that some people are doing this kind of work as a going concern and not just some repo that was pushed up once somewhere and languishes. He's got some interesting ideas in there related to effects system, etc.


> carrying a few batons far enough for others to pick them up.

I love that metaphor btw. Gonna use that.


> I doubt he's under any illusions that this language, qua this language, is going to see wide adoption or need to have all of its features really ironed out with a fixed/stable API. But I could be wrong!

They have VC funding and employees. I presume he's told investors that it will see at least fairly wide adoption!


Ha! Did not know that. Not necessarily dispositive though. Also, I believe they are a public benefit corporation so that would also cut in favor of my point...I think


You have it backwards. The source code management (as a marketable feature) came at least one year and maybe two after the distributed computing parts. The earliest Unison demos (circa 2016) were "build a distributed Google crawler in just a few lines of code". I think the hashes were always a component, but not part of the real messaging for quite a while.


Sounds like `Opa!`, which was a language plus a Meteor-like framework. The language itself was really nice, with some interesting type-level features, but the built-in framework was the selling point. But all that ultimately got in the way of doing anything that wasn't built into it, which ended up being pretty much anything beyond toy prototypes. The language got dragged down because of it.

That said, it came out around the same time CoffeeScript did, and nobody uses CoffeeScript anymore either. So probably the fate was inevitable regardless of whether the framework was included in the language or not.


If we want to set sail on the ocean, we do not take a basket and start work to stop it from leaking. We build a boat.


> I think they are making a mistake that’s common in this sort of project: trying too many new things at once!

The one thing identified as "the big idea" of Unison seems to me to be conceived as a solution to a distributed computing problem that incidentally also solves a number of problems outside of distributed computing (and within it as well, such that fleshing out how it can solve them enhances Unison as a distributed computing solution while also providing side benefits).


> read their docs to understand why

Hard pass, but thanks for the suggestion. I'll start with the home page and the examples, and most likely end there.


The reason for the source code structure is what it enables for distributed computing.

It makes it simple to transparently ship the code (not just data) around to any worker node. The point of Unison Cloud is to make the difference between AWS EC2 and Lambda disappear.


That is the perfect programming language landing page.

The Hello World example introduces one of their concepts that you might not see every day, then they show a little algorithm, then a practical "stuff you need to get work done" example.

I got an immediate sense that the language has some familiar "ML family" type features (like F# or Scala) but also some distinctive aspects.


Oops I posted something similar to this. Agree completely.


This project's hashing of source code (or the AST) in the interpreter is a really powerful idea.

I plan to use it for a slightly different purpose. I want to implement an interpreter that is multithreaded, similar to Java or Erlang, and can send objects between threads through shared memory without marshalling or copying.

I had a talk with someone on HN https://news.ycombinator.com/item?id=32907523 about Python's Global Interpreter Lock, and we talked about how objects are marshalled between subinterpreters due to object identity. The identity of an object is defined at creation time as the hash of that object.

If the hash of the object were its source code, two interpreters could load the same object hierarchy and send data by hash reference.

I wrote a multithreaded interpreter that uses message passing to send integers and program counters to jump to code in other threads.

This is at https://GitHub.com/samsquire/multiversion-concurrency-contro...


IIRC Pony lang's messages could work like that.



Thanks! Macroexpanded:

Unison Programming Language - https://news.ycombinator.com/item?id=27652677 - June 2021 (131 comments)

Unison: A Content-Addressable Programming Language - https://news.ycombinator.com/item?id=22156370 - Jan 2020 (12 comments)

The Unison language - https://news.ycombinator.com/item?id=22009912 - Jan 2020 (141 comments)

Unison – A statically-typed purely functional language - https://news.ycombinator.com/item?id=20807997 - Aug 2019 (25 comments)

Unison Language March Update - https://news.ycombinator.com/item?id=19528189 - March 2019 (1 comment)

Unison: a next-generation programming platform - https://news.ycombinator.com/item?id=9512955 - May 2015 (128 comments)


And the one from 8 years ago:

Unison: a next-generation programming platform - https://news.ycombinator.com/item?id=9512955 - May 2015 (128 comments)


whoa, the macroexpander (me) must have a bug. Added to the list now. Thanks!


- Function definitions stored in a content-addressed database.

- Dependency management handled the same way Nix handles it.

- Some kind of object storage system that uses content-addressable structures as the schema.

- Hyperlinked codebase.

- Human-readable function names as (essentially) git tags.

These ideas are all pretty nice. A dedicated IDE for this language would be a lot of fun to work with. The debugging story likewise seems like it will be pretty solid. I'm not sold on the zero-config storage layer: basic object retrieval is different from a schema prepared for query performance. I'd like to learn more about the concurrency and synchronization story.


I think that content addressability works really well for code, at the bottom layer. But in order to be practical, you typically need:

Naming and resolvers, in order to be human friendly. This isn't easy to get right, but we have a lot of prior art in dependency management systems.

Persistence layers with GC, like cache and DB. You're gonna want to fetch and prefetch in ways that are quite advanced. You don't want to be blocked in a critical section by network-fetching the leftpad function.


On a tangential note: do you like this home page as much as I do? It really draws me in as a programmer. I like all the code examples up front, and their mission statement ("A new approach to distributed programming / No more writing encoders and decoders at every network boundary") seems to be quite clear. To me it's one of the best landing pages I've run across.


This strikes me as a compiler and IDE feature, rather than a reason to have a separate programming language.

What does having a separate language give us, as opposed to taking, say, Kotlin and having a compiler that stores the AST/whatever in a database and does all the interesting goodies?

Or is that the end goal, but we're using a basic language to test it out and work out all the quirks before writing a compiler for existing languages?


"Each Unison definition is identified by a hash of its syntax tree." [0]

I remember many years ago, i had the same idea on how to correctly version a dependency.

[0]: https://www.unison-lang.org/learn/the-big-idea/


Does Unison have stack traces? Using the hash of the ast as the only identifier seems like it would lose some useful runtime debugging information.


While hashes are the primary identifier, things also generally have names associated with them. (Unless you deliberately remove the names that is.)


Sounds oddly similar to git. Git commits are identified by hashes.

And git commits "are" (versions of) programs.

So what does Unison have that git does not?


Mostly, it heavily normalizes code, to the point where a name change will not register as a change to the program.


Yes, we have stack traces. You might see some hashes in the middle of the stack traces, but whenever we do have a name, we'll show you the name. So you might be calling into an anonymous closure, but the functions in your call stack will typically have names available.


How does Unison handle non-commutative abilities (i.e. abilities where the order in which one applies handlers matters)? Does it just assume that abilities are commutative? Or rely on the programmer to make sure that handlers are applied in an order that makes sense?


It is up to the code handling the abilities to decide in which order to handle the abilities (or to handle them all at once).

I can't think of any cases where it would make a difference in which order they were handled, however. Can you?

I think perhaps it might in the case where abilities themselves were able to make requests of other abilities, but that's not something allowed by our type system currently


> I can't think of any cases where it would make a difference in which order they were handled, however. Can you?

Presumably

  someAction : '{Choose, Abort} a
if it's handled by `Choose.toList` and then `Abort.toOptional`, in that order, you end up with `Optional [a]`, whereas if you do it in the other order you have `[Optional a]`, right?
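
Sketched out, assuming handlers shaped like `Choose.toList : '{g, Choose} a ->{g} [a]` and `Abort.toOptional : '{g, Abort} a ->{g} Optional a` (both signatures are assumptions here):

  -- handle Choose first, then Abort: one Optional around the whole list
  chooseInside : '{Choose, Abort} a -> Optional [a]
  chooseInside someAction = Abort.toOptional do Choose.toList someAction

  -- handle Abort first, then Choose: one Optional per collected result
  abortInside : '{Choose, Abort} a -> [Optional a]
  abortInside someAction = Choose.toList do Abort.toOptional someAction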

N.B. the reason this is theoretically important is that `someAction` may be written with the assumption of e.g. certain short-circuiting behavior in mind and the "wrong" order of handlers might cause different short-circuiting behavior. In other words there's no consistent semantic interpretation you can assign to `someAction` even if you establish certain invariants that your abilities and ability handlers individually satisfy, since the global configuration of your ability handler changes what `someAction` means.

I think the jury is still out on whether this is a practical issue for any language that doesn't try to focus too hard on code having formal semantics (which is most real-world languages). I can definitely craft "real-looking" code that would be buggy depending on the order of handlers, but I'm not personally sure how much of a problem that actually would be for people familiar with the issue.


Thanks for the reply :)

> if it's handled by `Choose.toList` and then `Abort.toOptional` in that order you end up with `Optional [a]` whereas if you do in the other order you have `[Optional a]` right?

I mean, not necessarily. Something that handles a `'{Abort} a` doesn't necessarily produce an `Optional a`. It could produce a Boolean, it could produce an Int, whatever. This is really up to the handler. But I still don't see how you produce a "bug" because you can dispatch the requests to abilities in either order. You couldn't, for example, ever produce a (well-typed) situation where a call to abort doesn't abort, or a call to Choose.toList would fail to produce a list. (Perhaps if toList were allowed to also use {Abort} but it is not)


Maybe I'm misunderstanding how abilities work, but I think I can break Exception.bracket with this right?

I think if I pass something that has e.g. `{Abort, Exception, IO}` to `bracket` and then handle Exception before Abort, my Abort handler can break out of `bracket` before the finalizer action can run.

More generally I must always process any ability with a "bracket"-like function last to prevent this from happening right? And if I have multiple abilities that all have "bracket"-like functions they can step on each other's toes?

Even more generally I think any sort of "scoped" function in an ability has this problem.

This theoretically seems scary (imagine you have some big complicated action that does some bracket deep under the covers to e.g. release file handles; if I handle abilities in the wrong order the file handles might not ever be released, even if I have individually reasonable ability-handler pairs that locally don't do anything silly), but I'm not sure practically how often this comes up, and how much "just always handle Exception last" fixes that (i.e. how unlikely it is for any other ability to have a bracket function).


This is true: if you write `bracket` for e.g. `Exception`, then you can break out of the bracket with `abort` if your bracket doesn't handle aborts.

So if you want to be really sure of resource cleanup, you should use something like the `Resource` ability to acquire resources:

https://share.unison-lang.org/@runarorama/code/latest/namesp...


Got it. But you're locked into IO and Exception then and can't use any other abilities right? I guess you could always convert back and forth at the `run` boundary.

Also I would suggest removing Exception.bracket from base then, or at least changing the file handle example for Exception.bracket, since it seems a tad dangerous.


The latter.


I had to dig for the distributed part but it's outlined here: https://www.unison-lang.org/articles/distributed-datasets/


But the only runtime available for this sort of application seems to be "Unison Cloud", which is a hosted service.


They plan on making an open-source, host-it-yourself version eventually.


> Other tools try to recover structure from text; Unison stores code in a database

Just like VisualAge in the last century. Sign me up, that was great stuff.


Smalltalk had a lot of good ideas!


I loved it, except that it seemed to periodically corrupt the source code database which was an absolute nightmare.


Does anyone have a comparison with Darklang and its ecosystem/goals? Some of the things mentioned sound similar, at a high level.


I really like the idea of Unison, but unfortunately when I went to try it out, it was much slower than even Python. I hope they can make it performant!


In one of our most recent blog posts we talked about the progress we are making on just-in-time native compilation:

https://www.unison-lang.org/blog/jit-announce/

We are expecting it to be a monumental speedup for us. We have some promising results so far, but we haven't yet ported all of the runtime.


How does the runtime manage different levels of trust between clients? How does the language grapple with code injection vulnerabilities? If I want to be able to receive data from an untrusted entity, but not code, how do I make sure they're not submitting code to my server to unpack and execute?


You can simply expose an HTTP endpoint that receives the data. You wouldn't want to expose the internode protocol endpoint to the internet.

That said, it wouldn't be as bad as it sounds. Unison is a purely functional language, so if you don't explicitly provide the ability to e.g. do arbitrary I/O, then other nodes will not be able to send you code that does I/O. It will not type-check.


Yes, and we additionally have functions built into the runtime that let you inspect a term before you evaluate it, to make sure it doesn't call any "forbidden" functions.

So in our cloud runtime, we blacklist EVERY IO function, then give you back the ability to do, for example, HTTP requests, but not any other network IO. We won't let you open arbitrary files, but we'll provide ephemeral / persistent block storage through some other runtime ability.

This could also be used to do something like blacklist functions with known security vulnerabilities to catch people that aren't applying their patches!
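
As a sketch of that gate (I'm recalling a builtin along the lines of `validateSandboxed`; treat its name, its signature, and the `Generic.failure` usage here as assumptions):

  runChecked : [Link.Term] -> '{IO} a ->{IO, Exception} a
  runChecked allowed comp =
    -- true only if the closure's transitive dependencies stay within `allowed`
    if validateSandboxed allowed comp
    then !comp
    else Exception.raise (Generic.failure "forbidden function in closure" ())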


Re the HTTP endpoint: that's all well and good, it just requires serialization and deserialization to a different protocol at the endpoint, which the docs led me to believe you wanted to avoid.

There is probably an opportunity here to do interesting work around authenticating and validating computations from remote clients.

A word of caution -- JavaScript engines routinely get hacked, and once attackers can execute native code in the engine process they can call syscalls and have all of the rights of the underlying process. It may be useful to have some form of sandboxing of the language runtime for clients exposed to the internet or intranet. Additionally, Java and Ruby web servers routinely suffer from code injection when deserializing objects, which it seems like your language may be prone to as well.


Can someone please ELI5 for me how to get started?

I downloaded and ran it, it created folders NOT where I told it to, and started.... but no command I type seems to result in anything other than an error message.

How can I get it to add 5+5, without using an external editor?


You're not going to be able to just write code into ucm (the Unison Codebase Manager) because it's not a REPL. You'll need some sort of text editor to actually write the code it will then add to the codebase.

https://www.unison-lang.org/learn/quickstart/

If this quickstart tutorial doesn't work for you, then there's a bug on Unison's end.


I’m excited for ideas like this to become mainstream. Today’s approaches to heterogeneous and distributed computing are how I imagine single core computing was 40 years ago. You have to manually manage practically everything. Instead, let the compiler or interpreter or whatever figure out where to actually run it (CPU vs ALU vs GPU vs remote machine #42), what to keep in what part of cache (L1 vs L2 vs RAM vs disk vs S3), etc.


We have, though. Spark, for example, does this in just about every language. It's been around for ages, is liberally licensed, and deployed at scale in thousands of enterprises.


Well, Spark does allow you to accomplish distributed workloads for certain forms of computation. But it's limited to those forms of computation (streaming, map-reduce). It also has a large operational footprint. It's also lamentable that distributed code that uses Spark looks nothing like its non-distributed counterpart.

Something very much like Spark map-reduce can be implemented in ~100 lines of Unison code:

https://www.unison-lang.org/articles/distributed-datasets/

Some videos on Unison's capabilities over and above Spark:

Distributed programming overview: https://www.youtube.com/watch?v=ZhoxQGzFhV8

Collaborative data structures (CRDTs): https://www.youtube.com/watch?v=xc4V2WhGMy4

Distributed data types: https://www.youtube.com/watch?v=rOO2gtkoZ3M

Distributed global optimization with genetic algorithms: https://www.youtube.com/watch?v=qNShVqSbQJM


Spark still references functions by name though, right? So if a peer says "call this function" I have to trust both that peer AND whatever system resolves the name to some cpu instructions.

Plus you're significantly limited in how much you can memoize if you can have different versions of the "same" function.


spark is great for distributed computation.. also has about a million config switches and is generally kind of 'bulky'. EMR makes management a lot easier, but you still have to fiddle with num executors, memory, etc. But it has been 'through the wars' and is generally pretty solid on some pretty large data sets. Once you get it conf'd it's pretty good. The best part is just writing the scala code to run the job.. admittedly, it would be great to use something a bit lighter for certain workloads.


Sanjay already had a somewhat similar approach to managing the scheduling of concurrent RPCs: RPCs are enqueued, and a scheduler looks at them and figures out a windowed batching order to optimize the overall execution time (here the concurrent RPCs are interdependent).


That sounds like a slightly different problem. With Unison it looks like any function can transparently cross network boundaries, versus the efficient scheduling and dispatching of concurrent RPCs.


An answer to the question "what useful thing has been built with Haskell"?


Here's a couple more that I've found useful over the years:

- pandoc: convert (almost) any document format to (almost) any other document format

- The Elm Compiler: Possibly the most widely used statically typed pure frontend language

- XMonad: A tiling window manager for linux that I enjoyed using even without knowing much Haskell

- Purescript: the other widely-used (for an FP language) statically typed pure compile-to-javascript language


`'` is used to denote a delayed computation. This is hard to see / easy to miss. What is this trend of abbreviating everything down to the shortest form? What is wrong with a `delayed` keyword, for example?


(Unison dev here) I hear you! :) `'` is easy to miss so we added the `do` keyword, and at least a few of us want to substitute `delay` as the keyword. Fortunately, we should be able to do that programmatically at the codebase level if the time comes for that, so folks won't have to hunt for their single quotes to change it everywhere.
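
For example, the two spellings of the same delayed computation (a sketch using base's `printLine`):

  greet1 : '{IO, Exception} ()
  greet1 = '(printLine "hello")

  greet2 : '{IO, Exception} ()
  greet2 = do printLine "hello"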


Nil novi sub sole? It reeks of COOLs (Concurrent Object-Oriented Languages). Here's an example from the mid-'90s: https://distrinet.cs.kuleuven.be/projects/CORRELATE/


Perhaps not, but plenty of good ideas kick around for years or decades before someone finally figures out how to put them together right.


Object oriented? Unison looks pretty functional to me. And I don't see the content addressability, which is arguably the most important part.

Anyway, you'll very rarely see completely novel ideas in this space. Good combinations, compromises and applications matter a lot. Look at your favorite applied Merkle tree tool.


> Good combinations, compromises and applications matter a lot

An understatement — I’d say when it comes to language design, this is pretty much the whole game right here.


How is this different from Apache Beam? Also, any orgs using this at scale?


It's a programming language where the stored representation isn't the text of the code but ASTs, and you can call things in a distributed fashion by the hash of that AST.

So I guess it's different from Apache Beam in all of the ways.


It smells like Haskell. Perhaps more human-friendly. Nice.


No description on the home page, in the documentation, on the GitHub page... mentions that this is a Haskell-like language inspired by Haskell. No credit to Haskell, while the code base is 99.6% Haskell!


I LOVE the idea but the language itself is so ugly I don’t want to learn it. It looks really ugly. Sorry.


It's ML syntax. I'm also not used to it, but many who are consider it beautiful.


It's certainly a turn-off for some. One thing that we have as a potential future effort would be to enable an alternate surface syntax. Since we store an AST instead of source code, we could create a parser/pretty-printer for another surface syntax. Then a user of the language who would prefer something that looked like Python could switch our website's renderer to output the Python-like syntax!


What's an example of a pretty language?


I don't think this language is ugly, and it is indeed hard to say which language is objectively beautiful, but I personally have always disliked syntax that uses the ' operator for some reason. It's small - it looks like a piece of dirt on the screen, which makes it hard to read. I imagine the same arguments could apply to the '.' or ',' operators, though ' tends to appear around whitespace, while . and , appear around other characters, so there's at least some contextual information around them.


I really hate the language but I have to say Python does at least look quite nice.


Yes, it does look nice, but I wonder how you'd do e.g. pattern matching on ADTs...


Scheme! :)


Good news: the surface syntax of Unison is totally arbitrary and can in practice be swapped out. There's currently only one (Haskell-like) syntax, but in future I imagine there will be others. So you can imagine on https://share.unison-lang.org a little dropdown that lets you select what syntax you want to see the code in.


Perl4


Is this the same language from the how to create a programming language book?



