Hacker News
Unison Cloud (unison.cloud)
281 points by dcre 11 months ago | 152 comments



It looks like, to use this product, I have to learn an entirely new programming language (which appears to be a weird mashup of Python and Haskell) and a whole set of entirely new APIs, I can only host my stuff on their for-pay cloud infrastructure, and I can't use source control?

That's a lot of very high hurdles to clear. Even if this magically solved all my scaling and distributed system problems forever, I'm not sure it'd be worth it. Good luck to them though for being ambitious.


It definitely is ambitious! A multi-year effort.

This post https://www.unison.cloud/our-approach/ talks more about why such radical changes were necessary to achieve what we wanted. (In particular check out the "3 requirements of the dream" section, which walks through what the programming language needs to support to be able to do things like "deploy with a function call.")

My general take on "when and where to innovate" is: if you can get a 10x or more improvement in some important dimension by doing things differently, it can absolutely be worth it. This is the philosophy we've applied in developing Unison over the years. I am generally happy to learn something new if I know that I'll be getting something substantial out of it. Of course it can be hard to tell from the outside if the benefits really are worth the changes. I'm not sure what to say about that, other than try it out with something low risk and decide for yourself.

Besides the distributed programming / cloud stuff, I'll give a couple other examples where we gain advantages by doing things differently: by storing Unison code in a database, keyed by the hash of that code, we gain a perfect incremental compilation cache which is shared among all developers of a project. This is an absolutely WILD feature, but it's fantastic and hard to go back once you've experienced it. I am basically never waiting around for my code to compile - once code has been parsed and typechecked once, by anyone, it's not touched again until it's changed. This has saved me countless hours compared to other static languages. And I don't have to give up static typing to get this.

This sort of content-addressed caching also plays out for testing - for pure tests (which are deterministic), Unison has a test result cache keyed by the hash of the test code. This also saves countless hours - imagine never needing to rerun the same tests over and over when nothing's changed! (And having certainty that the cache invalidation is perfect so you don't need to do a "clean build just to be sure")
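To make the caching idea concrete, here's a rough Python sketch of a content-addressed cache. This is not Unison's implementation and all names are invented; the point is just that work is keyed by a hash of the code itself, so it's redone only when the code actually changes:

```python
import hashlib

# Toy content-addressed cache: results are keyed by the hash of the code
# itself, so nothing is recompiled (or re-tested) unless the code changes.
# All names here are illustrative, not Unison's internals.

cache = {}
compile_calls = 0

def content_key(source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()

def compile_or_fetch(source: str) -> str:
    """Pretend 'compilation'; the expensive step runs only on a cache miss."""
    global compile_calls
    key = content_key(source)
    if key not in cache:
        compile_calls += 1
        cache[key] = f"compiled({source!r})"
    return cache[key]

compile_or_fetch("x + 1")
compile_or_fetch("x + 1")  # second call: cache hit, no compile
compile_or_fetch("x + 2")  # content changed: cache miss
print(compile_calls)       # → 2
```

The same key works for pure test results, since a deterministic test's outcome is fully determined by the hash of its code.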

Also replied here re: self-hosting https://news.ycombinator.com/item?id=39293568


Not trying to pour cold water, but the "3 requirements" post seems to address straw man problems. There are existing solutions to each problem.

1. "Deployment should be like calling a function" isn't that the mantra of serverless? e.g. GCP Cloud Run or AWS Lambda? This is also becoming much more streamlined with server-side WASM e.g. wasmCloud.

2. "Calling services should be easy" this is what protobuf is for; cross-language client libraries that handle transport, de-/serialization, native typing, etc.

3. "typed storage" isn't this basically an ORM? I suppose it's more general since it doesn't have to be relational, but ORM ideas could just as easily be adapted to JSON blob stores using something like protobuf.

Also, storing Unison code in a database, keyed by the hash of that code, sounds a lot like using Bazel with a shared remote cache.

I'm not saying Unison isn't cool, but to win me over I'd need you to compare Unison to all these existing technologies and really spell out what differentiates Unison and what makes it better.


For me besides those 3 it's also "what happens if unison fails to attract the funding it needs and shuts down next month, do I get fucked by the proprietary solution that was made a critical part of my own business?"


You sure are doom and gloom about something that was released today


Well, the key difference is that using all those things together is very quickly going to ensnare you in a big pile of goo. That you can forgo all of that and just write functions without having to build them into Wasm or any other format with any kind of build tool is the difference. That you get typed data storage without running a DB. That there is no “deployment” whatsoever.


Until you screw yourself with vendor lock in on a proprietary language.

It's at least a pile of goo that you can take to other providers or host yourself.


The language is open source. See this reply re: self-hosting https://news.ycombinator.com/item?id=39293568


Proprietary as in you write your code for Unison Cloud, and have to rewrite the infra parts if you decide to self host.

This is why infra is decoupled from code and you need things like "deployments".


You can't self-host Unison?


For 1 and 2 it's far from that. These are not first-class supported features in programming languages and can't be well hidden by libraries. Maybe an embedded DSL could do it in a language that supports them well, à la Electric Clojure...


No worries!

It is true that tech exists that tries to make all 3 of those items easier. YMMV, but having used these technologies myself and now having used Unison + Unison Cloud, all I can say is that the Unison experience is quite different overall.

The details matter. A bicycle and a motorcycle share some common principles but that doesn't mean they're "about the same". The fine details of execution and polish can matter too: Slack is different than IRC, Dropbox was different than the million other backup services. Also, bringing a number of things in a cohesive way can lead to big improvements in the experience when it's done well.

Getting into specifics a bit, I don't think deployment with a function call is well handled by existing technologies, because of the reasons discussed in the post. In the absence of Unison's features, there is inevitably some sort of out-of-band packaging step or "setting up the environment with the right dependencies" as a precondition, instead of calling a function and having it Just Work.

Re: RPC, Unison remote calls can pass around arbitrary values, including functions and values containing functions. This Just Works. There's also no generated code which needs to be somehow integrated into your build and no boilerplate converting from the "wire format" objects to your actual domain objects you want to work with.
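For flavor, here's a hypothetical Python analogue of "sending a function over the wire": serialize its code object, rebuild a callable on the receiving side. It has none of Unison's hashing or safety guarantees and is purely illustrative of the idea:

```python
import marshal
import types

# Hypothetical analogue of shipping a function between nodes: serialize
# the code object, rebuild a callable on the "receiving" side. Unison
# does this via content hashes with real guarantees; this sketch has none.

def price_with_tax(amount):
    return round(amount * 1.2, 2)

wire_bytes = marshal.dumps(price_with_tax.__code__)    # "send"
received = types.FunctionType(
    marshal.loads(wire_bytes),                         # "receive"
    {"round": round},                                  # supply its one dependency
)

print(received(10.0))  # → 12.0
```

The fragile part, supplying the function's dependencies by hand, is exactly what content-addressed code makes automatic: the hash pins down the dependencies too.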

My experience with ORMs is they are overly opinionated / magical and provide insufficient control for many projects. So every project I've worked on ends up not using ORMs in favor of a layer of boilerplate for talking to the storage layer... which still can't store functions or even sum types properly! And it's not typechecked, either. Using our cloud's storage, I get to use whatever data structures I want, can write my own easily if needed, and I can store any value at all including functions and again it Just Works. And access is typechecked. It's pretty great!

When you put all these things together in a single cohesive programming environment, with a common type system and language, uniform composition, a set of tools all meant to work well together, you really start to see how different it is! It already feels like a huge step up, and will only keep getting better and better as we build out Unison and our cloud platform.

All that said, I'm kind of doubtful that abstract arguments like this will be convincing. Instead, I'd just try Unison out for a low-risk project and decide for yourself if the details are making a big difference for you.

Hope that is helpful! :) If you do decide to play around with it, feel free to come by the Discord https://unison-lang.org/discord to get help, ask silly questions, etc. We are here to help and it's a nice community.


It is so funny to defend serverless with all its crappy configuration, slow dev cycles, and vendor lock-in.


I think this sums it up: "a lot of the work you end up doing is not programming."

Programmers will happily hire a lawyer or a receptionist, but will code themselves into a fury and invent programming languages to avoid admitting they suck at ops and should hire someone.

Let's just call it what it is: the cloud is ego-driven outsourcing. Nobody wants to admit they need an ops person, so they just pay for one millionth of an ops person every time someone visits their website.


You are right, but wouldn't it be lovely to have a programming language to reduce our reliance on lawyers? (e.g., some logic language in a civil law system)


Most programs are written in a context where hiring someone is not an option.


So as an end user it's kind of like a more cohesive version of https://deno.com/ for infra, where you buy into a runtime + comes prepacked with DBs (k/v stores), scheduling, and deploy stuff?

> by storing Unison code in a database, keyed by the hash of that code, we gain a perfect incremental compilation cache which is shared among all developers of a project. This is an absolutely WILD feature, but it's fantastic and hard to go back once you've experienced it. I am basically never waiting around for my code to compile - once code has been parsed and typechecked once, by anyone, it's not touched again until it's changed.

Interesting. What's it like upgrading and managing dependencies in that code? I'd assume it gets more complex when it's not just the Unison system but 3rd party plugins (stuff interacting with the OS or other libs).


Yes, I think Deno's a decent analogue for what we're doing, though the Unison language provides some additional superpowers that we find essential. The https://www.unison.cloud/our-approach/ post has more details on why the language "needs" to change to get certain benefits. (This is not a knock against Deno, btw, I think it's an awesome project!)

> Interesting. What's it like upgrading and managing dependencies in that code? I'd assume it gets more complex when it's not just the Unison system but 3rd party plugins (stuff interacting with the OS or other libs).

In Unison, there's an all-in-one tool we call the Unison Codebase Manager (UCM) which can typecheck and run your code and talk to the code database (we use SQLite for this). The workflow is that you have your text editor / VS code open, and UCM in another terminal, watching for changes.

So if you want to edit a definition, say, here's the workflow -

1. `edit blah` brings code into a scratch file, pretty-printed. You make your changes and get that compiling.

2. You type `update` in UCM, and it tries to propagate this change throughout your project. If it can, you're done. If it can't (say because you've changed a type signature), UCM puts the minimum set of definitions in your scratch file. You get this compiling, then do `update` again and you're done. It's quite nice! The scratch files are very ephemeral and not the source of truth.

For library dependency upgrades the process is similar: you fetch the new version, then use `upgrade` to say "I want my project to exclusively use the new version". If everything's compatible, you're done. If there are incompatible changes, UCM creates a scratch file with the minimum set of things to get compiling.

One interesting benefit is you can have multiple versions of the same library in use in your project. Unison doesn't care if you do this (though it can get confusing so people tend to consolidate). But there are cases where we've made good use of the ability to reference multiple "incompatible" library versions within a project.


> by storing Unison code in a database, keyed by the hash of that code, we gain a perfect incremental compilation cache which is shared among all developers of a project. This is an absolutely WILD feature, but it's fantastic and hard to go back once you've experienced it. I am basically never waiting around for my code to compile - once code has been parsed and typechecked once, by anyone, it's not touched again until it's changed.

So… ccache?


Absolutely, but speaking as someone who has tried to get ccache to work in Azure pipelines properly...

I mean, ccache worked. But it wasn't exactly faster. Have to try again with a permanent memcached. Also, it's fiddly with paths: the absolute paths have to be the same, so if you run more than one build agent on a machine, those agents aren't going to cache each other's stuff. The "dropbox = rsync + ftp" meme is pretty beaten up, but maybe it applies here. :-)
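A toy illustration of why that path-sensitivity matters: a path-keyed cache (the failure mode described above) misses when two build agents compile identical source from different absolute paths, while a content-keyed cache shares hits. Hypothetical Python, not ccache itself:

```python
import hashlib

# Contrast a path-keyed cache key with a content-keyed one. Purely
# illustrative; real ccache has modes and knobs that change this.

def path_keyed(path: str, source: str) -> str:
    return hashlib.sha256((path + "\0" + source).encode()).hexdigest()

def content_keyed(_path: str, source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()  # path ignored

src = "int add(int a, int b) { return a + b; }"
agent1 = ("/agents/1/src/add.c", src)
agent2 = ("/agents/2/src/add.c", src)

print(path_keyed(*agent1) == path_keyed(*agent2))        # False: cache miss
print(content_keyed(*agent1) == content_keyed(*agent2))  # True: cache hit
```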


It is very experimental as well. An interesting language, but say goodbye to tried-and-true tooling, since the code exists as records in a database, not files.

This could lead to some huge advantages, and some new obstacles.

I played with the language for about a week and found it intriguing. It also seems to tackle Joe Armstrong's question "Why do we need modules at all?" -> https://erlang.org/pipermail/erlang-questions/2011-May/05876...


This is the Joe Armstrong idea I was trying to remember a while back when we were discussing NPM and micro-dependency madness.


The point of the effort is the new language. Hosting and the rest are added on top of it. So yes, if you just want to host some code you shouldn't be rewriting all of it in Unison. But if you are already a user of the language then this is a good framework for you.


++

The odd thing is unison started purely as a language. Now there's a platform.

I'd love to hear some opinions from outside Unison about how they like using this language, tooling and hosting.


> The odd thing is unison started purely as a language. Now there's a platform.

I often find the best way to understand complex things is to dig all the way back to when they were being thought up. In this case there's a blog post from 2017 that I still find useful when thinking about Unison:

https://pchiusano.github.io/2017-01-20/why-not-haskell.html

Key quote:

> Composability is destroyed at program boundaries, therefore extend these boundaries outward, until all the computational resources of civilization are joined in a single planetary-scale computer

(With the open sourcing of the language I doubt it will be one computer anymore, but it's an interesting window into the original idea)

Personally I find there's a lot to this. It's interesting that we're really, really good at composing code within a program. I can map, filter, loop and do whatever I want to nested data structures with complete type safety to my heart's content. My editor's autocompleting, docs are showing up on hover, it's easy to test, all's well.

But as soon as I want cron involved, and maybe a little state - this is all wrecked. And deployment gets more annoying, which they talk about a lot.

So I think Unison always had to have a platform to support bringing this stuff into the language, even though they built the language first.

> I'd love to hear some opinions from outside Unison about how they like using this language, tooling and hosting.

I'd like to hear this too.

Also, it would be great if there was something like https://eugenkiss.github.io/7guis/ or https://todomvc.com/ for platforms that we could use to compare Unison, AWS, etc etc. Or is there already a 7GUIs for platforms that I don't know about?


I'm not associated with Unison Computing and I haven't yet had a chance to use it for a professional project but learning the language and exploring their approach to tooling has been an absolute thrill.

I actually found out about Unison because, from my own side projects, I came to the conclusion that strongly typed, hash-addressed functions were a super compelling approach to highly modular and maintainable programming - especially for LLM-generated code, because refactorings and new function generation require very limited context, something desirable for humans but especially for LLMs. After digging around for something that did this I found Unison, and have now mostly abandoned my own tooling because Unison is so much more mature and has such competent people behind the wheel.

There is a learning curve for sure, not just with the tooling but also the language. It's a challenging language steeped in advanced software engineering principles, but I would 100% rather spend my time honing my fundamental understanding of my craft rather than learning another 20 AWS tools which are going to go out of style in 12 months. After becoming mildly proficient in Unison I feel like I have such a broader understanding of programming in general even though I've been a full time backend coder for 15+ years.

As for the tooling, it does what it needs to and does it well with very competent folks discussing and debating the minutia daily. It's a small team and that keeps them nimble with major improvements taking place each month.

Today I'd say that it excels at microservices, things you might otherwise reach for a traditional serverless function for, but it gives you far more agility and brevity to tweak the application in a surgical, controlled way that is aligned with the behavior rather than with text files. Something just feels very right about storing the AST as-is and manipulating it more directly.

Tomorrow, as more supporting libraries get built and interfaces to outside of Unison get developed, anything's possible really - I'm personally certain that we'll see some amount of continued shift towards making ASTs the source of truth so I see learning about it and following the software as an investment in myself and my future capabilities regardless of whether the future software ends up being Unison or something like it. Unison is going out of their way to do all the right things even if it's not always the most practical thing given the current corroded state of web engineering in general, so I'm eager to get in on that as much as possible.


I've been using Unison over Christmas.

I'm not affiliated with the company at all.

I built the start of a very basic site with Unison and HTMX.

https://cross-stitch-alphabet.netlify.app

In my day job I'm a Rails developer. I've been consistently frustrated at how few languages are truly composable and been getting increasingly disillusioned with mainstream languages.

So that's my context.

The not so great:

The language and principles are hard to learn. I've had to throw away what I already know about a lot of programming.

Coding inside ucm requires a very different mentality to how we build software.

The tooling is still early days and has many rough edges.

Performance is currently poor but will get much better shortly.

Abilities are incredible but demand that the user be very familiar with recursion.

Like many on here, I have lots of questions. It's not clear how migrations will work. I don't understand BTrees. If unison corp goes under what happens to my code?

Now for the good.

Unison is, hands down, the most radically joyful language I've ever used.

It's caused me to realise that most tools we use in software are faulty and primitive compared with what they could be.

The fact is that even the benefits in the marketing of Unison are just scratching the surface of what's possible in this language.

For example, by spending a few hours I made the basics of an end to end testing library that emulates HTMX with local function calls.

This, if fleshed out, would mean the holy grail for me - fast cacheable end to end tests that do not require a browser to be spun up.
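The trick generalizes beyond Unison. Here's a hypothetical Python sketch of the same idea (invented names, not the commenter's actual library): if "endpoints" are plain functions, an end-to-end test is just a function call, with no server process and no browser.

```python
# Hypothetical sketch: route handlers as plain functions, so an
# end-to-end test can exercise them without a network round trip.

routes = {}

def route(path):
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/greet")
def greet(params):
    return f"<p>Hello, {params['name']}!</p>"

def fake_request(path, params):
    """Stand-in for the whole HTTP round trip."""
    return routes[path](params)

print(fake_request("/greet", {"name": "Ada"}))  # → <p>Hello, Ada!</p>
```

In Unison such tests, being pure, would additionally be cacheable by the hash of their code.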

The possibilities are mind boggling.

I was utterly delighted by the deploy-in-a-single-function feature, something I'll never be able to give up now.

And deploying a database with schema in two lines is just jaw dropping.

Every time I use Rails now it's clear how much better our coding experience could be.

By building in Unison you get ports and adapters for free. Never have to wait for a test suite again. No infrastructure as code. No JSON. No yaml. Bliss.

In summary, it's radical. Would I run a production system on it yet? Nope.

Would I watch it keenly until it amasses a bit more momentum? You bet.

I believe whether unison succeeds or fails, this is the future of programming.

Oh and they're a delightful group of people to be around. The discord community has been beyond supportive to me whilst learning the language.


Hello! I'm a user of Unison. Been toying with it off and on since I heard about it maybe spring of last year from a Reddit post on the functionalprogramming sub.

As a somewhat stay-at-home dad I was looking to do something fun with programming, and ideally something where I could make an early impact. About the same time, I read about Tree Sitter Grammars and was looking at Lapce, a Rust IDE in its early stages, and uses TSGs for syntax highlighting.

So I ended up learning everything about the Unison syntax, going so far as to learn me a Haskell for great good to produce the TSG for Unison.

About that time, Unison started opening up Cloud to early testers. I passed because I hadn't really written much code in Unison. I'd just been writing the TSG (in C++ and JS).

But then I picked a project: write an implementation of Philips Hue's bridge API in Unison. In the process, I learned about server-sent events and wrote a library for that and released it on Unison Share, which I think of as a mix of Github and NPM (or Maven, or pypi, etc.). I also wrote a MimeType typings library.

I can't speak to the cloud stuff yet, and when I do use it, I won't have much to compare it to because my ops exposure in my profession (as a stay at home dad, haha) is limited.

That's my background, and here are my thoughts:

First, the Local UI for browsing your code and documentation is hands down the best I've ever seen. Everything you write is browsable there, and has hyperlinks to everything else. It's so money. And there's a `Doc` type as well, so you can write something like

  {{
    The first parameter of {term Internal} represents a count, and the second parameter represents a distance in 1-space.
  }}
  type Foo = Internal Nat Int

Then you can `add` and `Foo` will be stored in the current namespace, but so will `Foo.doc`, which is the content inside `{{ ... }}`. You can then delete this code from your scratch file and never think about it again.

If you browse the Local UI (you type `ui` in your `ucm` instance and it auto-loads in a browser), you can easily view the type, the doc above it, and you can click `Nat` or `Int` to be taken to the definitions of these in the base library, located at `lib.base` (`lib` is like your dependencies, like `node_modules` in JS, e.g.).

Say you later want to add a third type parameter. `edit Foo` and the current definition will be pretty printed to your scratch file. Then you can edit it, and run `update`. Anything relying on this type that can be migrated to the new definition will, and anything that can't automatically be migrated will get dumped to your scratch file for you to update manually. Once you have no more errors in your scratch file, `update` will finish it.

This feels a lot like the process of `git rebase --continue` until everything is consistent. Except here it's the code itself that `ucm` understands, not text data that `git` doesn't understand beyond "this is text with conflicts."

From one `ucm` instance, I can switch between projects. No managing folders on my computer in `/Users/foo/workspace/foo-project`, etc.

Anyway, the long and short is that once I got used to working this way, I immediately wished this existed for TypeScript as well, bc that's what I do so much of my work in. The doc generation is incredible, the source browsing is so good, and the process of updating my code is really slick. A few versions ago, it was less so, but it's been improved since then and now I really like it.

Pushing code is as easy as `push`. You can create releases of your libraries or applications by going to Unison Share, finding your project, and navigating through the simple "cut a release" wizard.

There are even types for License, CopyrightHolder, etc. so metadata about your application can be done in code. For example, the license for my mimeType library is

  LICENSE : License
  LICENSE = License [copyrightHolders.kpg] [Year 2024] mit

The type for `License` is defined as `License [CopyrightHolder] [Year] LicenseType`. There are pre-configured license types in the base library, and `mit` is one of them.

I find this to be a nice addition as well, although for many this is something to be ignored. But I like the idea of encapsulating so much of a project in code rather than in things like a `package.json` file that is brittle.

The one other thing I'd like to mention is abilities. They were hard to wrap my head around at first. I'm really familiar with monadic programming (my coworkers might say I'm too in love with it :)), and abilities are kind of like... monads, DI, and interfaces all mixed together. But there are essentially two components: the ability, and the ability handler. The ability is like defining an interface: any handler needs to know how to handle each of the "requirements" of the ability. For example, you might write an application that communicates with an API for example.com as

  ability Example where
    getAll : '{Example} [Foo]
    get : FooId -> {Example} Foo

You're somewhat defining an interface that handlers need to conform to (i.e., they must have code that handles each of the "ability requirements"). Your handler then essentially converts your abilities to more fundamental ones, or removes them completely. In an application, you're generally working your way down to only the IO and Exception abilities (there are probably some cloud abilities I am not familiar with), which UCM handles natively.

Your handler is like the implementation of an interface.

From there, you can write code using anything in that ability, and so long as some ancestor function call wraps all that in a handler, everything just works. It kind of acts like injecting your handler as a dependency of everything that is a descendant function of the handler.

I don't know if I'm effectively communicating how this works, but it makes sense for me. Those are the analogues I'm familiar with that I used to understand the ability system.
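Since abilities don't map one-to-one onto mainstream languages, here's a loose Python analogue of the interface/implementation reading above. All names are invented, and unlike Unison, Python checks none of this statically:

```python
from typing import Protocol

# Loose analogue: the "ability" is the interface the program code speaks;
# the "handler" is the implementation you wrap it in. Swapping handlers
# changes what the operations actually do without touching the program.

class Example(Protocol):
    def get_all(self) -> list: ...
    def get(self, foo_id: int): ...

def count_foos(api: Example) -> int:
    # "program code" written only against the ability's operations
    return len(api.get_all())

class CannedHandler:
    """A pure test handler: deterministic, no network."""
    def get_all(self):
        return ["a", "b", "c"]
    def get(self, foo_id):
        return "a"

print(count_foos(CannedHandler()))  # → 3
```

A "real" handler would implement the same two methods over HTTP; the calling code wouldn't change.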

Now that I feel comfortable with it, it's pretty cool!

Edit: My final thought is that the language is really nice to use (though there are some things from TypeScript I miss, they're very few, and it's certainly superior to something like Java IME). It's nice that the VCS and the documentation/code browser are built in. Being able to push to a repo, again built into the ucm program, is convenient. Everything is wrapped up nicely. And the company behind the language is extremely online and responsive. I've gotten so much help from them. I wish I could speak to the cloud offerings, but I haven't worked with it yet.


There is source control on https://share.unison-lang.org/


Unison is a language built around a pretty exciting idea. I'd drop quite a lot to go work in Unison, if only to see what the world looks like through that lens.

Tinkering on the weekends just isn't the same.


Related. Others?

Unison Language - https://news.ycombinator.com/item?id=37500406 - Sept 2023 (1 comment)

Unison Language and Platform Roadmap - https://news.ycombinator.com/item?id=36333409 - June 2023 (23 comments)

A look at Unison: a revolutionary programming language - https://news.ycombinator.com/item?id=34307552 - Jan 2023 (84 comments)

The Unison language – a new approach to Distributed programming - https://news.ycombinator.com/item?id=33638045 - Nov 2022 (113 comments)

Unison Programming Language - https://news.ycombinator.com/item?id=27652677 - June 2021 (131 comments)

Unison: A Content-Addressable Programming Language - https://news.ycombinator.com/item?id=22156370 - Jan 2020 (12 comments)

The Unison language - https://news.ycombinator.com/item?id=22009912 - Jan 2020 (141 comments)

Unison – A statically-typed purely functional language - https://news.ycombinator.com/item?id=20807997 - Aug 2019 (25 comments)

Unison: a next-generation programming platform - https://news.ycombinator.com/item?id=9512955 - May 2015 (128 comments)


It's crazy but it just might work!

Even if I never use this, it's incredibly refreshing to see a genuinely new approach to delivering applications. Some of this feels subtly like Smalltalk.

Some very interesting ideas here: https://www.unison.cloud/our-approach/


I feel the same. My initial reaction is that I like the vision, but I feel there is probably a compromise to build these mechanics into a library for many languages, rather than requiring the DC burden of picking up a new 'better cloud' centric language. Perhaps I just haven't developed an appreciation for language capabilities yet.


This is about the Unison language but I think it's relevant. I was checking out the FAQ for Unison and noticed this:

https://www.unison-lang.org/docs/usage-topics/general-faqs/#...

> Unison does not currently support a Foreign Function Interface, for invoking code written in other languages.

> Your programs can interact with the outside world via the `IO` ability, and this includes interaction via network sockets - so you can interact with code written in other languages if that code can expose a network interface, for example as a web service. We'd like to improve on this position in the future.

So far so good, I guess. They're trying to do something that makes sense in the future.

They then go on to outline how they're apparently going to expose specific parts of what you might want to use, like `GPU` as an ability/effect...? What do I do when I just want to execute some of my C code and don't want to jump through hoops to do so? I get that this maybe falls apart entirely because of the idea of content-addressable functions, but can't an FFI binding be the hash of its name and inputs and output or something?

Maybe I'm misunderstanding their plan with regards to FFI solutions. In the end I'd like to just have something like this with both static and dynamic libraries:

    ModuleName = foreign import c [libmodule.a, otherlib.so]
    ModuleName.OpaqueCoolType a = OpaqueCoolType a
    ModuleName.has : a -> OpaqueCoolType a -> Bool


Hi, one of the Unison creators here. We've held off working on FFI until the JIT compiler[1] is completed since FFI is closely connected to the runtime.

There are some interesting subtleties with FFI in a distributed programming language, so I'll ramble about that here in case it's interesting to you. :)

So, in Unison, all values are serializable, including functions and their dependencies. This is a key superpower that enables a lot of the neat stuff we do. As long as those functions are written in pure Unison, we can easily serialize them, deploy them on the fly, etc. But when we add an FFI, the story changes - the sender may have some C library in their environment, and that library may not exist at the recipient node. The two nodes could be different platforms, one Mac, one Linux, and the library may have been written specifically for Linux, say!

So when we add FFI, we will likely be doing it in a different way than most languages. Functions that use FFI will have this tracked in their type, using our effect system. You'll be able to use whatever C libraries you want in your local computations, but if you want to send those values around, you need to be sending them to a place that supports that same set of FFI effects, since the C library and its dependencies can't literally be deployed on the fly in the same way as pure Unison code. In our cloud platform, nodes are typed based on what effects they support and we'll probably add a way to create new node pools that have access to whatever C libraries you want.

In "regular" programming, we're not used to thinking about "the execution environment" as a thing that's represented explicitly within the program. Instead, there's an assumed execution environment (which includes the set of native libraries, etc) and you get runtime errors if you run a program and some of the assumed execution environment is missing (like a shared library, say). For the most part, people have been okay with this, but it's already somewhat of a problem for languages that target the browser and the backend. The language may be statically typed, but now the type system is not tracking some key information - namely, is this a function I can call here (if I'm expecting this code to compile to JS) or is it a function that can only be called from backend code? In a distributed setting with heterogeneous nodes with different capabilities, this problem is even more pronounced, which is why we track this information in the types and plan to do so once we add FFI.
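To make the idea of an explicit execution environment concrete, here is a toy Python sketch (purely illustrative; the `Node` class and capability names are invented for this example and have no relation to Unison's actual runtime): a node advertises the capabilities it supports, a computation declares what it requires, and a mismatch is rejected before the code runs rather than failing mid-execution on a missing shared library.

```python
# Toy model of capability-typed nodes; all names here are hypothetical.
class Node:
    def __init__(self, name, supported):
        self.name = name
        self.supported = set(supported)

    def run(self, required, fn, *args):
        # Reject the computation up front if this node lacks a capability,
        # instead of crashing mid-execution when a library is missing.
        missing = set(required) - self.supported
        if missing:
            raise TypeError(f"{self.name} lacks capabilities: {sorted(missing)}")
        return fn(*args)

linux_pool = Node("linux-pool", {"libxml2", "libssl"})
plain_pool = Node("plain-pool", set())

parse = str.upper  # stand-in for a function backed by a C library
print(linux_pool.run({"libxml2"}, parse, "ok"))  # prints "OK"
# plain_pool.run({"libxml2"}, parse, "ok")       # raises TypeError up front
```

The point of tracking this in types rather than at runtime is the same as the comment above: the mismatch becomes visible where the code is written, not where it happens to be deployed.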

Hope that was interesting!

[1]: https://www.unison-lang.org/blog/jit-announce/ is an early progress report, and I think we're finally shipping something in the next month!


> Hope that was interesting!

It's definitely interesting. I'm happy to hear that you'll definitely be supporting what we can currently do with less rigid languages and their FFI, but that we'll have to adapt to the bounds of the system and its rules to some extent.

Do you have any idea how you'd support `LoadLibrary`/`dlopen` in order to load/reload function pointers live? At some point I imagine a lot of this has to have a "trust me" kind of escape hatch.


Forgive me the heresy, but when I look at this example:

  helloWorld.deploy : '{IO, Exception} ()
  helloWorld.deploy = Cloud.main do
    h = deployHttp !Environment.default helloWorld
    ServiceName.assign
      (ServiceName.create "hello-world") h

Then how am I not just swapping YAML gobbledygook for Unison gobbledygook here..?

I understand the general idea of expanding the scope of programmable code onto the infrastructure layer, that makes total sense to me. But then, you're just shifting complexity from the developer to your cloud service, and hiding it behind a proprietary platform (which I don't mind, all the best for your business!). And I don't really understand how that will make things better, all things considered.

Could someone explain this to me?


Unison is a distributed programming language.

YAML config files and Hadoop/Kafka/Kubernetes/etc are additions for non-distributed programming languages. Unison is a grand simplification of everything into one system so that future programmers don't have to put up with our crap.


Hi there! There's definitely an overhead to learning a new language, but by describing your cloud infrastructure with an actual programming language you reap the benefits of type safety, testability, code reuse, etc.

You're spot on with shifting the complexity to the cloud layer; our thought was that it would free up time for dev teams to focus on other layers of their application.

Maybe I'm misinterpreting your question though, I'm happy to annotate that code snippet with a walk-through.


I guess what I'm wondering about is whether that really reduces overall complexity, or just externalises it. Despite all the pitfalls, the current "Cloud" ecosystem is comprised mainly of interoperable Open Source software. If we replace that stack with a proprietary blackbox… does that actually help the ecosystem as a whole? Or will we see similar offerings in other programming languages, until we're back to square one (in that you'll have to learn the ins and outs of every language's "cloud service")?

That's probably a bit too philosophical and nothing a company should have to worry about. Again, I think you're doing great work here, I'm just unsure whether this is the best solution for the overarching problem.


Unison user here - you might be right about having to learn the provider's ins and outs, the same way that if you're on AWS, you need to learn AWS lingo.

But with Unison Cloud, most of the complicated API is written as a library in Unison itself[0]. This `cloud` library is technically optional (and editable), and you can build and share your own abstractions as you like, so you don't have that hard rough edge between AWS APIs and your own code.

[0]: https://share.unison-lang.org/@unison/cloud/code/releases/9....


> you reap the benefits of type-safety, testability, and code reuse, etc.

If you treat infrastructure as a problem that has to be solved by code, obviously you'll run into code-specific issues like type safety. In the real world I've never root-caused a production issue back to that.


You've never run into issues where unclear, undocumented, misunderstood or inconsistent interfaces were a significant contributing factor?


> like type safety. In the real world I've never root caused a production issue back to that.

Your definition of type safety might not include validation + using the type to carry proof of that validation throughout the life of the program.

For examples see:

https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...


Type safety isn't a code-specific issue, it's a feature.

A YAML configuration file missing a required key, or a misspelling, or a disallowed mixing of parameters, are all things that can be solved by type-safety, rather than getting a deploy-time or run-time error.
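As a small illustration (plain Python with a hypothetical config shape, not any real deployment API): a typed constructor rejects a misspelled or missing key the moment the config object is built, where a raw YAML-derived dict would silently carry the mistake all the way to deploy time.

```python
from dataclasses import dataclass

# Hypothetical service config; any statically typed language gives the
# same effect at compile time rather than at object-construction time.
@dataclass(frozen=True)
class ServiceConfig:
    name: str
    replicas: int

good = {"name": "hello-world", "replicas": 3}
cfg = ServiceConfig(**good)  # fields verified present at construction

bad = {"name": "hello-world", "replcas": 3}  # misspelled key
try:
    ServiceConfig(**bad)  # rejected immediately, not at deploy time
except TypeError as err:
    print("caught:", err)
```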


Quick answer: In YAML I can’t define a reusable function, and sometimes I wish I could, for example.

Whatever gobbledygook you’re writing to specify how your system works, a programming language is better than a markup file.


I haven’t used the service or language, but I imagine if you’re familiar with it, the above snippet feels like TypeScript does to me.

From their written materials, it looks like the thing you’re missing is that this is a single source of truth.

For most systems, I change my code, build and push a container, and then have to update the YAML to make sure it addresses the right container, and then push that. Obviously a lot of us have that all automated, but I don’t know anyone for whom that automation is easy and never needs to be escaped.

Again, I’m not a user of the platform, but as a user of what they are innovating on top of I can see what they are going for.


Congrats!

I haven't tried unison, but it sounded like magic when I first heard about it here: https://www.youtube.com/watch?v=Adu75GJ0w1o (Worth a watch - Rúnar is awesome)


Here's some context for those who, like me, never heard about Unison before:

This service is apparently directed at developers using a programming language called Unison ― hence the name: Unison Cloud.

As I understand it, this is akin to launching a service called Python Cloud (by the PSF, as part of the language?) where Python developers can deploy their apps as a function call:

  # app.py

  class App:  
    def get(self, req):  
      name = req.GET.get("name")  
      return f"<p>hello {name}!</p>"

    def deploy(self):  
      from PyCloud import cloud  
      project = cloud.create_project(self)  
      return project.deploy()  

  # main.py

  from .app import App

  app = App()  
  url = app.deploy()


Yep, that’s right. Modal (https://modal.com) is actually what you’re describing with Python, but Modal is just the compute layer (Unison is going for the whole cloud), and Modal isn’t doing anything to handle version alignment across service boundaries


I'd never heard of them before, but what they offer is amazing -- even though I don't think I'll be using it anytime soon.


I know there's only so many names in the world but this is going to completely ruin searches for the Unison file synchronizer: https://github.com/bcpierce00/unison


I too thought it was some new-fangled cloud storage-sync platform based on Unison sync.


I also assumed this was related to that sync tool, and had trouble understanding the project because I kept thinking it was some cloud extension of it.


I'm old enough that I thought this was related to the old mainframe/UNIX company from the 90s!


I'm somewhere in the middle and remember it as a Usenet client for Mac OS from Panic:

https://blog.panic.com/the-future-of-unison/


A great tool I still use everyday.


I've been following Unison since almost the beginning (back in the structure editor days!). It's a very cool project, https://www.unison.cloud/our-approach/ is a great read, and the Unison language (especially their formulation of effect handlers as "abilities") is very cool.

There are two specific things here that make me reluctant to use Unison Cloud in my own work:

1. It doesn't look like there's any FFI or way to shell out to other tools within Unison Cloud. I understand that this is necessary to provide the desired static guarantees, but the lack of an escape hatch for using pre-existing code makes this a really hard sell.

2. Typed storage is excellent! What are its performance characteristics? In my experience, gaining expressiveness in storage systems often requires trading away performance because being able to store more kinds of values means we have fewer invariants to enable performance optimizations. How do migrations work? I've always found online migrations to be a major pain point, especially because data rapidly becomes very heavy. (At a glance, it looks like storage is key-value with some DIY indexing primitives, and I couldn't find anything about migration.)

The approach article asks "Why is it so complicated, anyway?". My guess would be that:

1. For small projects where you can toss together NextJS and SQLite and throw it onto Hetzner, it really _isn't_ that complicated.

2. For large projects with large amounts of data, high availability requirements, and very large scale, performance and operability matter a lot, and none of these all-in-one systems has yet demonstrated good performance and operability at scale.

3. There really is not that much demand for projects between these two sizes.


Nice, a structural editor throwback reference! :-) I'll speak to point 1! We aim to add FFI as a fast follow-on to the native compilation work that is underway. The work on the JIT compiler opens the door to FFI, so that's on the roadmap soon.


To me, the most interesting aspect is the Unison language, specifically how it does away with code-as-text, and instead uses a structured database, so one is dealing directly with the AST: https://www.unison-lang.org/docs/the-big-idea/

(It is not the only language to do so - see old.reddit.com/r/nosyntax)

I hope this catches on, because parsing strings of text is a monumental waste of complexity.


This, plus abilities[1] (what Unison calls algebraic effects), makes Unison such a joy to work with; it's a very pleasant experience. You can get straight to the business logic.

I also hope it catches on, but Unison right now really gives a pleasant experience.

[1]: https://www.unison-lang.org/docs/fundamentals/abilities/


"Unison Computing, a Delaware public benefit corp. Our mission: advance what is possible with software and work to make software creation more delightful and accessible to all. "

Never heard of a public benefit corp before, but in looking it up, seems like a cool thing. Wonder if Unison Cloud falls under that also...

"What is a Delaware public benefit corporation? A Delaware public benefit corporation (PBC) is a for-profit corporation intended to produce a public benefit and operate in a responsible and sustainable manner. A PBC must be managed in a way that balances the interests of the stockholders, the company’s key stakeholders, and a specific public benefit that the company commits to in its charter."

https://www.cooleygo.com/faq-delaware-public-benefit-corpora...


The short version is that a regular corporation is required to operate in a way that maximizes shareholder profits; whereas a public benefit corp is allowed to take other factors into consideration too.


I like what you're doing. I was heavily involved in building DigitalOcean and thought often about a lot of what you're working on. Good luck, I think you're doing great work. =)


This looks to me like some real innovation. I am looking forward to digging into this.

Combining a custom language with a custom cloud, seems to be something they have leveraged to make some great strides.


Yup, great for business, horrible for users - just get lock-in and keep paying. But great business idea.


Is Unison something I can easily self host? I wouldn't want to be vendor locked to Unison Cloud.


Hi, one of the Unison creators here. The Unison language is open source (MIT licensed) and there's an open source library ecosystem (see https://share.unison-lang.org/) like most languages. If you just want to run some Unison code on a VM then that's free and works like any other language. You can do this today (we do this ourselves for the implementation of Unison Cloud!). There's also a local single machine interpreter of the cloud API for easy local testing.

The "real" cloud platform providing the fancy distributed compute and storage fabric, deployment with a function call, etc, isn't open source - selling this product in various forms is how we are sustainable as a business.

If you're at a company and want to deploy "Unison Cloud in a box" on your own infra or in your own VPC then that's something that's doable and I'd love to talk more - feel free to email hello@unison.cloud.

If you're just an individual wanting to do cloud stuff at small scale on your own infra, that's probably harder for us to support right now. I'd recommend just using the free tier or starter tier of our public cloud. Even if we had some sort of free self-hosting option for cloud, there's economies of scale you'd miss out on so it could easily be more expensive anyway!

Not to mention, time is valuable https://www.unison-lang.org/blog/developer-productivity-real... We've built a nicely managed public cloud that eliminates huge swaths of tedious work. If you're happy to pay for that, then it's a good fit. If you prefer to self-host all the things, even for personal-use scale, then I totally understand that but Unison Cloud probably isn't the best fit right now.

Hope that helps!


I would even pay a few bucks (one time, not a subscription) to license this software for use. I would love to use Unison for my personal projects, but I'm not about to get locked-in to something, I'd rather build something myself that was worse than this than get locked in for personal projects. I know it has a free tier now, but in the future that may not exist.


The language is free to use.


Yes but what’s a language without a deployment strategy?


Since this announcement is about Unison Cloud, it might not be clear for people who aren't familiar with the Unison language that you can run Unison programs without Unison Cloud. So much like just about any other language you can put a Unison program in a Docker container, deploy it via AWS Lambda, etc. Unison Cloud is kind of an "easy mode" for scalable and distributed deployment with support for typed durable storage, the option to expose public HTTP/websocket endpoints, etc.

Here is an example of containerizing a Unison program: https://github.com/ceedubs/containerized-unison-program

And here is a library that makes it easy to create an AWS Lambda out of a Unison function: https://share.unison-lang.org/@gfinol/lambda-runtime


Not familiar with its deployment strategy. I assumed it could be built into a binary, but maybe not.


You can self host, yes, though you’ll probably be inventing a lot of wheels.


OK, maybe this is a silly question, but if a function that is written once always refers to the same hash as when it was written, how do you update functions?

For example, if I have function A that calls function B, in today's languages updating function B would just work (A will call updated function). My understanding is that updated function B (we can call it B'), would not be used.


Unison developer here! Let's say you have some term:

B = "Hello "

and it hashes to #a3yx. Let's say you also have the function:

A = B ++ "world" which hashes to #c7c1

when we store the A function, we actually store it as:

#c7c1 = #a3yx ++ "world"

Then later if you update the definition of B:

B = "Hello, "

and that term hashes to #5e2b. When you tell Unison to update B from #a3yx => #5e2b, it will look for all of the places that used to reference #a3yx, like your A function, and will see if those functions still typecheck with the replacement. If so, they are automatically updated. If not, we pretty-print the definitions that didn't typecheck to a file for you to manually fix.
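The mechanics above can be mimicked with a toy Python model (illustrative only: Unison hashes a normalized AST, not `repr` strings, and the tuple "AST" here is invented for the example). The caller stores a reference to the callee's hash, and an update mints a new hash rather than mutating the old definition.

```python
import hashlib

def term_hash(ast):
    # Toy content hash over a tuple-shaped "AST".
    return "#" + hashlib.sha256(repr(ast).encode()).hexdigest()[:4]

# B = "Hello "
b = ("lit", "Hello ")
b_hash = term_hash(b)

# A = B ++ "world" is stored with the reference to B replaced by B's hash
a = ("concat", ("ref", b_hash), ("lit", "world"))
a_hash = term_hash(a)

# Updating B produces a brand-new hash; nothing that referenced the old
# hash changes until the tooling rewrites those references.
b2 = ("lit", "Hello, ")
b2_hash = term_hash(b2)
assert b2_hash != b_hash

# "update" = rewrite A to point at the new hash, which yields a new A too
a2 = ("concat", ("ref", b2_hash), ("lit", "world"))
assert term_hash(a2) != a_hash
```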


This is a really interesting system, and I'm excited to give this a try!

Without knowing whether the following cases would actually be useful/relevant, I'm curious if these things apply to Unison:

- Is there a way to "pin" a symbol to a specific hash/version so it won't automatically update when the referred function gets a change? I.e. I could write: A = B@ ++ "world" and when I store and retrieve it, it becomes (example syntax): A = B@a3yx ++ "world"

- Is there a way to refer to a function/symbol by name rather than by hash? I.e. a call site which uses B by looking up which hash is currently associated with the name "B", such that if I do two simultaneous renames, B->D and C->B, the code would now refer to the function previously known as C?

- Are there ways in which the way function updates "bubble up" through the codebase (I assume updating a function means updating every function that calls it, recursively) could become a problem? It would seem that changing one little function could have a knock-on effect that requires re-hashing most of the codebase.


What happens if you have two terms that are incidentally equivalent:

  A = "Hello "
  B = "Hello "
  C = A ++ "world"
  D = B ++ "world"
and then you update the definition of A but not B?


At the point when you've written a term which is incidentally equivalent to another, the Unison codebase manager tool tells you that you're adding a term that is identical and lists its name. You can still technically perform the addition if you really want at that point, but most folks don't want two aliases for the same function floating around. If you do end up adding it, updating A would also update B. Think of the function name as metadata and the actual implementation as the identity of the function.


Is the term merely the hash of its contents, or does it also include the module space? If it's just the hash of its contents, how do you deal with functions which have the same implementation now but shouldn't always - e. g.:

    serviceA.requiredHeaders key = Dictionary.of "X-API-KEY" key

    serviceB.apiKeyHeader apiKey = Dictionary.of "X-API-KEY" apiKey

If they hash to the same thing and I update `serviceA.requiredHeaders` (because the vendor changed from `X-API-KEY` to `X-VENDOR-API-KEY`) do I have to know that these are two different services in order to review the change and untangle these two methods or is there a way to mark these as "structurally equivalent but semantically distinct"?


Yes, this is unfortunately a problem that comes up with our current system. We had to solve this early on by distinguishing "unique" types from "structural" types. For types it is obvious that you want to make sure that these types generate a unique hash, even if they have the same structure:

type UserName = UserName Text
type Password = Password Text

since the entire point in introducing types here is to actually declare them as different from one another.

But for other it might actually be beneficial to recognize that they are the same type, for example:

type Optional a = None | Some a
type Maybe a = Nothing | Just a

To allow for both, you can prefix a type with either "structural" or "unique" depending on which behavior you want (unique is the default). We have tossed around the idea of also introducing unique terms, which would let you mark terms like yours as unique terms that should be salted, and let the current "structural" behavior of terms be the default. The reality is that this hasn't been such a big problem that it has made it to the top of our list yet ;)
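The structural/unique distinction can be sketched in a few lines of Python (a toy model, not Unison's actual hashing scheme): a structural hash depends only on the definition's shape, while a unique hash is salted with a GUID minted at definition time, so two definitions with the same shape still get distinct identities.

```python
import hashlib
import uuid

def structural_hash(defn):
    # Identity is the shape alone: identical definitions collide on purpose.
    return hashlib.sha256(defn.encode()).hexdigest()[:8]

def unique_hash(defn):
    # Salt with a GUID minted at definition time, so two definitions with
    # the same shape still get distinct identities.
    salt = uuid.uuid4().hex
    return hashlib.sha256((salt + defn).encode()).hexdigest()[:8]

shape = "X Text"
assert structural_hash(shape) == structural_hash(shape)  # same identity
assert unique_hash(shape) != unique_hash(shape)          # distinct each time
```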


(Sorry for replying late to this but) that seems kind of inconvenient! And what if I'm using two libraries which happen to export functions with identical ASTs, and one updates? I guess usually meaningful nominal types will ensure that that doesn't happen but it seems like a nightmare to deal with in the event that it does.


For us, A and B are the same term since they have the same hash; they are just two different aliases for that hash, so if you update either, you are effectively updating both.

In fact if you had the function:

double x = x + x

and you were to rewrite it to be:

double addend = addend + addend

Unison would say "this is the same function, no update needed" as it produces the same AST
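That renaming behavior can be mimicked with a tiny normalizer (toy Python over an invented tuple AST, not Unison's real representation): bound variable names are replaced by binder positions before hashing, so `double x = x + x` and `double addend = addend + addend` normalize to the same term.

```python
import hashlib

# Toy AST nodes: ("lam", param, body), ("var", name), ("add", left, right)
def normalize(node, env=()):
    """Replace bound variable names with positional indices (de Bruijn
    style) so alpha-equivalent functions normalize identically. This toy
    doesn't handle shadowing."""
    kind = node[0]
    if kind == "lam":
        _, param, body = node
        return ("lam", normalize(body, env + (param,)))
    if kind == "var":
        return ("var", len(env) - 1 - env.index(node[1]))
    if kind == "add":
        return ("add", normalize(node[1], env), normalize(node[2], env))
    return node  # literals etc. pass through unchanged

def term_hash(node):
    return "#" + hashlib.sha256(repr(normalize(node)).encode()).hexdigest()[:8]

double_x = ("lam", "x", ("add", ("var", "x"), ("var", "x")))
double_a = ("lam", "addend", ("add", ("var", "addend"), ("var", "addend")))
assert term_hash(double_x) == term_hash(double_a)  # same function, same hash
```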


That's a great question! (I work for Unison, full disclosure.)

The process for upgrading Unison code is that if you have a function A that calls B, and you update B in some way, as long as the change is type-preserving it will automatically be propagated to all the sites in your project where B is called. If the change is not type-preserving (for example, if you added a parameter) the tooling itself will direct you to resolve all the places where B is applied. So as you change code locally, you're continually keeping your codebase in sync; A will always be calling the updated function.

Here's an example from our docs: https://www.unison-lang.org/docs/usage-topics/workflow-how-t...


You update all the callers of B that you can see from your codebase. If `B` is a library then you publish an out-of-band notice that `B` is deprecated and they should switch to `B'`. It's the most extreme form of static linking you can think of (which is really nice for stability, not nice for speed-to-deploy-a-change).


Seems like half the unison team is here in this thread. Congrats, looking forward to try it out!


"Unison Cloud is now generally available"

https://twitter.com/unisonweb/status/1755266140924784738


Does anyone know which parts are open source and which parts are proprietary? Discarding the obvious hosting service.

For example: is the storage layer, mentioned in the landing page, open source?

I'm tempted to use it, even as a hosting service.


The language and all the libraries are open source. The storage layer API is open-source as well though the implementation is proprietary, as is the implementation of other parts of Unison Cloud (like the part that actually hosts your Unison code on AWS).


That's very refreshing, and there must be a lot of work behind the scenes to properly update services and their dependencies without outages. I'm also curious about how database migrations are dealt with.

I'll keep following that space, as they have well identified the problem and its root causes. The CNCF is happy about its "ever-growing global community", reaching 173 projects and over 220,000 contributors. Good for them.


I was hoping this was like rsync.net, but for unison:

https://github.com/bcpierce00/unison

Trademarks are hard.


Unison looks like a neat functional language. Do abilities subsume the role of type classes also (without any "effects")? If so, would there be a loss of efficiency for using pure abilities in this way (since a "handler" needs to interpret the ops)? Sorry I've only had a brief look so I'm not sure this makes sense.

Edit: Shorter: do you have type classes


Short version: no type classes (yet)

Longer version:

Building upon what Quekid5 mentioned, Unison abilities are an implementation of what is referred to as algebraic effects in programming language literature. They represent capabilities like IO, state, exceptions, etc. They aren't really a replacement for type classes, though in some cases you can shoehorn abilities in where you might otherwise use a type class.

For someone coming from a Haskell background, I think that abilities are closer to a replacement for monad transformers. But in my opinion they are much more ergonomic.

Discussion of type classes comes up a lot. Here is a long-standing GitHub issue: https://github.com/unisonweb/unison/issues/502

For what it's worth, I've written Unison quite a lot over the past few years and while I've missed type classes at times, I think that reading unfamiliar code is easier without them. There's no implicit magic; you can see exactly what is being passed into a function. So far I've been happy with a bit more verbosity for the sake of readability.


Abilities are typed effects, effectively. At least, that's how I understand it.

(In a less jokey way: Abilities should be thought of as capabilities, and capabilities are what enable you to do I/O, read from a DB, etc. So they are 'effects' in the Pure FP sense.)

Not sure about the type classes thing.


Unison doesn't have type classes yet, no


Apologies if this is a dumb question, and maybe it applies to other services I may already use, but:

What if Unison language and the Cloud have a vulnerability. Would you suddenly have a large network of services that could be readily exploited? This sort of "infrastructure monoculture" situation gives me the heebie-jeebies, but I'm admittedly pretty dumb when it comes to security.

I imagine that with my Google Cloud Run instance running who-knows-what language on maybe some framework, the opportunities to zero in on an exploit are fewer than when an attacker can know exactly what's running, what it's running on, and perhaps even have the ability to enumerate to find instances off of pro plans since they may often use auto-generated names from hyphen-separated common words. If the sandboxing of instances is poor, it seems like it could be a huge problem.

Again, I have no idea what I'm talking about. Mostly curious to learn more, not criticize.


This problem exists with all shared code. Shared code, shared fate.

You hope that with more eyes on a library, it'll be more secure. It may also mean vulnerabilities have a higher blast radius. You have to choose where you want to lie on the balance.

Spectre and Log4J - problems on widely deployed tech (Intel CPUs, Java services).


Very good point at the end; the lower level you go, the broader (and likely more severe) your vulnerabilities will be. I suppose in this case my concerns aren't warranted, or at least not worth placing before more immediate and obvious concerns. i.e., I could write shitty code a lot more easily than Unison could compromise their language and infrastructure.


Whoa, I just started building something similar for scrapscript!

Curious: Who is the target demographic for this service? Hobbyists?

I'm also interested in what benefits unison touts specifically. Is the only reason to choose unison over Cloudflare's workers the conveniences of functional typing and other ergonomics?


This may be more of a comp. sci. answer than you're looking for, but the thing I find most unique is the content-addressed functional language underlying it. Content-addressed meaning that definitions can be identified with "a hash of the AST", which is used as the building block for distributed programming: https://www.unison-lang.org/docs/the-big-idea/

I see the unison.cloud service itself as more of a first large-scale demo of what you can build on that idea (which will hopefully also fund it). But the underlying language is open source and I think could have legs in a bunch of possible applications.


Thanks! Yeah, my language is content-addressable too, which is why I'm interested in how it's being used in the wild.

[1] https://scrapscript.org

I'm excited to see how it progresses beyond "first large-scale demo"!


In a weird way, it's what JDSL wanted to be https://thedailywtf.com/articles/the-inner-json-effect


I need a full liter of unsee juice to recover after reading this.


Joe Armstrong can finally be happy :D


Ha, indeed. For context[0]. That's stuck with me ever since I read it, and I immediately saw that Unison kind of ran with that idea, when I came across it many years ago. I've always been interested and following on the sidelines, but haven't tried it out yet. Maybe now is the time.

[0] https://erlang.org/pipermail/erlang-questions/2011-May/05876...


On their website, they explain that they attempt to lift the idea of a programming language describing computation within a single process to computation across services, made of different processes, connected e.g. via the internet. Which sounds incredibly ambitious, but might be where we're headed.


Unison is an actual language with many cutting edge features.

I'm fairly sure Unison was created for a mix of practical and theoretical concerns, especially the intersection thereof.

My gut tells me Cloudflare workers are some type of low-code solution.


> My gut tells me Cloudflare workers are some type of low-code solution.

Cloudflare Workers hosts servers written in JavaScript / Wasm.


I wonder if Unison Cloud is built with the Unison language, that would prove language maturity in my eyes.


Yes, it is! Of course there's other technologies involved but the core services for our compute fabric and storage layer are pure Unison[1]. A number of open source libraries in the Unison ecosystem were developed via us eating our own dogfood while developing Unison Cloud. Same with evolution of the language and tooling, which we've continued improving.

We have a few things still in Haskell which we'll probably move into Unison eventually.

[1]: Just to clarify, our storage layer wraps DynamoDB in an interesting way to provide the transactional API we wanted - we didn't literally implement our own cloud database on top of just the file system and some VMs. :)


Distributed function calls aren't that simple. You will soon have various issues like authentication, authorization, compatibility between versions, throttling, retry (e.g. transient errors), and so on :-/ The list is so long we could probably write a book about them.


Many of those are already handled.


Or like a whole language to handle them


Ah I thought this was related to the trusty old Unison two-way file synchronisation tool and my heart skipped a beat. https://github.com/bcpierce00/unison/


I tried it a couple of times, but the showstopper for me was the hosting model. I think the premise of the language and platform has so many amazing aspects that are very exciting to me, but I had a hard time imagining pitching this, together with Unison Cloud, to my team. All the components together at this moment in time make for a very big investment into an ecosystem.

I hope some smart developers take these learnings and try to integrate them with existing languages like Clojure.

Still going to follow what unison is doing because it is so exciting


I don't "get" Unison the language, could someone explain how it's different than other languages? Is it something like tRPC combined with serverless, but at a language level?


I'll give it a shot; I think it's helpful to separate the Unison programming language from the Unison Cloud platform, as they're distinct things even though the features of one (the language) enable and are integrated with the other (the Cloud, which operationalizes the language for web apps and other cloud compute jobs).

Unison's core difference is that your code is not stored as regular text files; instead your functions and types are stored in a database, keyed by a hash of their AST. This enables a nice dependency management workflow and makes things like renaming functions trivial. The thing to remember is that your functions are programmatically tracked. Based on that core difference we built a platform that can deploy those arbitrary hashes (and all their dependencies) to different locations in a cluster, and we created a Unison library so that folks can describe how their code should be shipped across cloud computing resources. So you have Unison code describing and orchestrating Unison services.


Tbh this is not the worst idea I've seen. I hope you can make it, because I would have a hard time justifying such an experimental platform and idea as a foundation for a company.


I feel like my alter ego wrote this same comment a year ago, but if there's any Nix users that are also Unison users, I'd love to hear your thoughts.

This feels like some of the tough-but-oh-so-good nature of Nix, applied to general programming. But I haven't had a chance to try Unison much.


I use both Unison and Nix (and I work on Unison Cloud).

The "oh-so-good" aspect that comes from content-addressed dependencies is definitely there. I've spent a lot of time debugging runtime issues on the JVM because two libraries that I depend on disagree on what version of a common dependency should be on my classpath. This is not something you ever experience with Unison. In the runtime every term and type are identified by their hashes, so there's no (realistic) way that names can collide.

Otherwise, Unison and Nix feel pretty different to me. Nix is generally a build-time language for arbitrary runtimes, while Unison is a general purpose language for a specific runtime.

Nix takes on the really ambitious goal of wrangling ancient projects built with Makefiles and ambient environments into deterministic builds. Through the heroic effort of derivation authors, they've managed to make it work. But it requires those maintainers to do lots of careful manual tracking of dependencies, pre-build source patches, overriding build steps, etc.

Unison takes a much more constrained approach: if we start with a language that is content-addressed at its core and keep running with this idea, where do we end up? One nice outcome of this is that you never need to manually track dependency versions, hashes, etc; the language does that for you.

The "tough" part is also there, but feels different. To me the Nix expression language is straightforward, but I find it difficult to wrap my head around nontrivial derivations. To answer questions like "what attributes and build steps can/should I override" I feel like I have to dig through the layers of the implementation. In Unison a powerful static type system and UIs (both local and Unison Share) that support clicking through to any term/type make it easier for me to digest code. The "tough" parts of Unison generally stem from the young ecosystem: fewer existing libraries, a codebase manager that is under active development and not nearly as stable as git, etc.

If nothing else Unison is worth a try just because it is so different than most other languages/ecosystems.

PS if you are interested in using Nix to install the Unison codebase manager or to package a program written in Unison, these repos might be useful (disclaimer: I'm ceedubs):

https://github.com/ceedubs/unison-nix/ https://github.com/ceedubs/unison-nix-snake/


Don't have much to say other than I really love and appreciate this answer. This confirms my suspicions that I'm going to like Unison a lot. I get your point, but embracing the "try to imagine a better world, even if it does mean repackaging every piece of Linux software" part of Nix makes it easy for me to see that while Unison is different, the re-imagining gets you things you didn't even know you wanted.


What is the main advantage of this over the classic big cloud function offerings?

Sounds like developer experience possibly?


The developer experience would be one of the main advantages. Unison Cloud was custom built to run Unison's language without extra steps like building packages, syncing dependencies across nodes, etc. Also, interactions with other cloud platforms typically aren't described in a programming language which is shared between the infrastructure management layer and the application layer. The drawback is that the Cloud runs the Unison programming language specifically.


Is this 21st-century COBOL? A language with an open spec, but actually built to run in a very closed environment via an interactive tool, with no source control?


I'm excited about Unison as a language so I'm glad to see something that will hopefully give it some additional forward momentum


Exciting! I was wondering when this would be released given the website said (until a few days ago it seems) it would be ready in Dec 2023


Unison friends: Is there an rclone handler/recipe extant, or in the works, that would facilitate inter-cloud data transfer?

Asking for a friend ...


There is not, at present, an rclone library, but this would be a welcome addition to our open source cloud libraries. Perhaps your friend would like to write such a utility! :-)


Pretty cool stuff. Anyone ever make a babel plugin that would check for identical function definitions?


I've been wondering the same! Haven't really had the time to dig into stable content addressing (and I assume the loose semantics of something like JavaScript would make that exceedingly hard).


Maybe? At the AST level it might be complicated, I guess, but not prohibitively so. At the runtime/JIT level though… yeah, sure. The various expressions of the same AST are bountiful.

But I would love to run an analysis on every npm module published, and find the same AST subexpressions, functions, etc. Do the same thing: remove the identifiers and hash the AST parts. Even go back and see how people named the same function in different ways!


Has anyone used Unison instead of Spark? Seems like a natural fit to me.


We recently did a series of blog posts exploring how our remote programming model makes Unison a good fit for writing distributed, map-reduce-like programs.

https://www.unison-lang.org/articles/distributed-datasets/

One of the real strengths is that the individual parts of your program are content-addressed, and that our ability system lets us track side effects. Together these enable a programming model where you only need to talk abstractly about where your data is and how you'd like to operate on it, and our cluster can gossip about which parts of your program need to be shipped to where parts of your data live. One node can ask another node "please apply this function to the data you have," and the other node can gossip to get any missing definitions it needs for that function.

You never have to talk about serialization or network connections or software distribution; we know how to move data for you, move code for you, and in some cases even cache partial program results.
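As a rough illustration of that "one node asks another to apply a function, fetching missing code by hash" idea, here's a toy Python sketch. It is not Unison's actual protocol; the Node class, marshal-based code shipping, and in-memory store are all invented for the example:

```python
import hashlib
import marshal
import types

class Node:
    """Toy model: each node holds a data partition plus a
    content-addressed store mapping hash -> serialized code."""
    def __init__(self, data):
        self.data = data   # this node's partition of the dataset
        self.store = {}    # content hash -> serialized code object

    def publish(self, fn):
        # Serialize the function's code object and key it by its hash.
        blob = marshal.dumps(fn.__code__)
        h = hashlib.sha256(blob).hexdigest()
        self.store[h] = blob
        return h

    def apply_remote(self, h, peer):
        # "Please apply the function with hash h to your data."
        # If this node is missing the definition, fetch it from the peer.
        if h not in self.store:
            self.store[h] = peer.store[h]
        fn = types.FunctionType(marshal.loads(self.store[h]), {})
        return [fn(x) for x in self.data]

coordinator = Node([])
worker = Node([1, 2, 3])
h = coordinator.publish(lambda x: x * x)
print(worker.apply_remote(h, coordinator))  # [1, 4, 9]
```

The point is that the caller never names a file, package, or wire format; the hash identifies exactly the code needed, and a missing definition is just another thing to fetch.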


We wrote an article speaking to this use case here: https://www.unison-lang.org/articles/distributed-datasets/ It's a bit of a deep dive into some of the building blocks of the Remote ecosystem - so it talks quite a bit about how you'd implement something like Spark itself in Unison, but you can see how running data aggregations would work on the Cloud.


Looks really cool. I hope they wrap their language with a variant of Python or TypeScript or something, as I think a lot of programmers coming from those languages would embrace this type of deployment model.

I'm just not sure they'd want to learn Haskell-style syntax.


Hi, one of the Unison creators here. We've talked about adding pluggable syntax[1]. It's in principle straightforward (the code is already stored in a database as its abstract syntax tree, not text) and I imagine a future version of Unison could let you pick from a variety of syntaxes. But we haven't gotten to it yet.

[1] https://github.com/unisonweb/unison/issues/499

... that said, the language semantics and libraries are still going to be different, so even if we have a python-ish or typescript-y syntax, there'll still be new things to learn. :)
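To illustrate why pluggable syntax is straightforward once code is stored as an AST, here's a toy Python sketch (purely illustrative, not Unison's representation): one stored tree, two interchangeable pretty-printers.

```python
# One stored AST for "a function that adds 1 to its argument".
increment = ("lam", "x", ("add", ("var", "x"), ("lit", 1)))

def render(node, style):
    """Render the same AST in a Haskell-ish or Python-ish surface syntax."""
    tag = node[0]
    if tag == "lam":
        _, arg, body = node
        head = f"{arg} -> " if style == "haskellish" else f"lambda {arg}: "
        return head + render(body, style)
    if tag == "add":
        return f"{render(node[1], style)} + {render(node[2], style)}"
    if tag == "var":
        return node[1]
    if tag == "lit":
        return str(node[1])

print(render(increment, "haskellish"))  # x -> x + 1
print(render(increment, "pythonish"))   # lambda x: x + 1
```

Since the database stores only the tree, surface syntax becomes a per-reader rendering choice rather than a property of the code itself.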


I think you guys are a self-selected bunch of very smart people, also unafraid to break ground and try something new - so you may underestimate how much a familiar syntax means to us plebs.

For instance, Erlang always intrigued me, but Elixir (familiar syntax but with Erlang semantics for those who don't know) was what made me really consider the Erlang ecosystem.


Thanks for this feedback. We will get there!


I was thinking the UK Union unison when I saw the title


Can Unison code be stored in a Git repo as text?


We used to support Git-backed codebase hosting a while ago, before we launched our own remote hosting platform https://share.unison-lang.org/, and it had several downsides.

1. Unison terms are stored as hashes, so checking in a binary file wasn't very ergonomic and didn't really enable much in terms of collaboration. If we store our code as text on the file system, we have less information than what's tracked in the Unison tooling, since the plain-text version isn't aware of its dependencies.

2. Unison's versioning system is more syntactically aware than Git's, since its granularity is based on the definitions of your functions and types, not incidental changes like whitespace or newlines.

You can, of course, bring all the Unison code for a program into a text file (you write Unison code in your regular editor) and then check that in, but it's not as nice a workflow as the one that's supported directly.


It might be tough for Unison to be an island outside of GitHub, although I see the benefits.


Totally fair, there are definitely tradeoffs.


I strongly dislike marketing material that tries to normalize gross incompetency around complexity problems that don't actually exist.


If you gave examples of what you meant, others might be able to understand and discuss what you mean.


I have been involved in so many projects where AWS, Google Cloud, OpenStack, or Kubernetes (on metal) was used. I'd have to think a bit hard on how to articulate this well, but I see the page linked on this HN thread and it just screams "utter horse shit" to me.

Someone who finds AWS so hard to use is either not reading documentation or not comprehending it and probably shouldn't be coding in the first place.


I’ve been waiting for this since the first talk about Unison. Love the huge progress you have made and congrats to the public release of Unison Cloud!

Question: Is it possible to set the region of the database/storage? Or is there any timeframe when this can be configured? For GDPR reasons I cannot use hosting that doesn’t support storage in the EU.


winglang is a lot more interesting.


Looks like just another serverless offering



In that they both start with 'u', yes.


...no



