
It definitely is ambitious! A multi-year effort.

This post https://www.unison.cloud/our-approach/ talks more about why such radical changes were necessary to achieve what we wanted. (In particular check out the "3 requirements of the dream" section, which walks through what the programming language needs to support to be able to do things like "deploy with a function call.")

My general take on "when and where to innovate" is: if you can get a 10x or more improvement in some important dimension by doing things differently, it can absolutely be worth it. This is the philosophy we've applied in developing Unison over the years. I am generally happy to learn something new if I know that I'll be getting something substantial out of it. Of course it can be hard to tell from the outside if the benefits really are worth the changes. I'm not sure what to say about that, other than try it out with something low risk and decide for yourself.

Besides the distributed programming / cloud stuff, I'll give a couple other examples where we gain advantages by doing things differently: by storing Unison code in a database, keyed by the hash of that code, we gain a perfect incremental compilation cache which is shared among all developers of a project. This is an absolutely WILD feature, but it's fantastic and hard to go back once you've experienced it. I am basically never waiting around for my code to compile - once code has been parsed and typechecked once, by anyone, it's not touched again until it's changed. This has saved me countless hours compared to other static languages. And I don't have to give up static typing to get this.
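To make "keyed by the hash" concrete: the hash is computed over the syntax tree with variable names normalized away, so names are just metadata and alpha-equivalent definitions are literally the same cached object. For example:

    -- These two definitions are structurally identical once variable
    -- names are normalized, so they get the same hash: the second is
    -- effectively another name for an already-compiled definition.
    increment : Nat -> Nat
    increment n = n + 1

    addOne : Nat -> Nat
    addOne x = x + 1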

This sort of content-addressed caching also plays out for testing - for pure tests (which are deterministic), Unison has a test result cache keyed by the hash of the test code. This also saves countless hours - imagine never needing to rerun the same tests over and over when nothing's changed! (And having certainty that the cache invalidation is perfect so you don't need to do a "clean build just to be sure")
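Concretely, a pure test is just a `test>` watch expression in a scratch file, and UCM reports a cached result rather than rerunning it when the hash is unchanged:

    isEven : Nat -> Boolean
    isEven n = Nat.mod n 2 == 0

    -- The result of this watch expression is cached by its hash; it
    -- only reruns if isEven or the test body itself changes.
    test> isEven.tests.ex1 = check (isEven 4)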

Also replied here re: self-hosting https://news.ycombinator.com/item?id=39293568




Not trying to pour cold water, but the "3 requirements" post seems to address straw man problems. There are existing solutions to each problem.

1. "Deployment should be like calling a function" isn't that the mantra of serverless? e.g. GCP Cloud Run or AWS Lambda? This is also becoming much more streamlined with server-side WASM e.g. wasmCloud.

2. "Calling services should be easy" this is what protobuf is for; cross-language client libraries that handle transport, de-/serialization, native typing, etc.

3. "typed storage" isn't this basically an ORM? I suppose it's more general since it doesn't have to be relational, but ORM ideas could just as easily be adapted to JSON blob stores using something like protobuf.

Also, storing Unison code in a database, keyed by the hash of that code, sounds a lot like using Bazel with a shared remote cache.

I'm not saying Unison isn't cool, but to win me over I'd need you to compare Unison to all these existing technologies and really spell out what differentiates Unison and what makes it better.


For me, besides those 3, it's also "what happens if Unison fails to attract the funding it needs and shuts down next month? Do I get fucked by the proprietary solution that was made a critical part of my own business?"


You sure are doom and gloom about something that was released today


Well, the key difference is that using all those things together very quickly ensnares you in a big pile of goo. With Unison you can forgo all of that and just write functions, without having to build them into Wasm or any other format with any kind of build tool. You get typed data storage without running a DB. There is no "deployment" step whatsoever.


Until you screw yourself with vendor lock in on a proprietary language.

It's at least a pile of goo that you can take to other providers or host yourself.


The language is open source. See this reply re: self-hosting https://news.ycombinator.com/item?id=39293568


Proprietary as in: you write your code for Unison Cloud, and you have to rewrite the infra parts if you decide to self-host.

This is why infra is decoupled from code and you need things like "deployments".


You can't self-host Unison?


For 1 and 2, it's far from that. These are not first-class supported things in programming languages and can't be hidden well behind libraries. Maybe an embedded DSL could do it in a language that supports them well, à la Electric Clojure...


No worries!

It is true that tech exists that tries to make all 3 of those items easier. YMMV, but having used these technologies myself and now having used Unison + Unison Cloud, all I can say is that the Unison experience is quite different overall.

The details matter. A bicycle and a motorcycle share some common principles, but that doesn't mean they're "about the same". The fine details of execution and polish can matter too: Slack is different than IRC, Dropbox was different than the million other backup services. And bringing a number of things together in a cohesive way can lead to big improvements in the experience when it's done well.

Getting into specifics a bit, I don't think deployment with a function call is well handled by existing technologies, because of the reasons discussed in the post. In the absence of Unison's features, there is inevitably some sort of out-of-band packaging step or "setting up the environment with the right dependencies" as a precondition, instead of calling a function and having it Just Work.
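To give a flavor (a sketch from memory of the Unison Cloud docs; treat `Cloud.main`, `deployHttp`, and `Environment.default` as approximate names rather than the definitive API):

    -- Sketch only; the Cloud API names here are approximate.
    hello : HttpRequest ->{Exception} HttpResponse
    hello request = todo "your handler here"

    -- "Deployment" is itself just a function you call. There's no image
    -- to build and no environment to set up first: the code's hash pins
    -- its exact dependencies, so they travel with it.
    deployHello : '{IO, Exception} ()
    deployHello _ = Cloud.main do
      _ = deployHttp !Environment.default hello
      ()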

Re: RPC, Unison remote calls can pass around arbitrary values, including functions and values containing functions. This Just Works. There's also no generated code which needs to be somehow integrated into your build and no boilerplate converting from the "wire format" objects to your actual domain objects you want to work with.
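A sketch of the shape of it (the distributed library's actual combinator names may differ; `forkAt`, `await`, and `someLocation` are illustrative stand-ins):

    -- Illustrative stand-ins, not the exact distributed API: the point
    -- is that f, an arbitrary function, rides along in the remote call
    -- with no IDL, no codegen, and no wire-format mapping layer.
    runElsewhere : (Nat -> Nat) -> Nat ->{Remote} Nat
    runElsewhere f n =
      task = forkAt someLocation do f n
      await task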

My experience with ORMs is they are overly opinionated / magical and provide insufficient control for many projects. So every project I've worked on ends up not using ORMs in favor of a layer of boilerplate for talking to the storage layer... which still can't store functions or even sum types properly! And it's not typechecked, either. Using our cloud's storage, I get to use whatever data structures I want, can write my own easily if needed, and I can store any value at all including functions and again it Just Works. And access is typechecked. It's pretty great!
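For a taste (again a sketch; `Table`, `Table.write`, and the `Storage` ability are placeholder names, not necessarily the exact unison.cloud API):

    -- Placeholder names; the point is that arbitrary Unison values,
    -- sum types included (and functions too), go straight into typed
    -- storage, and access is typechecked end to end.
    type JobState = Queued | Running Nat | Done Nat

    saveState : Table Text JobState -> Text -> JobState ->{Storage, Exception} ()
    saveState table id state = Table.write table id state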

When you put all these things together in a single cohesive programming environment, with a common type system and language, uniform composition, a set of tools all meant to work well together, you really start to see how different it is! It already feels like a huge step up, and will only keep getting better and better as we build out Unison and our cloud platform.

All that said, I'm kind of doubtful that abstract arguments like this will be convincing. Instead, I'd just try Unison out for a low-risk project and decide for yourself if the details are making a big difference for you.

Hope that is helpful! :) If you do decide to play around with it, feel free to come by the Discord https://unison-lang.org/discord to get help, ask silly questions, etc. We are here to help and it's a nice community.


It is so funny to defend serverless, with all its crappy configuration, slow dev cycles, and vendor lock-in.


I think this sums it up: "a lot of the work you end up doing is not programming."

Programmers will happily hire a lawyer or a receptionist, but will code themselves into a fury and invent programming languages to avoid admitting they suck at ops and should hire someone.

Let's just call it what it is: the cloud is ego-driven outsourcing. Nobody wants to admit they need an ops person, so they just pay for 1 millionth of an ops person every time someone visits their website.


You are right, but wouldn't it be lovely to have a programming language to reduce our reliance on lawyers? (e.g., some logic language in a civil law system)


Most programs are written in a context where hiring someone is not an option.


So as an end user it's kind of like a more cohesive version of https://deno.com/ for infra, where you buy into a runtime that comes prepackaged with DBs (k/v stores), scheduling, and deploy stuff?

> by storing Unison code in a database, keyed by the hash of that code, we gain a perfect incremental compilation cache which is shared among all developers of a project. This is an absolutely WILD feature, but it's fantastic and hard to go back once you've experienced it. I am basically never waiting around for my code to compile - once code has been parsed and typechecked once, by anyone, it's not touched again until it's changed.

Interesting. What's it like upgrading and managing dependencies in that code? I'd assume it gets more complex when it's not just the Unison system but 3rd-party plugins (stuff interacting with the OS or other libs).


Yes, I think Deno's a decent analogue for what we're doing, though the Unison language provides some additional superpowers that we find essential. The https://www.unison.cloud/our-approach/ post has more details on why the language "needs" to change to get certain benefits. (This is not a knock against Deno, btw, I think it's an awesome project!)

> Interesting. What's it like upgrading and managing dependencies in that code? I'd assume it gets more complex when it's not just the Unison system but 3rd-party plugins (stuff interacting with the OS or other libs).

In Unison, there's an all-in-one tool we call the Unison Codebase Manager (UCM) which can typecheck and run your code and talk to the code database (we use SQLite for this). The workflow is that you have your text editor / VS Code open, and UCM in another terminal, watching for changes.

So if you want to edit a definition, say, here's the workflow:

1. `edit blah` brings code into a scratch file, pretty-printed. You make your changes and get that compiling.

2. You type `update` in UCM, and it tries to propagate this change throughout your project. If it can, you're done. If it can't (say because you've changed a type signature), UCM puts the minimum set of definitions in your scratch file. You get this compiling, then do `update` again and you're done. It's quite nice! The scratch files are very ephemeral and not the source of truth.

For library dependency upgrades the process is similar: you fetch the new version, then use `upgrade` to say "I want my project to exclusively use the new version". If everything's compatible, you're done. If there are incompatible changes, UCM creates a scratch file with the minimum set of things to get compiling. (Sketch of a session below.)
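Roughly, a session looks like this (prompt and output paraphrased; `blah` and `base_old`/`base_new` stand in for a real definition name and the versioned names under `lib`):

    myproject/main> edit blah
      -- blah lands pretty-printed in scratch.u; make your changes there
    myproject/main> update
      -- typechecks the edit and propagates it to everything downstream
    myproject/main> upgrade base_old base_new
      -- moves the whole project onto the new library version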

One interesting benefit is you can have multiple versions of the same library in use in your project. Unison doesn't care if you do this (though it can get confusing so people tend to consolidate). But there are cases where we've made good use of the ability to reference multiple "incompatible" library versions within a project.


> by storing Unison code in a database, keyed by the hash of that code, we gain a perfect incremental compilation cache which is shared among all developers of a project. This is an absolutely WILD feature, but it's fantastic and hard to go back once you've experienced it. I am basically never waiting around for my code to compile - once code has been parsed and typechecked once, by anyone, it's not touched again until it's changed.

So… ccache?


Absolutely, but speaking as someone who has tried to get ccache to work properly in Azure Pipelines...

I mean, ccache worked. But it wasn't exactly faster. I'll have to try again with a permanent memcached. Also, it's fiddly with paths: the absolute paths have to be the same, so if you run more than one build agent on a machine, those agents aren't going to cache each other's stuff. The "dropbox = rsync + ftp" meme is pretty beaten up, but maybe it applies here. :-)
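Next time I might also try base_dir, which as I understand the ccache docs rewrites absolute paths below it to relative ones before hashing, exactly to let builds in different directories share hits:

    # As I understand the ccache docs: paths under base_dir are rewritten
    # to relative form before hashing, so agents with different work dirs
    # can share cache entries. (Example paths, obviously.)
    export CCACHE_BASEDIR=/home/builds    # parent of all the agents' work dirs
    export CCACHE_DIR=/mnt/shared/ccache  # shared cache location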



