oggy's comments | Hacker News

A funny anecdote from a concert a few years ago: Allen was playing a solo, holding a note and blowing as hard as he could on his little soprano sax. Next thing you know, his teeth fly out, and there's a general commotion as the rest of the band goes searching for the denture on the stage.

Amazing to be alive at that age, to be touring and rocking it, that's another level.


Maybe not range per se, but there are a couple of Europe-specific things that make EVs less attractive. First, many people live in apartment buildings and park their cars on the street, with no charging facilities. So they'd have to make that supercharger trip pretty often. Second, on any given Saturday in July and August there will be millions of people driving for vacation (1000+ km not being uncommon), mostly to the Mediterranean or back to their country of origin (Eastern Europe, Balkans, Turkey). In this period I've already ended up queuing for 10-15 minutes for gas - I can imagine it would be worse without chargers. Also, people who take their car to the poorer European countries often do it because of poor public transport infrastructure there, so the likely poorer EV infrastructure there would play somewhat of a role as well (though likely minor).


TLA+ has also had an SMT-based backend, Apalache [1], for a few years now. In general, you encode your system model (which would be the Rust functions for Verus, the TLA model for Apalache) and your desired properties into an SMT formula, and you let the solver have a go at it. The deal is that the SMT language is quite expressive, which makes such encodings... not easy, but not impossible. And after you're done with it, you can leverage all the existing solvers that people have built.

While there is a series of "standard" techniques for encoding particular programming language features into SMT (e.g., handling higher-order functions, which SMT solvers don't handle natively), the details of how you encode the model/properties are extremely specific to each formalism, and you need to be very careful to ensure that the encoding is sound. You'd need to go and read the relevant papers to see how this is done.

[1]: https://apalache.informal.systems
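
To give a rough idea of what such an encoding can look like, here's a hand-rolled sketch using the z3 Python bindings (my own toy example, not something Apalache or Verus actually emit): a counter that either increments or resets, unrolled for a bounded number of steps, with the property "the counter never exceeds 10" encoded as one SMT query.

    # Toy model: a counter that starts at 0 and, at each step, either
    # increments or resets to 0. Property: it never exceeds 10.
    from z3 import Int, And, Or, Solver, sat

    k = 10                                     # number of steps to unroll
    xs = [Int(f"x{i}") for i in range(k + 1)]  # one variable per step

    init = xs[0] == 0
    trans = [Or(xs[i + 1] == xs[i] + 1, xs[i + 1] == 0) for i in range(k)]
    bad = Or([x > 10 for x in xs])             # negation of the invariant

    s = Solver()
    s.add(init, And(trans), bad)
    if s.check() == sat:
        print("invariant violated; counterexample:", s.model())
    else:
        print(f"invariant holds on all runs of up to {k} steps")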


"Verifying" and "proving" are synonymous in this case. You prove or verify that the system satisfies some specification, i.e., properties. Your normally write the properties yourself, but sometimes the properties are hardcoded (e.g., your code doesn't panic).

You don't need a particularly strong formal background in CS or maths to verify code, but it's a challenging task nevertheless.

1. The first challenge, where people often give up, is that you will often need to reason about your code in a different language than the code itself (not necessarily true for Verus, which is being discussed here). In that sense it helps a lot if you already know multiple programming languages, ideally using different paradigms.

2. The second challenge is that you will need to be actually precise in describing what you want your program to do (see the sketch after this list for a tiny example). This is a lot harder than it sounds; there's no hand-waving, no "works in practice", but even beyond that, it's actually really difficult to express what you want from the program succinctly, and often not even possible or feasible. On the other hand, this is possibly the most practical skill you can currently take away from the verification tools; it absolutely can change the way you work, how you collect requirements, and how you go about writing code.

3. The third challenge is then actually performing the verification/proofs. This is currently painfully hard; most literature I've seen comes up with a ratio of 5-20x between the amount of code you're verifying (in terms of lines of code, say) and the number of "proof lines" you need to write in some form. This may make sense for extremely critical code where a bug can cost tons of money, e.g., aerospace, or huge companies with tons of users - AWS is doing proofs at a fairly large scale, and Apple is now doing the same at a smaller scale too. I would generally recommend NOT writing proofs if you can, but using tools that can do some kind of limited or heuristic verification. Which tools exactly I would recommend for a "working programmer" depends on what you're trying to do. For example, if you are writing concurrent or distributed code, I would recommend something like TLA and the associated TLC tool. For lower-level Rust code (data structures and similar), I'd recommend the Kani verifier over something like Verus. For many other languages, the choices are limited or non-existent.
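
To illustrate point 2 on a small scale, here's a precise, executable specification of a sorting routine. This uses the hypothesis property-based testing library rather than any of the verification tools above, so it checks the spec heuristically instead of proving it; the function and test names are made up for illustration.

    from collections import Counter
    from hypothesis import given, strategies as st

    def my_sort(xs):
        return sorted(xs)  # stand-in for the implementation under test

    @given(st.lists(st.integers()))
    def test_sort_spec(xs):
        out = my_sort(xs)
        # The specification, stated precisely:
        # 1. the output is ordered;
        assert all(a <= b for a, b in zip(out, out[1:]))
        # 2. the output has exactly the same elements as the input.
        assert Counter(out) == Counter(xs)

Even this tiny example forces you to pin down both the ordering and the permutation requirement; forgetting the second one is exactly the kind of hand-waving that verification tools won't let you get away with.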

Zero-knowledge proofs roughly allow you to convince someone that something is true, without leaving them any wiser WHY it's true. Generally this is not what's interesting for software verification, but it's nevertheless extremely cool (because it's super counterintuitive). Most of the excitement indeed comes from blockchains, where these things are used, but it's debatable whether they're really practical. You can use them to transfer crypto around already today, without revealing the identities of the sender or recipient for example. However they are still computationally quite expensive and only work because there's currently not so much volume in crypto.


Great to see this. I hope it takes off - Bazel is useful, but I really like the principled approach behind this one (see the Build Systems a la Carte paper), and Neil is scarily good from my experience of working with him, so I'd expect that they've come up with something awesome.

One thing I find annoying with all of these general, language-agnostic build systems though is that they break the "citizenship" in the corresponding language. So while you can usually relatively easily build a Rust project that uses crates.io dependencies, or a Python project with PyPi dependencies, it seems hard to make a library built using Bazel/Buck available to non-Bazel/Buck users (i.e., build something available on crates.io or PyPi). Does anyone know of any tools or approaches that can help with that?


Regarding Bazel, rules_python has a py_wheel rule that helps you create wheels that you can upload to PyPI (https://github.com/bazelbuild/rules_python/blob/52e14b78307a...).
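
For illustration, a minimal BUILD file along those lines might look like this (target names and the version are made up; double-check the load paths against the rules_python docs):

    load("@rules_python//python:defs.bzl", "py_library")
    load("@rules_python//python:packaging.bzl", "py_wheel")

    py_library(
        name = "mylib",
        srcs = ["mylib.py"],
    )

    py_wheel(
        name = "mylib_wheel",
        distribution = "mylib",  # the distribution name on PyPI
        version = "0.1.0",
        deps = [":mylib"],
    )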

If you want to see a Bazel-to-PyPI approach taken a bit to the extreme, you can have a look at TensorFlow on GitHub to see how they do it. They don't use the above-mentioned rule because, I think, their build step is quite complicated (C/C++ stuff, CUDA/ROCm support, Python bindings, and multi-OS support all in one before you can publish to PyPI).


I use py_wheel to build packages to be consumed by data scientists in my company. It works well and is reasonably straightforward. Although the packages are pure Python so I haven’t had to deal with native builds.


I have a lot of respect for Neil, but I've been burned by the incompleteness and lack of surrounding ecosystem for his original build system Shake (https://shakebuild.com/). This was in a team where everyone knows Haskell.

I'm cautiously optimistic about this latest work. I'm glad at least this isn't some unsupported personal project but something official from Meta.


I think of Shake as a library for implementing build systems, and was hoping that libraries would emerge describing how to implement rules for languages like C++, and how they should compose, so you can compile C++/Haskell/Python all together happily. A few such libraries appeared, but the overall design never emerged. Sorry you got burned :(

Buck2 is at a higher level than Shake - the rules/providers concepts pretty much force you into a pattern of composable rules. The fact that Meta has lots of languages, and that we were able to release those rules, hopefully means it's starting from the point of view of a working ecosystem. Writing those rules took a phenomenal amount of effort from a huge range of experts, so perhaps it was naive to think that Shake could ever get there on open-source volunteer effort alone.


The “citizenship” point is really interesting. I’ve found these build systems to be really useful for solving problems in multi-language repos. They make it super easy to create all the build artifacts I want. However, in many ways, they make the source more difficult to consume for people downstream.


Bazel now has a module system that you can use.

https://bazel.build/external/module

This means your packages are just Git repos + BUILD files.
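
For instance, a project's MODULE.bazel can be as small as this (the module name and versions are illustrative):

    # MODULE.bazel
    module(
        name = "my_library",
        version = "0.1.0",
    )

    bazel_dep(name = "rules_python", version = "0.31.0")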


These kinds of tools are designed to work in monorepos, so you don't really rely on package management like you do with separate repos. This works really well for sharing code inside companies/entities. It doesn't work as well for sharing code between entities.


If I'm understanding correctly, for the Rust-specific case, this generates your BUCK files from your Cargo.toml:

https://github.com/facebookincubator/reindeer


> One thing I find annoying with all of these general, language-agnostic build systems though is that they break the "citizenship" in the corresponding language

I mean, this is kind of the whole point. A language agnostic build system needs a way to express dependencies and relationships in a way that is agnostic to, and abstracts over, the underlying programming language and its associated ecosystem conventions.


That is only true if the output is an application.

If the output is libraries for some ecosystem (perhaps with bindings to something written in Rust or C), one needs to be able to build packages that others not invested in that build system can consume.


FWIW, I have a PhD in formal methods and spent a good chunk of that PhD proving stuff about distributed systems in Isabelle. I'm vaguely familiar with Coq. At work I've been largely using TLA+ the last few months to analyze designs of distributed protocols.

What I'll say is a simplification, but I'd describe TLA+ as a one-trick pony that does that one trick very well. Roughly, you have to structure the model of your algorithm as a transition system, described using a predicate on "these are the legal initial states", and another predicate over two states that says how you can move from one state to the next. To describe these predicates, you get a fairly simple but expressive language at your disposal (a TLA cheat sheet probably fits a single page). This format lends itself well to describing high-level ideas of distributed protocols or concurrent algorithms (roughly what you'd write as pseudocode for those protocols/algorithms).

You can then use that same predicate language, plus some additional operators to talk about time (e.g., you can say things like "never" or "eventually"), to specify what your system should do. Finally, you get an automated analysis tool (actually two tools these days, TLC and Apalache) that checks whether your model satisfies the specification. The beauty is that the checking is push-button - after you've finished writing the model and the spec, your work is largely complete. The analysis will have severe limitations: your model can't be too big, and it has to be finite (e.g., if you're writing a model of a distributed protocol, you have to limit the analysis to settings with a handful of nodes), but in practice even this limited analysis weeds out most of the bugs.
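
If it helps to see the shape of this in ordinary code, below is a toy, explicit-state rendering of the same idea in Python (the model and names are made up for illustration; TLC effectively does this kind of exhaustive exploration, just driven by a TLA+ spec rather than hand-written functions):

    # Toy mutual-exclusion model: two processes guarded by a single lock.
    # A state is (pc of process 0, pc of process 1, lock holder or None).

    def init_states():
        return [("idle", "idle", None)]

    def next_states(state):
        succs = []
        for p in (0, 1):
            pc, lock = list(state[:2]), state[2]
            if pc[p] == "idle" and lock is None:    # acquire the lock
                pc[p], lock = "critical", p
            elif pc[p] == "critical":               # release the lock
                pc[p], lock = "idle", None
            else:                                   # process p is blocked
                continue
            succs.append((pc[0], pc[1], lock))
        return succs

    def invariant(state):                           # mutual exclusion
        return not (state[0] == "critical" and state[1] == "critical")

    # Exhaustive exploration of all reachable states (roughly what TLC does).
    seen, todo = set(), init_states()
    while todo:
        state = todo.pop()
        if state in seen:
            continue
        seen.add(state)
        assert invariant(state), f"invariant violated in {state}"
        todo.extend(next_states(state))
    print(f"invariant holds in all {len(seen)} reachable states")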

Isabelle and Coq are theorem provers (there's also a bunch of other theorem provers). That means you have to define whatever you're modeling as a mathematical object, and then you go off and prove stuff about it. They're both extremely flexible - you can model a transition system just like TLA does, but you can also talk about any kind of mathematics (algebra, statistics, financial math, whatever). You can also encode programming language semantics, as well as program logics, which allow you both to model actual C or OCaml or whatever code, and to specify properties about programs (as mathematical theorems) and prove them in a more comfortable way using program logics. The analysis (which corresponds to proving a theorem) is generally not limited (e.g., you prove a distributed protocol correct for any number of nodes).

The snag is that in Isabelle or Coq the proof is done in a largely manual way (you have to sort of type in the argument by hand, often in excruciating detail). If you want to verify a program with N lines of code, you'll typically end up writing 5-20 times N lines of proof to convince the theorem prover of the correctness of the said program. But to do this, you'll generally have a large library of already proved theorems (6-7 years ago I counted around 200k theorems available for Isabelle), and you will generally have a "metaprogramming language" at your disposal to write your own analysis (i.e., proof) procedures.
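
To give a flavor of what "typing in the argument by hand" looks like, here are two tiny examples in Lean 4 (not Isabelle or Coq, but the experience is comparable): the first reuses a theorem from the library, the second spells out an induction for a fact as trivial as 0 + n = n.

    -- Reuse a theorem someone has already proved in the library.
    example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- Spell out an induction by hand, case by case.
    example (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ n ih => rw [Nat.add_succ, ih]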

For academics, especially programming languages and formal methods people, the work is often writing new program logics, new proof procedures, or developing whole new theories. Theorem provers lend themselves better to that kind of work, as well as to pushing boundaries, such as doing code-level analysis. But for engineering, TLA can be the 80/20 solution in many cases.


Great answer thanks. Yes this makes a lot of sense. My understanding was that TLA+ also supported theorem proving but it sounds like it is more used as a model checker in practice.


Thanks for your work, Hillel! I've been using TLA extensively in my job the last few months (I work at a blockchain company), and it's been a good run - we found a bunch of issues in several designs and even in implemented code (some fairly critical). My secret hope is to get some co-workers to start using TLA themselves (so I can go off to do code-level verification instead hehe); I've organized a couple of internal tutorials but no takers so far - hopefully this free learning resource will help advance that goal :)

On a related topic, does anyone know of a comparison to Alloy 6? I've been meaning to take a day or two to look at it (I tried Alloy out a while ago, back during my PhD, but have forgotten most of it); I'm curious to see how it stacks up.


I used Spin a few years back, so my memory is a bit hazy, but I remember Promela (Spin's modeling language) feeling extremely low-level in comparison. It felt a bit like a more limited C with non-deterministic choice stuck in there. TLA is a first-order logic language, and the tooling (while not great by modern language standards) felt more pleasant than Spin's.

It could be that you get faster model checking with Spin though, I'm not aware of any comparisons.


I bought a Remarkable 2 for note taking and annotating scientific papers. As far as its promise of "better paper" goes, I think it's only partly fulfilled. I haven't returned it, but I'm sadly also not using it regularly.

It's obviously better than paper since you don't end up with stacks of printed articles. The feel of reading and writing is also exactly like paper, which is great. You get some basic select/cut/paste functionality, which improves on paper. But.

I used it to take notes at work, create simple TODO lists, etc. The issue with this is that the resulting notes are way harder to navigate than a simple paper notebook, as the software doesn't support even basic bookmarks. With paper I can add bookmarks as I please, and a paper organizer notebook allows me to navigate my calendar easily, etc. I'm still amazed that the RM software offers absolutely no functionality for such navigation. This is a huge drawback for me.

When using it to annotate PDFs, the app offers you a few tools (pencil, pen, eraser, etc). Ideally you'd like to fit the article on the remaining portion of the screen, possibly cropping the margins if the article was created for a different page size and your eyesight is as bad as mine. But the support for this sucks, as the toolbox always covers a piece of the document, so you end up having to hide/unhide it constantly.

Next, while the pen feels quite good, almost as precise as a real pencil, I find the eraser quite hard to control. It erases an area somewhere around the end of the pen, but unlike a physical eraser, you don't see its boundaries. There's no support for stroke-based erasing. Finally, stating the obvious, the device is black-and-white. But I find different colors helpful when annotating papers, or creating sketches of systems etc.

So right now I can't really recommend the device, but I'm still holding onto mine hoping it will improve. I think the hardware can serve as a basis for better paper - but I think the software would need many of the features of http://www.styluslabs.com/ to get there


Shameless plug: the problems that you describe are almost exactly what we're trying to solve with DAML and Canton [1]. From the Canton whitepaper:

Building distributed applications that involve multiple organizations is hard with today’s technology. For each application, all organizations must agree on the data encoding, transport mechanisms, and interaction rules, and all must implement their part of the interaction correctly, including authentication and authorization. Once implemented, such an application often becomes a silo: there is no general convenient, secure and privacy-preserving way to integrate such applications and compose the business workflows that they automate.

The big problem is bootstrapping the network; as you say, setting up yet another format (DAML in this case) is a huge ask. But we're starting to see the beginnings of a network, as in the next couple of years the Australian [2] and northbound (i.e., mainland China) Hong Kong [3] securities markets will be accessible through DAML.

[1]: https://canton.io
[2]: https://www2.asx.com.au/markets/clearing-and-settlement-serv...
[3]: https://www.ledgerinsights.com/hkex-daml-smart-contract-stoc...


Some financial service providers (i.e. those who do NOT buy low, sell high) might try to move into the custodial market. Assessing the risk there is beyond my pay grade - I don't know how smart that'd be ... but we can observe it's not common. If that did happen, clients could use that provider's IT system to do trading, and voila: since all data is at one provider, some kinds of reconciliation become easier. In short, market consolidation.

There are tactical steps too that can partially overcome the independent IT systems - say, getting the custodian to send data in realtime, or to send it in realtime on the trader's behalf to the recon entity ... I'm not sure what custodial service providers think about that ... presumably they would if they could, but are probably hampered by their own IT systems like the rest of us.

The final approach is what appears immediately above: traders and custodians --- generally any set of business entities N who need to compose data --- publish to a virtual shared DB. I'm gonna read the white papers and follow-up afterwards.


I don't have a financial services background myself, so I can't really comment on the risk of consolidating custodianship and trading. But market consolidation will at some point run into legal and jurisdictional limits (data sovereignty), if not other ones first (do I really trust that giving you all this data doesn't give you an edge?). The idea behind the virtual shared DB is that you don't have to outsource the data, but you can still transact (with strong consistency) over data shared with many entities.

Like I said, I don't have a background in financial services myself, but there are people on my team who have vowed to never have to do reconciliation again. As you seem to wear the same scars, I'm very curious to hear your thoughts on the whitepaper based on your other good comments in the thread. Feel free to e-mail me (address in profile) or post here.


Replacing every banking system at once is not at all possible. Even just replacing CHESS all at once is pretty much impossible. Just how many years late is it, and how many more until it's actually used by all participants?


The idea is not to replace every banking system, but to create a network with a sufficient technological edge over existing solutions and sufficient business value.

CHESS is definitely a complex beast. The go-live date has moved from 2021 to 2023, AFAIK at the request of the participants. As to your question of when it will be directly (as in, through DAML) used by all participants - I guess at the point where it generates enough value that it's advantageous for them to use it.

