
I see this line of reasoning about formal methods a lot: software is big and complicated and hard to get right... therefore formal methods.

On the one hand, I _want_ this to be true both for selfish and practical reasons: selfishly because I'm very very good at learning things that require an academic learning approach (read a book, work some exercises, practice) and if something I'm good at is important, that means more money for me. Practically because they're right, software _is_ big and complicated and hard to get right and as a practitioner, it's really frustrating when it does fail and I'm scrambling to figure out why.

On the other hand, though, nobody ever seems to make a compelling case for how formal methods proposes to solve that problem. This author actually does better than most by pointing out how most modern "design" is a waste of time but doesn't really go into why TLA, say, is better than (demonstrably mostly useless) UML. There's sort of an implied suggestion that if you dedicate a few months (or years?) to learning TLA, you'll reach enlightenment and understand how it's helpful in a way that's impossible to articulate to the unenlightened. And that's not impossible to imagine! Calculus and Bayesian statistics are kind of like that; you need to really understand them before you can really see why they're useful.

I always find myself applying what I call "project manager" logic - I need to make a snap decision right now about whether I should invest time in this "formal methods" stuff, so I have to be observational: if it was really that helpful, more people would be applying it and the benefits would speak for themselves. They've been around a long, long time, though, and never caught on - it's hard not to conclude that there's probably a reason.




IMO UML is useless because whatever diagram gets produced means different things to different people, and it's very complex but not checkable, so people can make UML diagrams that are self-contradictory or nonsense.

I find myself using a "method" of some kind when faced with a problem that's hard to think about. A communications protocol, for example - it's nice to have a state machine to describe it. TLA obviously fits that niche even better. I've been lucky enough not to have too many problems recently that felt like they justified that effort, but it's of incredible value when one does. Domain-specific languages - so much better to use a parser framework than to hand-code if you want to avoid all sorts of undesirable outcomes.
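A rough Python sketch of the state-machine style I mean - a toy request/ack protocol with made-up names, not any particular real one:

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        WAITING_FOR_ACK = auto()
        CLOSED = auto()

    # Explicit transition table: only the listed (state, event) pairs are
    # legal, so an illegal transition fails loudly instead of silently
    # corrupting state.
    TRANSITIONS = {
        (State.IDLE, "send"): State.WAITING_FOR_ACK,
        (State.WAITING_FOR_ACK, "ack"): State.IDLE,
        (State.IDLE, "close"): State.CLOSED,
    }

    def step(state: State, event: str) -> State:
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"illegal event {event!r} in state {state.name}")

    # e.g. step(State.IDLE, "send") -> State.WAITING_FOR_ACK

The point is that the legal behaviour sits in one enumerable place; a model checker can then exhaustively explore a spec written in the same style, which hand-rolled if/else code never gets.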

Currently most of my rework comes from changes to the requirements and our "customers" not really knowing what they want till we give them something and they say "not that."

This is partly because the people asking for things don't fully think through the implications of what they're asking for. It's mostly about not having enough knowledge in one place to make good decisions.


>if it was really that helpful, more people would be applying it and the benefits would speak for themselves. They've been around a long, long time, though, and never caught on - it's hard not to conclude that there's probably a reason.

My impression is that there are actually not that many business domains where a large investment of time and money to push domain-logic correctness from 98% to 99.99% is actually called for.

Formal methods are a large investment, too. No two ways about it.

Also, while they haven't really caught on in general, some of their ideas have made it into modern type systems.
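(Exhaustiveness checking over sum types is the obvious example - a toy Python/mypy sketch, all names invented:)

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Celsius:
        degrees: float

    @dataclass
    class Fahrenheit:
        degrees: float

    Temperature = Union[Celsius, Fahrenheit]

    def to_celsius(t: Temperature) -> Celsius:
        # A checker like mypy verifies every member of the union is handled
        # and that a Celsius value is returned on every path -- a small,
        # machine-checked obligation in the spirit of formal methods.
        if isinstance(t, Fahrenheit):
            return Celsius((t.degrees - 32) * 5 / 9)
        return t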


> Formal methods are a large investment, too. No two ways about it.

My issue with the entire discussion, and with a lot of the community, is that formal methods are not all the same.

Some are expensive, yes. If you insist on doing them all, you'll never finish anything. But that's not a reason to dump the entire set.


My issue with the formal methods community is that it has always tended to hand-wave away the cost/benefit trade-offs. It's not that they didn't see those trade-offs (they constantly reiterated that formal methods aren't suitable for every problem) - more that they saw deciding which problem, when, and why as someone else's problem to solve.

I'm sure if somebody wrote a blog post that provided a sense-making framework for WHEN formal methods are appropriate versus when they're a waste of your time, it would be popular.

I see hints of this all over the thread - e.g. some people seem to believe it works well for state machines (though I suspect they were nontrivial ones) - but as a topic it rarely seems to be addressed head on. OP certainly doesn't.

I suspect this trope will come around again next year with "formal methods! useful! sometimes!". The community has been like this for decades.


I'm only familiar with formal verification in the context of a hardware class, which is like programming except the cost:benefit is wildly different (you can't easily fix a physical chip after it has been fabricated) and the types of designs are very different.

But, the impression I got was that the rigors of the formal verifier would sort of impose a limit on the complexity of the design just based on… completing in a reasonable timeframe and in a reasonable amount of memory space. Maybe the true victory of demanding formal verification would be fixing:

> software is big and complicated and hard to get right

By making big complex programs a pain to work with, haha.


If we’re going to boil this frog, we need to steal wisdom from TLA, not teach it. Type systems have borrowed a lot from Hindley-Milner, and are themselves a formal, partial proof.

I think I’d like to see a descendant of Property-Based Testing that uses SAT or TLA techniques to rapidly condense the input space in a repeatable fashion. We should be able to deduce, through parsing and code coverage, that passing 12 to a function can never follow different branches than 11, but that -1 or 2^17 < n < 2^32 might.
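For comparison, plain property-based testing with Hypothesis just samples the space at random rather than condensing it - the function and bounds below are made up purely to show the kind of branch partition I mean (11 vs 12 can never differ, but -1 or a huge n can):

    # Ordinary Hypothesis property test; Hypothesis samples inputs randomly,
    # it does not partition them by branch coverage the way described above.
    from hypothesis import given, strategies as st

    def absolute_value(n: int) -> int:
        if n < 0:           # negative inputs take a different branch...
            return -n
        if n > 2 ** 17:     # ...as do "large" inputs...
            return n
        return n            # ...but 11 and 12 can never behave differently.

    @given(st.integers(min_value=-(2 ** 32), max_value=2 ** 32))
    def test_never_negative(n: int) -> None:
        assert absolute_value(n) >= 0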


There’s decades of research in this vein, fwiw, usually referred to as symbolic execution and its descendants like concolic execution.


> if it was really that helpful, more people would be applying it and the benefits would speak for themselves

Well... that's not good logic in any area, and in software development it's twice as bad.

Most software projects still fail. And that's not "market failure", it's just "didn't manage to build it".


UML is far more useful than TLA+, at least from the point of view of agile delivery projects.

For starters, with UML at least, everyone, regardless of their role, can have some kind of understanding of what those diagrams are about, and UML covers most scenarios of the software delivery product lifecycle.

Whereas TLA+ is mostly abstract code that, after proving a specific algorithm right, has to be manually ported to the actual software development stack being used, with the caveats a manual translation implies - and it only applies to algorithm logic, nothing else.

UML has a place at my work, TLA+ will never get one.


For Calculus you can easily explain to the unenlightened that many physical and engineering laws are expressed in terms of the speed and acceleration of measurable quantities, and Calculus is the mathematical framework for working with those.
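Spelled out, that pitch is just: speed and acceleration are derivatives of position, and the laws are written directly in those terms, e.g.

    v(t) = \frac{dx}{dt}, \qquad a(t) = \frac{dv}{dt} = \frac{d^2x}{dt^2}, \qquad F = m\,a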



