
No, it would not be nice. It would be terrible.

What you're advocating is the "smart contract" delusion.

You're aiming to replace the messy uncertainty of conventional human-mediated legal agreements with the clear, deterministic rigour of computer code. Have you seen how bug-free computer code is in the real world?

Smart contracts are a perennial fascination for technologists working in an area where everything can be determined cleanly and clearly, if only in principle. So the prospect of using computers to sort out all those annoying grey areas in human interaction is tempting: if you don’t understand law (which involves intent) but you do understand code (which does precisely what you tell it to – though probably not precisely what you meant), then you will try to work around law using code.
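To make that gap concrete, here's a toy sketch in Python (all names hypothetical, nothing to do with any real contract platform): a "contract" clause that splits an escrowed payment equally among three parties, and does exactly what it says rather than what anyone meant.

    # Hypothetical "contract" clause: split an escrowed payment
    # equally among three parties.
    def split_payment(total_cents: int) -> list[int]:
        share = total_cents // 3   # integer division rounds down
        return [share, share, share]

    payout = split_payment(100)
    print(payout)       # [33, 33, 33]
    print(sum(payout))  # 99 -- a cent has quietly vanished

The code is "correct" in that it does exactly what it says; whether the missing cent was intended is exactly the kind of question a court can answer and a program can't.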

(You know the uncertainty you're feeling about your life being run by legal documents you don't understand? That's how other people feel about their lives being run by your code.)

Legal issues just don’t work that way – the purpose of law is not to achieve philosophical truth, but to achieve workable results that society can live with. It regularly shocks technologists that the law is fuzzy, circumstantial and indeterminate, and that an arbitrary string of data can in fact be tied to the intent behind it at the time it was written. What a human meant, what they were thinking at the time, is a lot of the point.

For smart contracts to work as advertised, we would need to create a human-equivalent artificial intelligence to understand what people meant the contract to do.

A much-touted advantage of smart contracts is that the code is public, so anyone can inspect and verify it before engaging with it. The problem is that it's extremely difficult to tell everything a program might do without actually running it. (Shellshock lurked in bash for 25 years before anyone spotted it.)
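For a toy illustration of why inspection is so hard (hypothetical Python, not a real exploit), consider how innocent a booby-trapped function can look on casual reading:

    import hashlib

    def transfer(amount_cents: int, memo: str = "") -> int:
        # Looks like a routine integrity check on the memo field...
        digest = hashlib.sha256(memo.encode()).hexdigest()
        # ...but for roughly one memo in a million, the "fee" is everything.
        fee = amount_cents if digest.startswith("0000f") else 0
        return amount_cents - fee

Reading the source reveals the trigger only if you already know what to look for; proving that no such trigger exists anywhere in a large codebase is the hard part (formally, deciding nontrivial semantic properties of programs runs into Rice's theorem).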

Smart contracts work on the wrong level: they run on facts and not on human intent, but real-life contracts are a codification of human intent, which will always involve argument and ambiguity.

It's times like this I miss Groklaw.net.

I completely agree. But wouldn't it be nice if code, too, did what we meant instead of what we said?

Not everything can be solved with code, but it's fun to think about, and it leads down some interesting paths, even if they ultimately prove impossible or impractical.

What if we could encode ambiguity? Legal code could have some wiggle room. Of course, you'd want to be able to ask things like 'Is X legal under Y circumstances?' and receive an answer, and at that point you'd effectively need human-equivalent artificial intelligence.


Well said!