I can only ask a question, since I've got no know-how about this.

How would such a proof checker know the intent of the code, i.e. that it contains no backdoor or insecurity? How do you define security in a logical sense?




> How would such a proof checker know the intent of the code, i.e. that it contains no backdoor or insecurity? How do you define security in a logical sense?

As you surely know, there is no single notion of "secure/insecure", but rather a set of properties considered in IT security, which may not even all be fulfillable at the same time. The exact terms vary from book to book, but a popular classification from a German standard reference on IT security is:

General protection goals:

- Integrity

- Confidentiality

- Availability

Further goals are:

- Authenticity

- Accountability

- Non-repudiation

- Sometimes Anonymity

So you first begin by formulating your protection goals. Then you can formulate them in a logical sense. Again, there can exist quite different axiomatic formalizations of a single general protection goal.

But since you want to see examples:

A common model in literature for integrity is the Biba model: https://en.wikipedia.org/wiki/Biba_Model

Another common model for integrity is the Clark-Wilson model: https://en.wikipedia.org/wiki/Clark%E2%80%93Wilson_model

The mathematical dual of the Biba Model is the Bell–LaPadula Model for ensuring confidentiality: https://en.wikipedia.org/wiki/Bell%E2%80%93LaPadula_model
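
To make that duality concrete, here's a rough Python sketch (my own toy, with plain integer levels standing in for the lattice) of the Biba rules next to their Bell–LaPadula duals:

  # Minimal sketch of Biba and Bell-LaPadula as access checks.
  # Levels are plain integers: higher = more integrity (Biba)
  # or more confidentiality (Bell-LaPadula).

  def biba_allows(subject_level: int, object_level: int, op: str) -> bool:
      """Biba: no read down, no write up (protects integrity)."""
      if op == "read":
          return subject_level <= object_level  # only read equal/higher integrity
      if op == "write":
          return subject_level >= object_level  # only write equal/lower integrity
      raise ValueError(op)

  def blp_allows(subject_level: int, object_level: int, op: str) -> bool:
      """Bell-LaPadula: no read up, no write down (protects confidentiality)."""
      if op == "read":
          return subject_level >= object_level  # simple security property
      if op == "write":
          return subject_level <= object_level  # *-property
      raise ValueError(op)

  # The duality is visible directly: each Biba rule is the
  # Bell-LaPadula rule with the comparison flipped.
  assert biba_allows(1, 2, "read") and not blp_allows(1, 2, "read")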

A model that ensures that the access rules are free of conflicts of interest (this of course involves both privacy and integrity aspects) is the Chinese Wall model: https://www.cs.purdue.edu/homes/ninghui/readings/AccessContr...
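
And a rough sketch of the Brewer–Nash (Chinese Wall) read rule, with made-up dataset and conflict-of-interest class names:

  # Each object belongs to a company dataset, and datasets are grouped
  # into conflict-of-interest (COI) classes. A subject may access a new
  # dataset only if it has not already touched a competing dataset in
  # the same COI class.

  from collections import defaultdict

  COI_CLASS = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}

  history: dict[str, set[str]] = defaultdict(set)  # subject -> datasets seen

  def chinese_wall_allows(subject: str, dataset: str) -> bool:
      coi = COI_CLASS[dataset]
      for seen in history[subject]:
          if COI_CLASS[seen] == coi and seen != dataset:
              return False  # would cross the wall inside one COI class
      return True

  def access(subject: str, dataset: str) -> bool:
      if chinese_wall_allows(subject, dataset):
          history[subject].add(dataset)
          return True
      return False

  assert access("alice", "BankA")
  assert access("alice", "OilCo")      # different COI class: fine
  assert not access("alice", "BankB")  # competes with BankA: denied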


Thanks! This was exactly what I was looking for!


There's a lot of work in CompSci on this that predates all the blockchain-related stuff. They usually call it "security-typed languages" or "information flow security." Here's a paper with a lot of prior work in that plus an example tool and language:

https://www.cs.cornell.edu/andru/papers/jsac/sm-jsac03.pdf

https://www.cs.cornell.edu/Projects/fabric/
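
To give a flavor of what those tools enforce, here's a toy dynamic version in Python. Security-typed languages like Jif do this statically at compile time; the two-point LOW/HIGH lattice and the publish() sink are my own simplification:

  from dataclasses import dataclass

  LOW, HIGH = 0, 1

  @dataclass
  class Labeled:
      value: int
      label: int  # LOW or HIGH

  def add(a: Labeled, b: Labeled) -> Labeled:
      # The result is as secret as the most secret input (label join).
      return Labeled(a.value + b.value, max(a.label, b.label))

  def publish(x: Labeled) -> None:
      # A public channel may only carry LOW data.
      if x.label != LOW:
          raise PermissionError("high data would leak to a public sink")
      print(x.value)

  salary = Labeled(90_000, HIGH)
  bonus = Labeled(5_000, LOW)

  publish(bonus)                 # fine
  # publish(add(salary, bonus))  # raises: HIGH taints the sum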

The concept was taken down to the binary level with Necula's Proof-Carrying Code. Work like Kumar's and Davis's eliminated most of the TCB of the provers, too.

https://www.utdallas.edu/~hamlen/Papers/necula97proofcarryin...

http://www.cse.chalmers.se/~myreen/runtimes.html
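
The PCC architecture in one grossly simplified toy: the untrusted producer ships evidence alongside the code, and the consumer's trusted base is only a small checker. Real PCC certificates are logic derivations about machine code; the bounds "certificate" here is just a stand-in:

  # The untrusted producer ships a "program" (a list of array accesses)
  # together with a certificate claiming every index lies within bounds.
  # The consumer trusts only this small checker, not the producer or
  # whatever prover generated the certificate.

  def check_certificate(program: list[int], length: int,
                        certificate: list[tuple[int, int]]) -> bool:
      # certificate[i] = (lo, hi): claimed bounds for program[i]
      if len(certificate) != len(program):
          return False
      for idx, (lo, hi) in zip(program, certificate):
          if not (lo <= idx <= hi and 0 <= lo and hi < length):
              return False
      return True

  def run(program: list[int], data: list[int],
          certificate: list[tuple[int, int]]) -> list[int]:
      if not check_certificate(program, len(data), certificate):
          raise ValueError("certificate rejected; refusing to run")
      return [data[i] for i in program]  # every access checked in-bounds

  data = [10, 20, 30]
  print(run([2, 0], data, [(2, 2), (0, 0)]))  # [30, 10]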

In parallel, there's always work on capability-based security applied to programs or OSes. Here's an old and a new one using that model.

http://erights.org

https://www.ponylang.org
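
The core object-capability move is attenuation: wrap a powerful reference and hand out only the subset of authority you intend. A toy Python sketch of the pattern (Python can't truly enforce it the way E or Pony can, since reflection can still reach the wrapped object):

  class Counter:
      def __init__(self) -> None:
          self._n = 0
      def increment(self) -> None:
          self._n += 1
      def read(self) -> int:
          return self._n

  class ReadOnlyCounter:
      """Attenuated capability: exposes read, withholds increment."""
      def __init__(self, counter: Counter) -> None:
          self._read = counter.read  # capture only the method we grant
      def read(self) -> int:
          return self._read()

  full = Counter()
  full.increment()
  guest = ReadOnlyCounter(full)
  print(guest.read())   # 1: guest can observe...
  # guest.increment()   # ...but has no method to mutate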


The main assumption is that the specification (the proposition you want to prove) is (much) simpler than the proof. This means that the user/customer can read the specification, and understand what it means, even if the proof is not comprehensible to them. Of course, the proof must then be machine-checkable.

This makes the clarity of specifications into a market-driven problem. What do you do if you have a formally proven contract, but nobody can understand what the spec is? Well, you probably won't be able to sell it, because nobody will trust you. Instead, you try to simplify the specification, so you have something that people will trust. This is very similar to the established practice of mathematics, where getting a proof accepted is really a social process, in which you try to convince your fellow mathematicians that the proof is correct.
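
To illustrate that asymmetry, a Lean-style sketch with hypothetical sort, Sorted, and Perm definitions: the specification is one readable line, while the proof hidden behind the sorry might run to hundreds of lines that only the machine needs to read.

  -- Hypothetical names; the point is the shape, not the library.
  theorem sort_spec (l : List Nat) :
      Sorted (sort l) ∧ Perm l (sort l) := by
    sorry  -- the (possibly very long) machine-checked proof goes here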



