
This might be a stupid question, but why a separate programming language rather than aiming to verify/synthesize invariants in languages people use?


Not a stupid question at all. There are two reasons verification tends to happen in these specialized languages: the languages we usually use are often not expressive enough to write things like specifications, and a bit too expressive in the sense of letting people write program logic that is insanely difficult to verify (think untyped pointers into a dynamically allocated heap for example). So these verification related languages often are more expressive on the spec side and more restrictive in terms of what kind of code you can write.


Yeah I can see pointer weirdness being an issue.

As for being not expressive enough for specifications, isn't the code itself a form of specification? :)


Yes, but the quality of the spec varies. For example, many (most?) C programs contain undefined behavior, which means the spec is incomplete and unreliable. Dafny gives you better tools to avoid this. So in the end you get a higher-quality spec with Dafny.


> … verify/synthesize invariants in languages people use?

Good question. This is the holy grail. This is what everyone in PL research would love. This is where we want to get to.

Turns out a language as “simple” as C has sufficiently complicated semantics to limit rigorous analysis to the basics. One example is loop analysis: it’s very useful to know that a loop will eventually terminate. But if a loop is modifying some state, and, worse, if the iteration variable itself gets modified, you can kiss your analysis goodbye, because mechanically synthesizing strong pre- and post-conditions becomes insurmountable. It’s not an engineering challenge. It’s a math/pure CS theory challenge.
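For a concrete illustration (my sketch, not from the parent comment): here is a loop whose control variable is rewritten inside the body, so no mechanical analysis can establish termination; whether this particular loop halts for all inputs is the open Collatz conjecture.

```rust
// A loop whose "iteration variable" is rewritten inside the body.
// No general procedure can synthesize a termination proof for loops
// like this; whether it even terminates for every n is the open
// Collatz conjecture.
fn collatz_steps(mut n: u64) -> u64 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}
```

For example, `collatz_steps(6)` walks 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1 and returns 8, but no static analysis can tell you up front that it returns at all for arbitrary input.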


Dafny seems to have loops too, and the way it solves the problem you mentioned is by forcing the user to write these invariants themselves.

I assume if you were to develop such a system for C, C++, or Rust you'd similarly expect the user to do this.


Right. The problem is that those languages are relatively permissive in their type systems. Obviously Rust can capture more in its type system than C can. You would probably want a type like “decreasing unsigned integer” for Rust, plus some way to enforce monotonic decrease, which Rust doesn’t give you.

(Any experts on formal verification please correct any inaccuracies in what I say here.)

The upshot is that C, C++, and Rust permit too much behavior that isn’t capturable in the type system. Thus, the properties you’re interested in are semantic (as opposed to syntactic; type systems turn semantic properties into syntactic ones), so Rice’s theorem applies and there’s no computable way to do the analysis in general.
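To make the “decreasing unsigned integer” point concrete, here is a hypothetical sketch (names are mine, not a real library): the best Rust can do natively is enforce the invariant at runtime, because its type system can’t express “every write is strictly smaller than the last” statically.

```rust
// Hypothetical "decreasing unsigned integer": the invariant is
// enforced by a runtime check, not by the type system. A verifier
// would want this guarantee statically.
pub struct Decreasing(u64);

impl Decreasing {
    pub fn new(v: u64) -> Self {
        Decreasing(v)
    }

    /// Succeeds only if the new value is strictly smaller.
    pub fn decrease_to(&mut self, v: u64) -> Result<(), &'static str> {
        if v < self.0 {
            self.0 = v;
            Ok(())
        } else {
            Err("not strictly decreasing")
        }
    }

    pub fn get(&self) -> u64 {
        self.0
    }
}
```

A verification-oriented language can take such a variable as a termination measure and discharge the proof at compile time; here the violation only surfaces when `decrease_to` runs.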


You are answering a question you want to answer, not the one I asked! :)

Yes, dependent types can encode nice constraints, but so can asserts and assumes.

I am not seeing the fundamental difference in yeeting these constraints to a solver. Dafny seems to do the same thing with Z3.


Most existing mainstream languages aren’t expressive enough to encode these invariants. For languages outside of the mainstream, Lean 4 is a language supporting verification, and it’s also a full programming language, so you can write your proofs/theorems in the same language that you program in.
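For a taste of what that looks like, here is a minimal Lean 4 sketch (my own example): an ordinary executable function and a theorem about it, side by side in the same file.

```lean
-- An ordinary executable function…
def double (n : Nat) : Nat := n + n

-- …and a theorem about it, written in the same language.
-- `rfl` suffices because the equation holds by definition.
theorem double_eq (n : Nat) : double n = n + n := rfl

#eval double 21  -- evaluates to 42
```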


What's an invariant you cannot encode in a general-purpose programming language?

I'd have assumed, by virtue of being Turing complete, you could express any invariant in almost any language?


In most languages you can express any invariant, sure, but you can't prove that the invariant is upheld unless you run the program.

For example a NonNegativeInteger type in most languages would just have a constructor that raises an exception if provided with a negative number. But in languages with proofs, the compiler can prevent you from constructing values of this type at all unless you have a corresponding proof that the value can't be negative (for example, the value is a result of squaring a real number).
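The runtime-checked version typical of mainstream languages looks something like this sketch (the names are illustrative): the invariant lives in a constructor check, not in the type system.

```rust
// Runtime-checked invariant, as in most mainstream languages: the
// constructor rejects bad values at runtime rather than the compiler
// rejecting them at compile time.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct NonNegative(f64);

impl NonNegative {
    /// Fails at runtime if the invariant doesn't hold.
    pub fn new(x: f64) -> Option<NonNegative> {
        if x >= 0.0 { Some(NonNegative(x)) } else { None }
    }

    /// A square is non-negative (ignoring NaN, which real numbers
    /// don't have), so no check is needed here. In a proof-capable
    /// language the compiler would verify this fact for us.
    pub fn square(x: f64) -> NonNegative {
        NonNegative(x * x)
    }

    pub fn value(self) -> f64 {
        self.0
    }
}
```

The difference in a proof language is that the equivalent of `square` carries a machine-checked argument for why the check can be omitted, instead of the programmer's informal reasoning.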


That's what we use when we can limit our loop count and recursion depth somehow. Prove it for small data, and infer from it for big data.

I use C and C++ model checkers like CBMC and its variants (e.g. ESBMC) successfully, but you need to adjust your tests and loops a bit, e.g. by guarding code with #ifdef __VERIFIER__ or #ifdef __CPROVER__: https://diffblue.github.io/cbmc/cprover-manual/index.html
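The “prove it for small data” idea can be sketched without any tool support (my illustration, names invented): exhaustively check a property for all inputs up to a bound, much as a bounded model checker unrolls loops to a fixed depth.

```rust
// Exhaustively checking a property up to a bound, in the spirit of
// bounded model checking: the checker explores every input below the
// bound rather than proving the property for all inputs.
fn abs_diff(a: u8, b: u8) -> u8 {
    if a > b { a - b } else { b - a }
}

/// Checks that `abs_diff` is symmetric for every pair of inputs
/// up to `bound`.
fn check_symmetric_up_to(bound: u8) -> bool {
    for a in 0..=bound {
        for b in 0..=bound {
            if abs_diff(a, b) != abs_diff(b, a) {
                return false;
            }
        }
    }
    true
}
```

A tool like CBMC automates this exploration symbolically, but the trust argument is the same: the property is established only up to the chosen bound.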


There seems to be a bunch of papers and tools that extend CBMC for unbounded loops with k-induction!


The semantics of Dafny is carefully designed to make verification efficient.

Dafny can compile to and interface with a few languages, including C#.


What does it mean for verification to be efficient?

Are there benchmarks showing Dafny is faster than less efficient alternatives?


Dafny and similar languages use SMT solvers; their semantics need to be designed so that you're giving the solver enough information for your proof to go through in reasonable time. Otherwise you'll be waiting a very long time, or the query is effectively undecidable.

I'm not sure about benchmarks comparing languages, but Dafny goes through a lot of tweaking to make the process faster.


Dafny has been around for a while and people do in fact use it. People also apply contract languages to C and all manner of other things, so really the question boils down to "Why aren't you doing what I expect of you?"



