This project aims to develop automatic program verification methods that help security engineers to understand software that they have not written themselves. The engineer will be able to make sophisticated queries about the resource requirements and temporal behaviour of code, for example about memory safety, privileges, or information flow. Our methods will even support synthesis of behavioural properties for the engineer: rather than make a closed-world assumption, in which the complete program and physical computing device are known, our tools will discover logical descriptions of execution environments (preconditions, protocols, invariants, etc.) that pinpoint the assumptions necessary for code safety, or those that trigger violations. Such tools would aid engineers by, for example, advising where to concentrate effort when looking for critical security breaches, or where to place effort in hardening an application. Finally, because our analysis techniques are grounded in verification, they can both discover flaws and deliver guarantees of security properties.
Towards realising this vision, we have assembled a team whose experience ranges from program verification research on logics and algorithms to systems security research involving new operating system primitives and software structuring principles that achieve robust security goals.