Thursday, March 4, 2010

The Semantics of Asynchronous Exceptions

On March 4th, 2010, John Launchbury gave a Galois Developer Symposium lunch talk presenting a semantics for asynchronous exceptions in Haskell. In particular, the discussion concentrated on the interactions between Concurrent Haskell, laziness, and exceptions. Follow the talk online, or download the PDF.
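For readers new to the area, here is a minimal sketch of the mechanism under discussion (my illustration, not an example from the talk): one thread delivers an asynchronous exception to another with throwTo, and the target observes it through an ordinary catch.

    import Control.Concurrent
    import Control.Exception

    main :: IO ()
    main = do
      tid <- forkIO $
        (threadDelay 1000000 >> putStrLn "finished normally")
          `catch` \e ->
            putStrLn ("interrupted asynchronously: " ++ show (e :: SomeException))
      threadDelay 1000           -- give the thread a chance to start
      throwTo tid ThreadKilled   -- deliver an asynchronous exception
      threadDelay 100000         -- let the handler run before main exits

The subtlety the talk addresses is precisely when such an exception may be delivered, and how that delivery interacts with lazy evaluation and with the blocking primitives of Concurrent Haskell.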


Friday, February 19, 2010

Tech Talk: Modern Benchmarking in Haskell

Details:

  • Title: Modern Benchmarking in Haskell (slides)
  • Presenter: Don Stewart
  • Date: Tuesday, 23 February 2010
  • Time: 10:30am
  • Location: Galois Inc., 421 SW 6th Ave., Suite 300, Portland, OR, USA (3rd floor of the Commonwealth building)
Abstract: Thanks to work by Bryan O'Sullivan, there has been a renaissance in performance benchmarking tools for Haskell, built upon Criterion. "Compared to most other benchmarking frameworks (for any programming language, not just Haskell), criterion focuses on being easy to use, informative, and robust." Criterion uses statistically robust mechanisms for sampling and computing sound microbenchmark results, and it is more stable in the presence of system noise than naive timings. Criterion has in turn spawned some extensions:
  • Progression: compare different Criterion graphs
  • NoSlow: a new array benchmark suite based on Criterion
In this talk I will present these tools, how to use them, and how to make your performance benchmarks in Haskell, or in languages Haskell can talk to, more reliable.

Bio: Don is an Australian open source hacker and engineer at Galois, Inc. in Portland, Oregon, where he works on creating trustworthiness and assurance in critical systems, with an emphasis on language design and compiler techniques. Don is co-author of the O'Reilly book Real World Haskell and of the XMonad window manager.
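For a flavor of the library, here is a minimal Criterion benchmark (a hypothetical example, not one from the talk): defaultMain runs a list of named benchmarks, and whnf applies a function to an argument and evaluates the result to weak head normal form on each sample.

    import Criterion.Main

    -- A deliberately naive function to benchmark.
    fib :: Int -> Int
    fib n | n < 2     = n
          | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = defaultMain
      [ bench "fib 10" $ whnf fib 10
      , bench "fib 20" $ whnf fib 20
      ]

Compiled and run, this reports timing estimates with confidence intervals, which is what makes the results robust to noise from other processes on the machine.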
Galois has been holding weekly technical seminars for several years on topics ranging from functional programming, formal methods, and compiler and language design to cryptography and operating system construction, with talks by many figures from the programming language and formal methods communities. The talks are open and free. An RSVP is not required, but feel free to contact the organizer with questions and comments.


Thursday, February 11, 2010

Tech Talk: Introduction to Aarne Ranta's GF, the Grammatical Framework

The talk will be presented by Iavor Diatchki on Tuesday, February 16th, at 10:30am. (slides)

Abstract: The Grammatical Framework (created by Aarne Ranta) is a programming language for multilingual grammar applications. It may be seen in a number of different ways:

  • as a special-purpose language for grammars, like YACC or Happy, but not restricted to programming languages;
  • as a functional language, like Haskell or SML, but specialized to grammar writing;
  • as a logical framework, like Agda or Coq, but equipped with concrete syntax in addition to logic;
  • as a natural language processing framework, like LKB, or Regulus, but based on functional programming and type theory.
This talk is an introduction to GF's basic concepts by example. We will look at how to define the meaning and syntax of a language, perform simple translations, define semantic properties, and use GF together with another language such as Haskell. (A small sketch of the Haskell side appears after the details below.)

Bio: Iavor Diatchki is an R&D Engineer at Galois, Inc. with a Ph.D. from the Oregon Graduate Institute.

Details:
  • Date: Tuesday, February 16th, 2010
  • Time: 10:30am
  • Location: Galois Inc., 421 SW 6th Ave. Suite 300 (3rd floor of the Commonwealth building)
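As a hedged sketch of the Haskell side: GF grammars compile to PGF files, which the PGF runtime library can load from a Haskell program. The grammar Food.pgf and the concrete syntaxes FoodEng and FoodIta below are hypothetical names (in the style of Ranta's tutorials), and the calls are as in GF's PGF library.

    import PGF

    -- Hypothetical grammar: Food.pgf, with English and Italian
    -- concrete syntaxes FoodEng and FoodIta.
    main :: IO ()
    main = do
      pgf <- readPGF "Food.pgf"
      let Just eng = readLanguage "FoodEng"
          Just ita = readLanguage "FoodIta"
      -- Parse an English sentence to abstract syntax trees, then
      -- linearize each tree back out in Italian.
      let trees = parse pgf eng (startCat pgf) "this cheese is delicious"
      mapM_ (putStrLn . linearize pgf ita) trees

Translation here is just parsing in one concrete syntax and linearizing in another; the abstract syntax tree is the shared, language-neutral representation of the meaning.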

Galois has been holding weekly technical seminars for several years on topics ranging from functional programming, formal methods, and compiler and language design to cryptography and operating system construction, with talks by many figures from the programming language and formal methods communities. The talks are open and free. An RSVP is not required, but feel free to contact the organizer with questions and comments.


Friday, January 29, 2010

Tech Talk: An Introduction to the Maude Formal Tool Environment

The talk will be presented by Joe Hendrix on Tuesday, February 2nd, at 10:30am. (slides)

Abstract: There is a great deal of interest today in developing multi-purpose environments that combine declarative programming with specification languages and useful automated analysis techniques. In this talk, I will survey one such environment: the Maude system. I will start by describing how to program in Maude, with a focus on its support for rewriting modulo axioms. After some examples, I will also survey some of the analysis tools developed on top of the Maude system, including the model checker, the inductive theorem prover, and an extension of the core language for modeling systems that operate in real time. (A toy illustration of rewriting modulo axioms appears after the details below.)

Details:

  • Date: Tuesday, February 2nd, 2010
  • Time: 10:30am
  • Location: Galois Inc., 421 SW 6th Ave., Suite 300 (3rd floor of the Commonwealth building)
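Maude source is beyond the scope of a short snippet here, but to give a rough flavor of rewriting modulo axioms in Haskell terms, the toy sketch below (entirely mine, not Maude) canonicalizes terms modulo commutativity before matching a rewrite rule, so that the rule x + 0 -> x also fires on 0 + x.

    import Data.List (sort)

    -- Constructor order matters for the derived Ord instance:
    -- literals sort before variables, which sort before applications.
    data Term = Lit Int | Var String | App String [Term]
      deriving (Eq, Ord, Show)

    commutative :: String -> Bool
    commutative f = f `elem` ["+", "*"]

    -- Canonicalize a term modulo commutativity by sorting the
    -- arguments of commutative operators.
    canon :: Term -> Term
    canon (App f ts)
      | commutative f = App f (sort (map canon ts))
      | otherwise     = App f (map canon ts)
    canon t = t

    -- One rewrite rule, x + 0 -> x, applied modulo commutativity:
    -- step (App "+" [Var "x", Lit 0]) and step (App "+" [Lit 0, Var "x"])
    -- both reduce to Var "x".
    step :: Term -> Term
    step t = case canon t of
      App "+" [Lit 0, x] -> x
      t'                 -> t'

Maude builds this kind of matching modulo associativity, commutativity, and identity directly into its rewrite engine, which is part of what makes its specifications so concise.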

Galois has been holding weekly technical seminars for several years on topics ranging from functional programming, formal methods, and compiler and language design to cryptography and operating system construction, with talks by many figures from the programming language and formal methods communities. The talks are open and free. An RSVP is not required, but feel free to contact the organizer with questions and comments.


Thursday, January 21, 2010

POPL 2010, Day 1

Here are some talks that caught my attention on the opening day of POPL 2010.

Reconfigurable Asynchronous Logic Automata, Neil Gershenfeld

In this invited talk, Neil proposed that we throw out the sequential model of computation from Turing and von Neumann, and re-imagine computing from the ground up, more along the lines of physics as it is manifested in the world. This might be the way forward given the huge power demands that will arise from traditional architectures as they become more and more powerful. Rather than imposing a top-down structure on the machine, Neil proposed we consider systems built up from cellular automata. In ALA each cell is able to do a basic logic operation or, in RALA, reconfigure other cells. Built around 2D grids, and computing in parallel without any global clock, the automata are able to compute many basic functions in linear time, including multiplication and sorting (of course, they also take a linear number of cells to do so).

What I liked about this talk is that it called on me to think more broadly about the nature of computation. I don't think I buy the line that how we do computing today is wrong. But I do believe that computing and information theory are going to become more and more important in the development of many other subjects, including physics and biology. I also believe that we will need to harness many more structures to do computing rather than simply relying on gates burned in silicon. Coincidentally, at lunch today Luca Cardelli was telling me about his explorations in computing with DNA. Fascinating stuff!

At the lowest levels, the primitive components of computation reflect the computational substrate, so they are very different from one instantiation to another. As primitive computational elements are combined, however, the different descriptions become more abstract, and start to look more and more alike. The same mathematical operations may arise in each form, for example. That makes me wonder what the space of truly computationally-neutral specifications looks like: how might we program in such a way that we really don't have a preference for what substrate we map the computation onto? We already consider FPGAs, GPUs, and CPUs as standard targets for languages like Cryptol. Let's conceptually add DNA and RALA to the list, and see if that would have us change anything.

A Simple, Verified Validator for Software Pipelining, Jean-Baptiste Tristan & Xavier Leroy

Software pipelining is a compiler transformation that reorders loop code to make more efficient use of the CPU and memory accesses. Code from many different iterations of the loop might be drawn together and overlapped to execute out of order, and perhaps in parallel. The question is how to verify that the pipelining has been done correctly.

The paper proposes using symbolic evaluation. It describes an equivalence principle which requires proving two separate commutativity properties. These may both be shown by symbolic evaluation of the instruction code, being careful to handle the small differences (such as different temporary variables in the two paths). Overall the process is simple enough that it could be a standard component of any compiler, even if the compiler is non-verifying overall.
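To make the symbolic-evaluation idea concrete, here is a toy Haskell sketch (my illustration, not the paper's validator): evaluate two straight-line instruction sequences to symbolic expressions, then compare the expressions computed for the observable variables.

    import qualified Data.Map as M

    -- Symbolic values: an input variable, or an operator applied
    -- to symbolic arguments.
    data Sym = In String | Op String [Sym]
      deriving (Eq, Show)

    -- dst := op(args), e.g. Assign "t1" "add" ["x", "y"]
    data Instr = Assign String String [String]

    type State = M.Map String Sym

    evalInstr :: State -> Instr -> State
    evalInstr st (Assign dst op args) =
      M.insert dst (Op op (map look args)) st
      where look v = M.findWithDefault (In v) v st

    -- Two sequences are (naively) equivalent if symbolic evaluation
    -- yields identical expressions for every observable variable.
    equivalent :: [Instr] -> [Instr] -> [String] -> Bool
    equivalent p q outs = all same outs
      where
        run    = foldl evalInstr M.empty
        s1     = run p
        s2     = run q
        same v = M.findWithDefault (In v) v s1
              == M.findWithDefault (In v) v s2

Syntactic equality as used here is too strict for real code: accommodating renamed temporaries is exactly the kind of "small difference" the paper's validator has to handle.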
A Verified Compiler for an Impure Functional Language, Adam Chlipala

Handling object-level variables in proofs is a real pain, one that is hard to eliminate. This paper has an interesting way of dealing with them. It describes a Coq verification of a compiler for untyped Mini-ML with references and exceptions, compiling to an idealized assembly code.

Now, using explicit object variables leads to lots of lemmas about substitution. On the other hand, full higher-order abstract syntax (HOAS) would lead quickly to non-termination, and hence unsoundness. So instead, the paper introduces a closure semantics in which variables are just numbers pointing into a heap. Many operations that would reorder substitutions (and hence require big proofs) don't affect the closure heap, so the proofs go through easily.

The paper also uses a high degree of proof automation: it has a generic proof script that looks at the hypotheses and goals, and decides what approaches to try. The proof script is a bit slow, because the evaluation mechanism of Coq wasn't designed for this particular use (but see my blog post from PADL about accelerating normalization). However, because the proof is highly automatic, it is not hard to evolve the compiler and proof simultaneously. In fact, the compiler and proof were developed incrementally in exactly this way.

Verified Just-in-Time Compiler on x86, Magnus Myreen

JIT compiling has become standard for accelerating web pages, amongst other things. Good JIT compilers can take into account the input to the program as well as the program itself. But it is hard to get them right, as witnessed apparently by a recent Firefox bug.

An aspect that makes verification tricky is that code and data are mixed: a JIT compiler essentially produces self-modifying code. The paper provides a programming logic for self-modifying code by adapting Hoare triples so that there is both "before" and "after" code. Some practical issues also need to be handled: for example, the frame property has subtleties regarding the instruction cache, and code updates can sometimes happen too recently to show up in the current instruction stream. The x86 documentation is a little vague about how soon code modifications can be relied on to be present.

The end result is a verified JIT compiler from a toy input byte-code language, but mapping to a realistic semantics for x86. The compiler produces code quality comparable to C for a GCD example.
