Friday, September 3, 2010

New paper about Worlds

Here's the link:

And here's what's new compared to Chapter 4 of my dissertation:

  • a better semantics for the commit operation — it now includes a serializability check that makes programming with worlds much safer/saner,
  • a new section that describes a more efficient, Squeak-based implementation, and
  • a neat case study that shows that our new implementation is fast enough to do interesting things with worlds.
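For readers who haven't seen worlds before, here is a rough JavaScript sketch of the idea (my own simplification for illustration; the names sprout/commit come from the paper, but this is not the implementation described there). A sprouted world captures writes locally, lookup delegates to ancestor worlds, and commit performs a serializability check: every value the child world read must still be current in the parent, otherwise the commit is refused.

```javascript
// Minimal illustrative sketch of worlds-style scoped side effects.
// This is an invented simplification, not the paper's implementation.

function World(parent) {
  this.parent = parent || null;
  this.writes = {};  // key -> value written in this world
  this.reads  = {};  // key -> value observed at first read
}

World.prototype.sprout = function () { return new World(this); };

// Property lookup: check this world first, then delegate to ancestors.
World.prototype.lookup = function (key) {
  for (var w = this; w !== null; w = w.parent)
    if (key in w.writes) return w.writes[key];
  throw new Error("unbound: " + key);
};

World.prototype.read = function (key) {
  var value = this.lookup(key);
  if (!(key in this.reads)) this.reads[key] = value;
  return value;
};

World.prototype.write = function (key, value) { this.writes[key] = value; };

// Commit with a serializability check: every value this world read must
// still be current in the parent, otherwise the commit is refused.
World.prototype.commit = function () {
  for (var key in this.reads)
    if (this.parent.lookup(key) !== this.reads[key])
      throw new Error("commit failed: " + key + " changed in parent");
  for (var k in this.writes) this.parent.writes[k] = this.writes[k];
  this.writes = {};
  this.reads = {};
};
```

The check is what makes sibling worlds safe: if one sprouted world commits a change to a variable that another sprouted world has already read, the second world's commit fails instead of silently clobbering state.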
Thursday, January 8, 2009

Dr. OMeta

On December 23rd, 2008, I filed my Ph.D. dissertation, "Experimenting with Programming Languages"!

Use of Prolog for developing a new programming language

Some time ago I came across a really interesting paper, "Use of Prolog for developing a new programming language", by Joe Armstrong et al. Definitely worth a read.

It tells the story of a group of crazy Swedes who decided to use Prolog to build a prototype of this language that they were working on. They used this prototype to do lots of experiments, and because it was nice and tiny, they were able to rapidly evolve the language and its implementation. Rinse and repeat. (It was only about three years later that they finally had to do a "real" implementation.)

It's a great story about how great life can be when you ignore conventional wisdom and avoid premature optimization at all costs... BTW, the language was, of course, Erlang. :)

Wednesday, September 24, 2008

The Omnidebugger

We intend to use a number of different programming languages in the STEPS project. Even right now, if we only look under the hood (in what we call the "engine room"), we're already using three: Pepsi, Coke, and an OMeta-like language for parsing. Coke plays a special role among these languages, since the semantics of the other languages are defined via translation into Coke.

One thing programmers—especially Smalltalkers—really care about is debugging. Coke has recently started to get some debugging capabilities, and we can expect that these will soon be on par with (hopefully even better than) Smalltalk's. This is definitely a good thing, but we can't stop there; the other languages also need support for debugging, and making this work in under 20K LOC is not a trivial task.

Consider a JavaScript implementation on top of Coke, for instance. Our translation might map a single JavaScript statement to a group of three or four Coke expressions that must be evaluated in sequence. So if we use the Coke debugger on the code generated for a JavaScript program, its notion of “single-stepping” won’t make sense at the JavaScript source level. Similarly, inspecting the temporary variables on the stack won’t work unless JavaScript’s temps are represented directly as Coke temps, which may not be the case. Things are even worse for languages whose semantics are significantly different from Coke’s. For example, a debugger for Prolog should support all kinds of features (e.g., unification) that don’t really make sense in the Coke debugger.

The “conventional” way around these problems would be to implement a separate debugger for each language, which clearly isn’t good enough for STEPS. But what if we went with a kind of pluggable debugger architecture that allows each “Language X”-to-Coke translator to associate inspection and debugging functionality, along with all kinds of useful meta-data, with the Coke parse trees it generates? (The Coke compiler would of course have to maintain these associations when it converts the parse trees to code.) This would enable a single debugger implementation, including its GUI, to customize itself to the language that is being debugged. It would also enable programmers to debug the same piece of code at different levels of abstraction (e.g., at the JavaScript level, hiding all “scaffolding”, or at the Coke level, in gory detail), which would be extremely valuable to language implementers.
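As a toy illustration of the kind of association I have in mind (invented names and a made-up translation, not actual STEPS/Coke code): a translator could tag each low-level expression it generates with the source-level statement that produced it, and a single generic "step over" could then work for any source language using nothing but that metadata.

```javascript
// Toy sketch with invented names, not the actual STEPS/Coke code.
// A translator tags each low-level expression it generates with the
// source-level statement that produced it.
function annotate(generatedExprs, sourceStmt) {
  return generatedExprs.map(function (e) {
    return { expr: e, source: sourceStmt };
  });
}

// A language-agnostic "step over": advance through the generated code
// until the associated source statement changes. The debugger never
// needs to know which high-level language it is stepping through.
function stepOver(trace, pc) {
  var stmt = trace[pc].source;
  while (pc < trace.length && trace[pc].source === stmt) pc++;
  return pc;
}

// e.g., one JavaScript statement that expanded to three low-level steps,
// followed by one that expanded to a single step:
var trace = annotate(["t1 := a", "t2 := b", "t1 + t2"], "var c = a + b;")
  .concat(annotate(["print t1"], "alert(c);"));
```

Stepping at the Coke level would walk the trace one entry at a time; stepping at the JavaScript level skips ahead until the source annotation changes, which is exactly the "different levels of abstraction" behavior described above.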

Scheme programmers often implement little DSLs using macros, so I hope that we can get some inspiration from Dave Herman's debugging library for PLT Scheme, which seems very interesting.

Thursday, September 11, 2008

Worlds: Controlling the Scope of Side Effects

Here is a (very informal, not conference-style) paper about worlds, which is something that I've been working on lately. I'm pretty excited about this stuff... Please let me know if you have any comments, suggestions, etc.

Tap, tap, tap — is this thing on?

Update: Chapter 4 of my dissertation is an improved version of this paper. It contains more examples, a formal semantics for property/field lookup in the presence of worlds, and a proper Related Work section.

Saturday, September 6, 2008

Modified V8 Shell w/ translateCode

This week Dan Ingalls came down to visit us at VPRI, and we got to chat about V8 for a little bit. We're both really excited about it. "It's like they just gave you a new horse", he said. :)

So tonight I was looking at the source code of the V8 shell, and it turned out to be pretty easy to modify it to support my translateCode idea. In case you don't know what I'm talking about, there really isn't much to it: translateCode is just a user-defined function that's called (implicitly) by the shell/workspace. It takes as an argument the code that the user typed in, and returns the code that should run in its place. It's extremely useful for playing with source-to-source translators. Here's a link to my modified shell, if you're interested.
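To make this concrete, here's a toy translateCode definition (the hook itself is as described above; this particular translation is just something made up for illustration). It lets the shell accept "x := e" as an assignment by rewriting it into plain JavaScript before evaluation:

```javascript
// A toy translateCode: the shell calls this on whatever the user typed
// and evaluates the result instead. This one rewrites "x := e" into a
// JavaScript var declaration (an invented example, just to show the hook).
function translateCode(code) {
  return code.replace(/(\w+)\s*:=\s*/g, "var $1 = ");
}
```

The modified shell effectively evaluates translateCode(input) instead of input, so any source-to-source translator you can express as a JavaScript function becomes a new input syntax.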

The following transcript shows this modified shell in action. ometa-rhino.js is just a file that loads the OMeta/JS implementation and defines translateCode just like I do in the browser, i.e., so that both OMeta and JavaScript are accepted.

    ./v8 ometa-rhino.js --shell
    V8 version 0.3.0
    > ometa M { ones = (1 -> 2)* }
    [object Object]
    > M.matchAll([1, 1, 1, 1], "ones")
    [2, 2, 2, 2]

Now if only these guys at Google would hurry up and get Chrome to work on my Mac...