Sunday, February 15, 2009

The Role of the Executable Design

Stu Stern raised the question: "What would the executing design itself provide?" That's a big question, and I wanted to think about it in a longer context than "comments".

One role of an executing design is to verify the design.  Recalling the original motivating idea: an executable design which actually does what you say it does validates the design--it gives you a way to be sure that what you're handing to coders is actually going to be useful to them, and is going to work as envisioned.  I think this is hugely important--even if you can't ever use the executing design in the finished product, you get strong assurance that you're not going to start building something which won't work. Remember that mistakes in design cost a lot less to correct than mistakes in code--so spending a little on tools and time in design is quite likely to save money downstream--in coding, deployment, and operations.

In the design verification role, execution tools must then carry all kinds of simulation capabilities. In complex problem domains, they must be able to handle very complex class definitions and interactions.  In large systems, they must be able to simulate network latencies, distributed transaction failures, large loads, and similar system events. In short, an executable design suite would need a different set of tools from those which might use the executing design as a framework for a finished product.
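To make that concrete, here is a minimal sketch of the kind of simulation harness such a tool might provide: a wrapper that injects latency and random link failures around calls to a design-level service stub. All of the names (`SimulatedLink`, `OrderService`) are my own illustrations, not any real tool's API.

```python
import random
import time

class SimulatedLink:
    """Hypothetical harness: injects latency and failures around a
    design-level service call, so the design can be exercised under
    the kinds of conditions a deployed system would face."""
    def __init__(self, service, latency_s=0.0, failure_rate=0.0, seed=None):
        self.service = service
        self.latency_s = latency_s
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seeded for repeatable runs

    def call(self, method, *args):
        time.sleep(self.latency_s)               # simulated network latency
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("simulated link failure")
        return getattr(self.service, method)(*args)

class OrderService:  # a design-level stub, not an implementation
    def place(self, item):
        return "accepted: " + item

# Exercise the design under a 50% failure rate and record outcomes.
link = SimulatedLink(OrderService(), failure_rate=0.5, seed=1)
results = []
for _ in range(10):
    try:
        results.append(link.call("place", "widget"))
    except ConnectionError:
        results.append("failed")
```

Running the loop shows the design handling a mix of successes and failures, which is exactly the sort of feedback you'd want before any production code exists.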

A second role is to constrain and inform the design.  Almost everywhere I work, designs are assembled in linear fashion as design documents (there's a good article, somewhat tangential to this discussion, in the January 2009 issue of Communications of the ACM, "The Ontology of Paper"; the author argues that paper design documents are an idea whose time has passed).  Design fragments expressed in UML are exported from the design tool as images and pasted into the document.  The whole thing is handed off to implementers for coding. These folks proceed to translate those diagrams and associated text into classes and subsystems and packaging.  When the design changes (as it inevitably must, when requirements become better understood and original design approaches don't pan out as expected), the loop is seldom closed by updating the design.  When the application moves to maintenance, the coupling between requirements and implementation quickly becomes hard or impossible to discern. Maintenance costs much more than it should, as new designers and developers spend far more time than necessary trying to understand what's there so they can update it intelligently.  The problem is exacerbated in agile projects, which are, effectively, in maintenance from the first day a new programmer joins the development team or a key one leaves.

If the design itself formed a key part of the finished deliverable--its skeleton--then the design would always be in sync with the deliverable, and the extra step of translating what has already been built in design space into code space would be skipped.  What a boon for developers, designers, and maintainers!
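One way to picture the skeleton idea: the design tool emits an abstract class whose structure and sequencing come from design space, and developers fill in only the concrete behavior. This is my own sketch with hypothetical names, not output from any actual tool.

```python
from abc import ABC, abstractmethod

class PaymentDesign(ABC):
    """Emitted from design space (hypothetically): the method
    sequencing is part of the design and stays with it."""
    def process(self, amount):
        self.validate(amount)   # ordering fixed by the design
        return self.charge(amount)

    @abstractmethod
    def validate(self, amount): ...

    @abstractmethod
    def charge(self, amount): ...

class CardPayment(PaymentDesign):
    """Written in code space: developers supply the bodies, not the shape."""
    def validate(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")

    def charge(self, amount):
        return "charged " + str(amount)

print(CardPayment().process(25))  # → charged 25
```

Because the implementation literally extends the design artifact, there is no translation step to drift out of sync.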

A third role might be to constrain the implementation--to complain if the implementation diverges too far from the design.  I'm not sure how this would work, though I have some ideas, which I'll flesh out in future posts.
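As a rough first idea of how such a complaint mechanism might look, a tool could compare an implementation class's public operations against the operations the design declares and report the drift in both directions. Everything here (`DESIGN_OPERATIONS`, `design_drift`) is a hypothetical illustration, not one of the ideas I plan to flesh out.

```python
import inspect

# Operations declared in design space (illustrative assumption).
DESIGN_OPERATIONS = {"open_account", "deposit", "withdraw"}

class Account:
    """The implementation under check."""
    def open_account(self): pass
    def deposit(self, amount): pass
    def transfer(self, other, amount): pass  # not in the design

def design_drift(cls, declared):
    """Report operations the design declares but the class lacks,
    and public methods the class grew that the design never declared."""
    public = {name for name, _ in inspect.getmembers(cls, inspect.isfunction)
              if not name.startswith("_")}
    return {"missing": sorted(declared - public),
            "undeclared": sorted(public - declared)}

print(design_drift(Account, DESIGN_OPERATIONS))
# → {'missing': ['withdraw'], 'undeclared': ['transfer']}
```

A real tool would need to look far deeper than method names, of course; this only shows the shape of the check.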


  1. "The design itself formed a key part of the finished deliverable--its skeleton--then the design would always be in sync with the deliverable."

    In earlier posts you talked about developers directly coding implementations of the design. I'm assuming that means they don't go through a design tool to produce the code. Does this imply that the design can be reversed from manual code as well? What will keep the design from being obscured by the kind of detail that starts showing up then? I can see an argument that if things are looking like a mess in the reversed design it's a good warning that the code is getting too tightly coupled, at least in the case of dependencies between objects or modules, but there are other kinds of necessary complexity. An example would be all of the methods and attributes that developers add to a class that aren't really relevant to the high level design, but are needed to make the class really do its job.

    I can see where an executable design approach can work in a prototype or new development, but I would like to look at some concrete examples of a system two or three years into active maintenance and extensions and see how the idea pans out. I know you've been working on this kind of thing for a while; do you have some examples of the kinds of things you've seen and how this approach would deal with them?

  2. I shall respond Wednesday at noon. ;) However-- two immediate thoughts: 1) Design is still an intellectual activity; tools can't change that, so reverse engineering a design still has to be done by picking through the reverse-engineered stuff to "expose" the design, and 2) inserting a tool into an existing code base is refactoring.