Friday, December 11, 2009
Introducing SEMAT
Software "engineering" as it is practiced where I usually have worked, seems to be 75% gut feel and 20% anecdotal experience, with the remaining 5% subject to real analysis. I am committed to bringing at least some of the discipline of field engineering (those parts I understand well, and can adapt) to the practice of producing software.
A group of the leading lights in methodology and architecture has come together to form SEMAT--an acronym of "Software Engineering Method and Theory"--with the goal to "refound software engineering based on a solid theory, proven principles, and best practices". It might sound like so much smoke, but consider some of the signatories:
Scott Ambler
Barry Boehm
Bill Curtis
James Odell
Ivar Jacobson
Erich Gamma
The list is quite long, and luminous.
I suggest you go have a look: http://www.semat.org. I'm looking forward to finding out what Scott Ambler and Ivar Jacobson can agree on.
Wednesday, October 7, 2009
BigDecimal
Sunday, October 4, 2009
Design Constraints
I started with a working definition John Prentice gave me--heavily paraphrased, from memory: architecture is "enough design that it's clear the system requirements can be met", and design is "enough design that the path code will follow is clear". Bryan Helm suggested that architecture should also constrain design so that "good design choices will be the natural ones; bad design choices will tend to be excluded", the theory being that architects tend to be senior designers as well.
There's a lot of good discussion of "what it is" on the web; I've provided some links at the end for articles I found particularly illuminating and useful.
Within the context of executable design, the group of us agreed we'd all been dealt horrible or irrelevant architectures and designs. One hallmark of a good architecture--and of a good design--is that you could rough out code that followed the architecture, and it would execute as expected. Similarly, you could implement the design exactly, and it would execute as expected. In both cases, obviously, the system would be grossly incomplete--but the specification at the design level would be complete enough that all the functions required of the system were stubbed out and ready for detailed implementation.
I'm not sure the distinction between architecture and design is important from the point of view of a set of executable design tools. I think architecture should constrain design, and design should constrain construction. Given the way code is actually constructed, the architectural description and the design description must be maintained as the software is developed. What better way to do that than making key portions of these descriptions the armatures upon which code is developed? Code generation isn't the way to go, because you have to keep round-tripping, and you lose the abstraction provided in the design level.
The only way I can see to allow the design to constrain the construction is to use the design itself as the runtime environment for the final code. In the Java and .NET worlds, this means custom class loaders sitting between the developed code and the design, verifying conformance to the design and allowing the design to execute directly if code hasn't been provided to implement it. In this way, you can actually test the design as it is developed, in the same way you test code as it is developed: by running it.
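To make that concrete, here is a minimal Java sketch of the conformance-checking half of such a loader. The DesignModel here is a toy stand-in (a map of class names to the method names the design requires); a real tool would read UML/XMI, and executing unimplemented design elements directly would additionally require generating bytecode stubs, which I've omitted.

import java.lang.reflect.Method;
import java.util.Map;
import java.util.Set;

// Toy stand-in for a design model: class name -> operations the design requires.
// A real executable-design tool would load this from UML/XMI, not a map.
class DesignModel {
    private final Map<String, Set<String>> required;

    DesignModel(Map<String, Set<String>> required) {
        this.required = required;
    }

    Set<String> operationsFor(String className) {
        return required.get(className);
    }
}

// Sits between the developed code and the design, refusing to load
// any class that has drifted from its design description.
class DesignCheckingClassLoader extends ClassLoader {
    private final DesignModel design;

    DesignCheckingClassLoader(ClassLoader parent, DesignModel design) {
        super(parent);
        this.design = design;
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        Class<?> cls = super.loadClass(name); // normal delegation does the loading
        Set<String> operations = design.operationsFor(name);
        if (operations != null) {
            verify(cls, operations);
        }
        return cls;
    }

    private void verify(Class<?> cls, Set<String> operations) {
        for (String op : operations) {
            boolean found = false;
            for (Method m : cls.getDeclaredMethods()) {
                if (m.getName().equals(op)) {
                    found = true;
                    break;
                }
            }
            if (!found) {
                // Complain loudly: the code no longer conforms to the design.
                throw new IllegalStateException(cls.getName()
                        + " is missing " + op + "(), which the design requires");
            }
        }
    }
}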
There are many good articles; here's a jumping off point:
http://www.ibm.com/developerworks/rational/library/feb06/eeles/
http://www.bredemeyer.com/whatis.htm
http://www.itarchitect.co.uk/articles/display.asp?id=358
Sunday, September 20, 2009
Agile, but...
These practitioners have a set of expectations around both process and performance. The problem I have is a huge lack of comparative data. When a team is delivering software, it's the rare business that's willing to risk productivity for an experiment to answer questions like "if they can produce x per week under process y, does x decrease if we drop agile practice 'z'?".
When I joined my current project, we were being managed in a barely-recognizable devolvement from scrum. We stopped doing daily stand-up meetings about four months ago; we stopped doing status meetings every other day about eight weeks ago.
Is that negative stuff? I don't have any data either way, but I don't think so. In fact, I think we could argue that we are using exactly the agile things this team needs and can use effectively:
I delivered a formal design document about 2 weeks ago, just in time to start coding for this release cycle. We don't do much formal design, though; we mostly work from a domain model, process specifications, and FitNesse tests. Sometimes, though, you need words-in-a-row and UML diagrams.
We went into code freeze last Friday, limiting check-ins to bug fixes and test fixtures. The following Monday we received several new requirements. Since code freeze, there have been updates to something like 9 process specs. Most, if not all, of those changes will be in the release. We'll probably be in code freeze for two or three weeks, which is a lot in a six-week cycle. We're treating the new requirements as bugs, and to be fair, they're about the size you'd expect from a functionality bug.
We're delivering new functionality at maybe three times the rate of most projects I've been associated with, and at probably six times the rate of a typical project in this organization. It's pretty high quality stuff, too, in an environment with hideously complex business rules and very high cost for failure in implementing those rules. Frankly, I'm pretty happy about that.
Here's what we've got that works:
1. very senior, very fast developers. (Plug: if they're from Gorilla Logic, as I am, they couldn't be anything else, as that's a primary focus of Gorilla Logic's hiring practice.)
2. dead quiet when it's time to concentrate. I have a private office. I'm not physically in the same state as the users. In fact, I'm two time zones away. When it's time to crank out well-thought-out results, nothing can beat quiet. (For experimental data on this topic and other key contributors to developer productivity, read "Peopleware" by DeMarco and Lister.)
3. good access to the team when it's time to talk. We have web conferencing, IM, telephones, and desktop video conferencing when we need it.
4. requirements people who aren't bound by the need to generate a document to get the requirements to us, who are highly accessible, and who know the domain quite well.
5. management who understands the importance of getting it right, and respects the professional opinions of the team.
The release will probably slip a week. When we're done, it'll be pretty much what they needed, rather than arbitrarily "what we had working on the release date". We might deliver faster with a different management style, but I wouldn't bet my next paycheck on that. If the problem domain was simple enough to allow for stable requirements, they might not need the software in the first place.
That's agile enough for me.
Wednesday, September 16, 2009
"Really Good Programmers"
"Really good ruby programmers can write code that is so clever that no one less talented than them has a hope of understanding it."
He really should know better.
Really good programmers can, indeed, write code so clever that no one less talented than themselves has any hope of understanding it--but they never do. That's the sort of thing really talented coders do before they become really good programmers. And then, in the code review, their peers tell them to take out the clever bit of code and make it maintainable, thereby vastly improving the code.
A decent piece of software has a lifetime measured in tens of years. During that time, it's likely to be looked at a dozen times for new features, bug fixes, and efficiency updates--not to mention the occasional port. If it took a day to write it initially, it'll probably spend many times that under someone's lens while it's in the maintenance phase. Taking 12 hours to write clear, comprehensible code instead of 8 hours to do something clever will save time in the long run. Say each of those dozen maintenance visits costs two hours on clear code and four on clever code: the four extra hours up front buy back twenty-four later. Is it always worth the extra time? No--but it is often enough that I try to do it every single time I code something.
Strange, clever code is fun to write. I wrote a little self-modifying routine once, both to prove I could do it and to solve a timing issue--I counted op-codes and found I could squeeze pattern recognition between bytes on a 19.2kbps serial connection by making the polling routine self-modifying. Nowadays I'd comment it like crazy, and advise that it be removed as soon as CPU speeds increased enough to allow it. Back then I wasn't a good enough programmer to know I needed to do that. I hadn't looked at enough of my own code, incomprehensible after 5 years of "rest", to know it was important.
So Jon: educate that Ruby programmer of yours. Someday he'll thank you for it. All your maintenance guys will thank you for it, and you won't have to wait for that.
Wednesday, September 9, 2009
architecture by accident
- architecture is deteriorating with time, and was never strong to begin with
- there is no architect or designer with project-wide authority
- nobody views this as a problem--including the development team
- time to add a feature is increasing
- quality issues continually surface as new features affect older ones
The introduction to Booch/Rumbaugh/Jacobson "The Unified Modeling Language User Guide" makes the case for design modeling thusly:
"If you want to build a dog house, you can pretty much start with a pile of lumber, some nails, and a few basic tools...
"If you want to build a house for your family, you can start with a pile of lumber, some nails, and a few basic tools, but it's going to take you a lot longer, and your family will certainly be more demanding than the dog. ...
"If you want to build a high-rise office building, it would be infinitely stupid for you to start with a big pile of lumber, some nails, and a few basic tools. ... You will want to do extensive planning, because the cost of failure is high. ...
"Curiously, a lot of software development organizations start out by wanting to build high rises but approach the problem as if they were knocking out a dog house."
(The full text is entertaining; go forth and read the whole thing.)
As a developer, I use a couple of approaches to at least try to address this:
- When adding new features myself, I try to talk with everyone whose code touches mine. Often there is a conceptual design in the mind of each area's designer--there just is no globally available expression of it.
- I document my own designs with diagrams and written discussion, which I pass around for comment. I try to include information about the globally shared objects I touch. This often has the effect of flushing out different views of the responsibilities and limitations of key objects.
- I transfer design information into javadoc, so the next developer can see it as (s)he codes; the sketch below shows the idea.
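For example (a hypothetical class, with the design notes invented for illustration):

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

/**
 * Calculates tax-year boundaries.
 *
 * Design notes: this class is the single authority for tax-year
 * arithmetic in the system (see the "Tax Year" package of the domain
 * model). It deliberately holds no state, because the process specs
 * allow the rules to vary by program year.
 */
public class TaxYearCalculator {
    /** Returns the tax year containing the given date (years run July 1 to June 30). */
    public int taxYearOf(Date date) {
        Calendar cal = new GregorianCalendar();
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        return cal.get(Calendar.MONTH) > Calendar.JUNE ? year + 1 : year;
    }
}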
Design is key to success in anything but the smallest projects. I use "success" very carefully here; some more recommended reading:
http://thedailywtf.com/Articles/What_Could_Possibly_Be_Worse_Than_Failure_0x3f_.aspx
"When an enterprise software application is deployed to production, it’s almost universally considered to be a success. In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster — and an outright liability.
"Yet, had the VCF system gone live, almost everyone involved — from the business analysts to the programmers to the project managers — would have called their work a success. Even after the code devolved into an unsustainable mess, many would still refuse to admit that they had failed. The result is a cycle of failure, as dev shops refuse to recognize and correct issues that plague their projects."
Wednesday, July 15, 2009
FlexMonkey: test automation for Adobe Flex
A lot of what we do at Gorilla Logic is Flex atop Java. As a result, we had to find a way to regression test Flex user interfaces; it's not really practical for an agile team to survive without automated regression testing. Enter FlexMonkey. FlexMonkey was developed and offered to the community in open source as THE way to do regression testing for Flex, and having seen it in action, I have to say I'm really impressed. It's easy to get started with, easy to use, and easy to fully automate. The documentation is enterprise-grade, and best of all, it's free. It took me about half an hour to get it running and automated under ant, and I've never coded in Flex.
So go check it out: FlexMonkey 1.0 was released on Google Code (open source) yesterday. http://flexmonkey.gorillalogic.com/gl/stuff.flexmonkey.html.
Here's a bit more about Flex and what we do at Gorilla Logic: http://www.gorillalogic.com/what.development.services.flex.html.
Sunday, July 12, 2009
Conversations With QA Using FitNesse
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.List;

public Date date = null; // receives the date from the FitNesse web page

// As each tax year is calculated, its associated date is stored here,
// so we can refer to the list in subsequent methods.
private List<Date> dates = new ArrayList<Date>();

public String taxYear() {
    dates.add(date); // for other calculations, not this one
    Calendar cal = new GregorianCalendar();
    cal.setTime(date);
    int month = cal.get(Calendar.MONTH);
    int year = cal.get(Calendar.YEAR);
    // A date after June 30 falls in the tax year that ends next calendar
    // year; any other date is in the tax year ending this year.
    if (month > Calendar.JUNE)
        return String.valueOf(year + 1);
    else
        return String.valueOf(year);
}

// getFirstDayofTaxYear, getLastDayofTaxYear, and daysBetween are fixture
// helpers not shown here.
public String fromTaxYearStart() {
    if (date == null) return "";
    Date start = getFirstDayofTaxYear(date);
    int delta = daysBetween(start, date);
    int total = daysBetween(start, getLastDayofTaxYear(date));
    // divide() needs an explicit scale; without one, BigDecimal throws
    // ArithmeticException on non-terminating decimals.
    return delta + " (/" + total + " = "
            + new BigDecimal(delta)
                    .divide(new BigDecimal(total), 10, RoundingMode.HALF_UP)
                    .toPlainString() + ")";
}

public String fromPreviousInclusive() {
    if (dates.size() < 2 || date == null)
        return null;
    return String.valueOf(daysBetween(dates.get(dates.size() - 2), date));
}

public String fromFirstInclusive() {
    if (dates.isEmpty())
        return null;
    return String.valueOf(daysBetween(dates.get(0), date));
}
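Assuming these methods live in a ColumnFixture (I'll call it TaxYearFixture here; the real fixture name, package, and date format depend on your FitNesse setup), the wiki table QA works with might look like this:

|TaxYearFixture|
|date|tax year?|
|Jun 15, 2009|2009|
|Jul 15, 2009|2010|

The date cells set the public date field; each "?" column calls the matching method and compares the returned string against the cell.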
With these tools, it is now possible to have a conversation with QA in which we discuss the input values, rather than the validity of the calculations. It remains to be seen how effective they'll be, but I have hopes.
Wednesday, June 24, 2009
Pruning
Sunday, June 21, 2009
FitNesse for a Particular Purpose
Wednesday, June 17, 2009
Measuring the Unmeasurable
Sunday, June 14, 2009
What Needs Doing
Wednesday, May 6, 2009
Quality Developers
Sunday, April 19, 2009
Library Services
Wednesday, April 15, 2009
Managing the books
Friday, March 13, 2009
And now for something completely different
By gently pulling them apart, I got each battery off with one or two bits of joining strip still attached; gently pulling THAT off with a pair of pliers gave me six batteries and some scrap.
Sunday, March 1, 2009
What Executes?
Wednesday, February 25, 2009
Why Models Shouldn’t Be Programs
Sunday, February 22, 2009
More Exec Design tools
DataXtend
http://www.progress.com/dataxtend/index.ssp
I had the opportunity to run a 1-week in-house live test of DataXtend. We took a bit of design from a set of web services we had delivered using Apache Axis and a lot of back-end Java code, and gave a design spec and the original interface specs to the DataXtend team. They showed up in our office on Monday and we did a little work on reviewing their capabilities. On Tuesday, we started working on the project, which had taken our offshore development team 6 weeks to develop (admittedly, that's way longer than it should have taken, partly due to continuous changes in the requirements, and partly due to just poor development management). By late Tuesday afternoon, we had a working web service. I load-tested it, threw bad input data at it, and deployed it in several different environments, and it just worked.
We ended the 1-week demo in 3 days.
This tool is as slick as it gets. You import your interface design, expressed in XMI or as a schema or whatever (there are lots of options), you map it to your data sources, again specified as XMI or schema or whatever, and push the button. You get a working web service in a nicely packaged deployable file, with very little, if any, coding.
This isn't executable design--but it leverages the design very directly and very easily. You simply import the interface, and import the source services and/or data, and map one to the other. It's state-based implementation at its slickest. It's expensive, but it's a whole lot less expensive than a fleet of Java coders. We figured we could replace 6 developers and 2 leads with 1 lead and this tool. Nice.
http://www.mqsystems.com/MQS-Solutions.html#mqarchitect
I haven't actually had a chance to use this tool, because I no longer have access to a large MQ installation. I have, however, been tasked with architecture in a large, fast MQ environment, and boy, oh boy, do I wish I'd had a chance to play with it. I did download the demo and try it, and as far as I can tell without actually pushing 100k transactions/hour through it, it really works.
The basic idea is that you design your queue environment using some templates in Visio, and then you generate your whole system from that drawing. MQ is a pretty simple tool at the core, but there is a lot of connection and configuration data, and you have to get it right, or your nice fault-tolerant system fails badly, and worse, quietly. Fortunately for me, I had a couple of MQ wizards configuring my back end for me, but if you don't (and most of us don't), this may well be the right tool for a designer trying to keep track of a complex MQ system. Check it out.
If you have a tool I should look at, please drop me a line. I'm jdandrews on Twitter, and you can email me at jerry.andrews at gorillalogic dot com.
Wednesday, February 18, 2009
Abstraction
The problem with abstraction is that, without at least one working example, it's hard to tell if the abstraction makes sense. (Two good friends of mine are fond of saying "all abstraction is good, as long as it's your abstraction.") In new development, then, the ideas behind an executable design clearly have a place; they're a way to validate that the abstraction actually does what it says it does.
As Bill Glover pointed out in his comment on an earlier post, applications get complex and crufty with time. He asks (with regard to reverse-engineering an existing implementation): "What will keep the design from being obscured by the kind of detail that starts showing up then? ... An example would be all of the methods and attributes that developers add to a class that aren't really relevant to the high level design, but are needed to make the class really do its job."
What Bill's talking about, I think, is the very real and common case where you're trying to bring order to an existing application by describing its existing design. If, for example, you have a tool which can execute class diagrams (e.g. the GXE), how do you use it to serve as an "armature" for the rest of the implementation?
If you just want to verify that the design and the implementation match, you're not in executable design space--you're in static code analysis space. There are tools for that (e.g. http://www.ndepend.com/). If you want an executable design to be an armature for the application to be built on, then you're really refactoring. The trick here is to allow the tool to replace some part of the application's existing functionality without a complete rewrite. Here are two example behaviors a tool might implement which would allow it to operate in an existing application, replacing part of that application.
I once introduced a state machine into an application which had significant business logic embedded in its screen navigation subsystem. I rewrote the navigation for just one screen, pulling out the business logic into separate classes for each business rule, and describing the screen-to-screen navigation in a state diagram. The state machine responded to user events (e.g. button clicks), queried the new business logic classes to get guard results, and handled transitions within and off of the one screen. Clearly, this approach can be extended to other screens without disrupting the application as a whole (and in fact, the development team for that product is doing just that).
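Here's a minimal sketch of the shape that state machine took (hypothetical names; the real guards queried the extracted business-rule classes):

import java.util.EnumMap;
import java.util.Map;

// One extracted business rule, wrapped as a transition guard.
interface Guard {
    boolean permits();
}

enum Screen { ACCOUNT_SUMMARY, PAYMENT_ENTRY, CONFIRMATION }

enum UserEvent { NEXT, CANCEL }

class NavigationStateMachine {
    private static class Transition {
        final Screen target;
        final Guard guard;

        Transition(Screen target, Guard guard) {
            this.target = target;
            this.guard = guard;
        }
    }

    private final Map<Screen, Map<UserEvent, Transition>> table =
            new EnumMap<Screen, Map<UserEvent, Transition>>(Screen.class);
    private Screen current = Screen.ACCOUNT_SUMMARY;

    void allow(Screen from, UserEvent event, Screen to, Guard guard) {
        Map<UserEvent, Transition> row = table.get(from);
        if (row == null) {
            row = new EnumMap<UserEvent, Transition>(UserEvent.class);
            table.put(from, row);
        }
        row.put(event, new Transition(to, guard));
    }

    // Called by the UI on button clicks and other user events.
    Screen onEvent(UserEvent event) {
        Map<UserEvent, Transition> row = table.get(current);
        Transition t = (row == null) ? null : row.get(event);
        if (t != null && t.guard.permits()) {
            current = t.target; // guard passed: take the transition
        }
        return current; // unchanged if no transition exists or the guard refused
    }
}

The screen-to-screen diagram becomes data (the transition table), and the business logic stays in its own classes, which is exactly what made it extendable one screen at a time.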
I'm currently working on a project which is using an executable design tool (the Gorilla Execution Engine) to validate requirements. Classes representing domain concepts and relationships are executed in the GXE to see if the very complex calculations modeled by those classes give the "right results". In my ideal world, we'd then take that domain model, specialize it (that is, turn it into a design model), and it would enforce class behavior in the finished product. I might reverse engineer a dozen classes which participate in a given calculation, remove or hide the methods which aren't germane to that calculation, then add new methods from the domain model to flesh things out. I might remove irrelevant methods from the model entirely. The execution tool would provide a class loader which would (a build-time sketch follows this list):
- load the design model,
- on a class load request, search the model for a definition of the requested class,
- look for any coded implementation of the same class,
- compare the two, merging them if the implementation does not contradict the design model and complaining loudly if it does (this step could actually be done at build time rather than at runtime), then
- load the merged class for execution.
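Here's a rough build-time sketch of the compare-and-complain steps. The model is a toy map with made-up class names; real merging (the fourth step) would need bytecode generation, which I've omitted:

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Build-time conformance check over a (toy) design model.
public class DesignConformanceCheck {
    public static void main(String[] args) {
        // Hypothetical model: class name -> operations the design declares.
        Map<String, Set<String>> model = new LinkedHashMap<String, Set<String>>();
        model.put("com.example.billing.Invoice",
                new HashSet<String>(Arrays.asList("total", "applyCredit")));

        boolean conforms = true;
        for (Map.Entry<String, Set<String>> entry : model.entrySet()) {
            try {
                Class<?> coded = Class.forName(entry.getKey());
                Set<String> codedOps = new HashSet<String>();
                for (Method m : coded.getDeclaredMethods()) {
                    codedOps.add(m.getName());
                }
                for (String op : entry.getValue()) {
                    if (!codedOps.contains(op)) {
                        System.err.println(coded.getName() + " is missing " + op
                                + "() declared in the design model");
                        conforms = false;
                    }
                }
            } catch (ClassNotFoundException e) {
                // No implementation yet: at runtime the engine would execute
                // the design's stub directly instead of failing.
                System.out.println(entry.getKey()
                        + ": no code yet; the design stub would run");
            }
        }
        if (!conforms) {
            System.exit(1); // fail the build when code contradicts the design
        }
    }
}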
An approach like this would allow the designer and developer (sometimes the same person, right?) to work together to decide what portions of the existing application could be replaced with an executing design. This would address a problem I see a lot, where the code can't actually be traced back to any requirements or higher-level design. By forcing the code to match the design in order for it to run, the design gets updated because it's the best way to get the build to work.
Of course, developers could always decide they don't want to execute the design, but they always have that power. I'm assuming that designers and developers are working together to build something; if they're not, the organization has issues no tool can address.
Sunday, February 15, 2009
The Role of the Executable Design
Wednesday, February 11, 2009
Disclaimer, and Some Links
- Developers can extend the model in a natural way. For example, if the design specifies a method on a class, the developer just creates the appropriate class file and codes the method. The execution engine melds the class in the model and the class as coded according to appropriate rules. (The sketch after this list shows the flavor.)
- The model constrains the design.
- The engine supports a widely-understood modeling language.
- Code generation is an invisible part of using the engine, if it must be done at all.
- The execution engine should be "invisible"--that is, it shouldn't take a whole lot of scaffolding in every modeled object to use it. Both the GXE and most state machines I've used are great examples of this; you start up the execution engine, and it "just runs" your model.
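To show the flavor of that first point, here is the developer-facing experience faked in plain Java. The real engine melds the modeled class with the coded one at load time; here the "model" is just an abstract class, which is a big simplification, and the names are invented:

// Stand-in for a class as it appears in the design model: the operation is
// declared, and the engine can "run" it as a stub until code arrives.
abstract class InvoiceDesign {
    public double total() {
        throw new UnsupportedOperationException(
                "total() is designed but not yet implemented");
    }
}

// The developer's entire contribution: code for the method the design declares.
class Invoice extends InvoiceDesign {
    private final double[] lineItems;

    Invoice(double... lineItems) {
        this.lineItems = lineItems;
    }

    @Override
    public double total() {
        double sum = 0;
        for (double item : lineItems) {
            sum += item;
        }
        return sum;
    }
}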
Sunday, February 8, 2009
Executable Design
When I first “noticed” architecture, I found it pretty much irrelevant. At that time, designers generally talked rather than drew, or they drew flowcharts. Design documents were either 1000-page tomes or they didn’t exist at all. The architect with whom I worked generated block diagrams at such a high level they weren’t useful to me at all, or worse, they were misleading, because the actual system didn’t match the architecture in anything but name.
At some point, I finally started saying “architecture must be executable.” At the time, I meant that a designer/developer should be able to take an architectural description, code to it, and though it would be a skeleton, it should execute. It should be possible to mock all the key system functions using nothing but the architectural description.
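For instance, an architectural description listing the key system functions could be coded as a skeleton like this (hypothetical subsystem names; everything is mocked, but the main flow runs end to end):

// Key system functions from a (hypothetical) architectural description,
// mocked just enough to execute end to end.
interface OrderIntake {
    String accept(String order);
}

interface Pricing {
    double price(String orderId);
}

interface Fulfillment {
    void ship(String orderId);
}

public class ArchitectureSkeleton {
    public static void main(String[] args) {
        OrderIntake intake = new OrderIntake() {
            public String accept(String order) {
                System.out.println("intake: accepted " + order);
                return "order-1"; // mock: canned id
            }
        };
        Pricing pricing = new Pricing() {
            public double price(String orderId) {
                System.out.println("pricing: priced " + orderId);
                return 42.0; // mock: canned price
            }
        };
        Fulfillment fulfillment = new Fulfillment() {
            public void ship(String orderId) {
                System.out.println("fulfillment: shipped " + orderId);
            }
        };

        // The architecture's main flow, executing as a skeleton.
        String id = intake.accept("ten widgets");
        pricing.price(id);
        fulfillment.ship(id);
    }
}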
I feel the same way about design: if you can’t use the design as an actual blueprint, it isn’t a design--it’s a concept. Concepts are good, but they must be refined into architectures and designs. That’s what designers and architects get paid to do. It’s hard work. It’s boring sometimes. And it’s very, very difficult to do well, which is why there are so few good designers, and even fewer architects.
I'm trained as a nuclear engineer, and I worked in power plants as such for 15 years before moving to software development full-time. Power engineers would laugh at what usually passes for a software design. When you design a building, you aren’t done until you can predict the construction cost and schedule to within 5%. Same thing goes for an engine, or a CD player, or chemical dye. In other industries, you’re working with real materials, which cost a lot, and whole teams of people, who also cost a lot. You can’t do most of the work yourself, so you must explain exactly what you want done.
The same thing is true of large software projects (agile ones included--I'll get to that in future postings).
As software developers, designers, and architects, we have a huge advantage over engineers: our cost of implementation is very low. As a result, sometimes it really is cheaper just to throw code at the wall and see what sticks. In medium and large projects, though, it is seldom the case that this approach will work more than occasionally. As a result, we build design documents. Designers and architects make UML descriptions (hey! we have our own design language!) and associated text, which we then hand to developers to turn into code.
Which is incredibly wasteful (and now I’m getting to the point).
UML is a succinct, precise way to specify a design. Why do we have to translate it into code to get it to execute? Wouldn’t it be a great test of a design if you could run it before you lobbed it at developers to flesh out? I’m not talking about code generation here; I’m talking about directly executing the design components to see how they work together.
The model-driven architecture (MDA) folks are running down that idea. I’m not sure I like where they’re going with it, though. I do not think it’s a good idea to turn designs into complete applications. Designs are abstractions, and the whole point of that abstraction, and the ability to separate architecture, design, and development, is to allow people to think at different levels of detail, so they have the capacity to grok the whole application. Divide and conquer. I do think, though, that recoding the design from UML into some other language is wasteful. Wouldn’t it be a lot more productive to give developers an executing design and let them flesh out the details in their language of choice?
The rise of virtual machines--Java and .NET specifically, but I’m sure there are others--makes me think we can build executable designs, and code to them rather than recoding them. I’m currently working on “how”. I’d like to know what you think of the idea.