My personal favorite, and an instructive example, is a nine-month, 6-man project to replace the public client of an n-tier, mission-critical, customer-facing application at a very large, cash-squeezed public company. I was the architect for this application for several years, and I made a number of specific recommendations as I was leaving, including one that the client be refactored so that the business logic present in the client (for good architectural reasons) was separated from the display logic. We delivered the first piece of that refactoring--a display controller separate from the display code itself--and recommended that it be used for all screens, not just the ones for which we piloted it. For no reason I could understand, the architect who replaced me decided to completely replace the client--including hundreds of working screens, huge chunks of interface logic, state management code, encryption and masking code for credit-card processing, terminal interface code, etc. Knowing his management team, I'm sure the work was justified in bafflegab. The project is just wrapping up, at a cost and schedule of 5 and 3 times (respectively) the original refactoring estimate. What problem were they trying to solve that could justify that kind of expense?
The original refactoring was proposed to address the fact that the client had become very rigid and spaghetti-bound. It was hard to understand and harder to modify, so even small changes (and there were many of those in the queue) took weeks to make and test. All the senior developers understood that the source of the problem wasn't the screen display code, or the hardware or server interfaces--it was the way business logic was inlined into the screen transition code. Once that logic was moved out into business logic classes and properly tied into a separate screen controller, most of the problems in the client would be resolved. Such a project was difficult but nicely bounded: we estimated that a 2-man team of our best and most experienced developers could complete it in 3 months. Instead, it took a team of 5 working for 9 months to rewrite the entire client. Now they have a nice new front end written in Adobe Flex--and since all the Flex coding was subcontracted, nobody on the development team can maintain the client UI. I will make a prediction: the new client will be more expensive to maintain than the old one.
I hope I'm wrong. Fortunately, I don't work there anymore.
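For what it's worth, the shape of the refactoring we proposed looks something like the sketch below. This is a minimal, hypothetical example--the class and screen names are mine, not the project's--showing business rules pulled out of the screen transition code into their own class and wired to a separate screen controller, so the display code only displays:

    // Hypothetical sketch only -- illustrative names, not the project's actual code.

    // Before: business rules inlined into the screen transition code.
    class PaymentScreenBefore {
        void onSubmit(String cardNumber, double amount) {
            // Validation and routing decisions live inside the display code itself.
            if (cardNumber.length() != 16 || amount <= 0) {
                render("ERROR");
                return;
            }
            render("CONFIRMATION");
        }
        void render(String screenName) { /* display code */ }
    }

    // After: the rules live in their own class, the controller owns screen
    // transitions, and the screen class is reduced to display code.
    class PaymentRules {
        boolean isValidPayment(String cardNumber, double amount) {
            return cardNumber.length() == 16 && amount > 0;
        }
    }

    class ScreenController {
        private final PaymentRules rules = new PaymentRules();

        String nextScreen(String cardNumber, double amount) {
            return rules.isValidPayment(cardNumber, amount) ? "CONFIRMATION" : "ERROR";
        }
    }

    class PaymentScreenAfter {
        private final ScreenController controller = new ScreenController();

        void onSubmit(String cardNumber, double amount) {
            render(controller.nextScreen(cardNumber, amount));
        }
        void render(String screenName) { /* display code */ }
    }

The point of the split isn't elegance for its own sake: the rules and the transitions become testable and changeable without touching hundreds of screens, which is exactly where the weeks-per-change cost was coming from.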
I see this kind of thing all the time. On my current project, a relatively junior engineer was given free rein for 3 months to refactor a chunk of functionality that runs once a year, in order to reduce its running time from 18 hours to 3 hours. The source of the requirement was the engineer in question--not the users, not the management team. Now the code is barely readable--but it sure is fast. Of course, to make up the 3 months of developer time spent, the annual process has to run about 30 times, which will take between 15 and 30 years. Any changes to the process will now require man-weeks of developer time instead of man-days. In short, there's almost no way to justify that kind of expenditure.
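The payback arithmetic is worth spelling out. Assuming roughly 160 working hours per developer-month (my figure, not the project's), it goes something like this:

    developer time spent:  3 months x ~160 hours/month  ≈ 480 hours
    time saved per run:    18 hours - 3 hours           = 15 hours
    runs to break even:    480 / 15                     ≈ 30 runs
    at one run per year:   decades before the refactoring pays for itself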
So: how are these decisions made? Why do managers allow them? What could possibly justify this kind of wasted effort? And finally: why do otherwise good designers and developers allow them to happen?