Wednesday, October 27, 2010
Kanban in IT
Saturday, March 20, 2010
Sucking Less: Checking In More Often
I'm fairly fearless when coding, which means that about once a week, I delete a huge chunk of something I should've kept, or change something into something unrecognizable, thereby inadvertently breaking a dozen unit tests. When I discover the problem, usually about 4 hours later, I no longer have any idea what I did that made the bad thing happen. Then I spend another 2 or 3 hours figuring out what I broke and fixing it. Ugly.
On my personal projects, I check in code every time I get a unit test working. My checkins are something like 15-20 minutes apart. On projects I get paid for, though, checking in means running the whole unit test suite, and that can take 10 minutes (on a good project) or 2 hours (on a bad one)--so I don't do it very often. That's when I get into trouble. I've been meaning to solve that problem for some time, and Joel Spolsky's blog topic last Wednesday (Joel on Software) finally kicked me in the pants. It took 15 minutes to solve the problem; here's how I did it.
Wednesday, March 3, 2010
Annotating Custom Types in Hibernate
Sunday, February 21, 2010
java.util.DuctTape
The proposed class, java.util.DuctTape, is designed as a general purpose fix for a variety of commonly-observed situations in production code. It serves as a temporary patch until a permanent solution is developed and deployed.
Wednesday, February 17, 2010
Database/Code impedance mismatch
I love natural keys in database design. You have to pay attention, though: the natural impedance mismatch between a programming language representation and the database representation of the key can bite you.
Consider an object whose primary key might contain a date--say, a change log record. Oracle and DB2 both store a DATE as a time containing year, month, day, hours, minutes, and seconds. No timezone. The natural mapping for a Java tool like Hibernate is to map to a java.util.Date, which stores the Date as a time in milliseconds since the epoch GMT, and then maps it to whatever timezone is set on the machine where the code is running for display and conversion.
Now consider what might happen (especially if our change log record is attached to some parent object):
- We create and save the object; it is persisted. The local cached copy contains a non-zero value for milliseconds, but the database has truncated the milliseconds value and saved it.
- Later on in the code somewhere, we have reason to save the object again, perhaps as part of some collection operation.
- Hibernate looks in its cache, compares it with the database, and notes that the values of the Date don't match--so it tries to save the value again.
- The database dutifully tosses out the spare milliseconds, and bam! we have an attempt to re-insert an existing record, so it throws an exception.
The easy fix in this case is to declare a class that matches the database representation--here, a Date subclass that truncates the milliseconds. A modest example is shown below:
/**
 * Public Domain; use or extend at will.
 */
import java.util.Date;

public class DbDate extends Date {
    /** Increment if you change the state model. */
    private static final long serialVersionUID = 1L;

    /** @see java.util.Date#Date() */
    public DbDate() {
        long t = getTime();
        setTime(t - t % 1000);
    }

    /** @see java.util.Date#Date(long) */
    public DbDate(long t) {
        super(t - t % 1000);
    }

    /** @see java.util.Date#setTime(long) */
    @Override
    public void setTime(long time) {
        super.setTime(time - time % 1000);
    }
}
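To make the truncation concrete, here's a small sketch; the DbDate class is repeated as a nested class only so the snippet compiles on its own:

```java
import java.util.Date;

public class DbDateDemo {

    /** Copy of the DbDate class above, nested so this snippet stands alone. */
    public static class DbDate extends Date {
        private static final long serialVersionUID = 1L;

        public DbDate(long t) {
            super(t - t % 1000);
        }

        @Override
        public void setTime(long time) {
            super.setTime(time - time % 1000);
        }
    }

    public static void main(String[] args) {
        long instant = 1266800000123L;  // arbitrary instant, 123 ms past the second
        Date raw = new Date(instant);   // what Hibernate caches in memory
        Date db = new DbDate(instant);  // what an Oracle/DB2 DATE column actually stores

        System.out.println(raw.getTime() % 1000); // 123
        System.out.println(db.getTime() % 1000);  // 0
        // The mismatch Hibernate's dirty check sees:
        System.out.println(raw.equals(db));       // false
    }
}
```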
Also note that if you declare the database column as a TIMESTAMP, the Java and database representations match closely enough to avoid this kind of problem. Be aware, though, that Oracle doesn't support TIMESTAMP WITH TIME ZONE in a primary key, and DB2 doesn't implement TIMESTAMP WITH TIME ZONE at all--as of the last time I had access to DB2.
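If you do go the TIMESTAMP route, the mapping might look something like this. This is only a sketch--the entity and column names are hypothetical, and a real change log key would likely be composite:

```java
import java.util.Date;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class ChangeLogEntry {
    // Mapping the key as TIMESTAMP keeps sub-second precision on both sides,
    // so Hibernate's dirty check compares like with like.
    @Id
    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "CHANGED_AT", columnDefinition = "TIMESTAMP")
    private Date changedAt;

    protected ChangeLogEntry() { } // no-arg constructor required by JPA

    public ChangeLogEntry(Date changedAt) {
        this.changedAt = changedAt;
    }

    public Date getChangedAt() {
        return changedAt;
    }
}
```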
Dealing with timezones is another topic entirely--one which I'll take up in a future post.
Sunday, February 14, 2010
Limiting Irreversibility
Wednesday, February 3, 2010
SEMAT and development principles
Sunday, January 10, 2010
Coupling Design and Implementation
Yesterday, one of our analysts, at the prompting of one of our developers and two of our data designers, reviewed a completely new take on the same design change... none of the guys in question knew about the previous design change (they'd missed the design review). We ended up going with the previously-reviewed design.
The whole meeting and the thought that led up to it could have been avoided if there had been some mechanism for ensuring the approved design was implemented. This bit of design, like all design, is pretty much wasted if it remains only in design documents.
One way to handle this would have been to somehow automatically compare the existing design artifacts to the implementation to see if they matched, and complain if they didn't. Even if we didn't implement the approved design right away, then, there'd be a mechanism for reminding developers that they planned to do something one way, and haven't made it happen yet.
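One lightweight version of such a check is a test that inspects the implementation via reflection and fails when it drifts from the approved design. This is only a sketch--the class names and the "approved design" rules here are hypothetical stand-ins:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class DesignCheckDemo {

    // Hypothetical approved design: the class must be final (no ad-hoc
    // subclassing) and immutable (no public set* methods).
    static boolean conformsToDesign(Class<?> c) {
        if (!Modifier.isFinal(c.getModifiers())) {
            return false;
        }
        for (Method m : c.getMethods()) {
            if (m.getName().startsWith("set")) {
                return false;
            }
        }
        return true;
    }

    /** A toy implementation class that happens to conform. */
    public static final class ChangeLog {
        private final String entry;

        public ChangeLog(String entry) {
            this.entry = entry;
        }

        public String getEntry() {
            return entry;
        }
    }

    public static void main(String[] args) {
        System.out.println(conformsToDesign(ChangeLog.class));     // true
        System.out.println(conformsToDesign(java.util.Date.class)); // false: non-final, has setters
    }
}
```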
Thursday, January 7, 2010
Three snippets of "interesting" code
try {
    if (foo())
        return 1;
} catch (Exception e) {
    throw e;
} finally {
    return 3;
}
The questions I start with are:
- What does this do if foo()==true? If foo()==false? If foo() throws an exception?
- How could you recode this more simply?
- Does the code meet the original spec, which was:
  - if foo is true, return 1,
  - if foo is false, return 3,
  - propagate exceptions?
It's sobering to note that over 2/3 of interviewees for senior Java programmer positions fail all three questions. How would you answer them?
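For the record: a return inside a finally block supersedes both the try block's pending return and any in-flight exception, so the snippet above always returns 3 and never propagates anything. A runnable sketch (using Supplier so the two behaviors can sit side by side):

```java
import java.util.function.Supplier;

public class FinallyDemo {

    // The snippet from the post, parameterized on foo. The return in
    // finally always wins: it discards the pending "return 1" and
    // swallows the rethrown exception.
    @SuppressWarnings("finally")
    static int original(Supplier<Boolean> foo) {
        try {
            if (foo.get())
                return 1;
        } catch (Exception e) {
            throw e;
        } finally {
            return 3;
        }
    }

    // What the spec actually asks for.
    static int perSpec(Supplier<Boolean> foo) {
        return foo.get() ? 1 : 3; // exceptions propagate on their own
    }

    public static void main(String[] args) {
        System.out.println(original(() -> true));   // 3 -- not 1!
        System.out.println(original(() -> false));  // 3
        System.out.println(original(() -> { throw new RuntimeException("boom"); })); // 3 -- swallowed
        System.out.println(perSpec(() -> true));    // 1
        System.out.println(perSpec(() -> false));   // 3
    }
}
```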
I firmly believe that even mid-tier developers should have no problem describing what they mean in code, and no problem understanding what others mean, even in code which is poorly written. There always seems to be a snarl somewhere that nobody wants to touch (I've written one or two of those myself). For the most part, though, the bad code I see is the result of "I'm not sure how to do this, but this seems to work".
Here's one such snippet:
static final BigDecimal ONE_HUNDRED_PERCENT = new BigDecimal("1.00")
.setScale(MathUtils.DOLLAR_SCALE, MathUtils.STANDARD_ROUNDING_MODE);
This is just bad code, unless you're letting BigDecimal manage all the results' scales itself (we aren't, and you shouldn't; see my previous article on BigDecimal rounding). It's completely replaceable with "BigDecimal.ONE". It leaves the resulting code less clear, and performs no useful function.
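A quick sketch of both points--the verbose constant buys nothing, and its scale leaks into results. Scale 2 and HALF_UP stand in here for the project's DOLLAR_SCALE and STANDARD_ROUNDING_MODE constants:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OneDemo {
    public static void main(String[] args) {
        BigDecimal verbose = new BigDecimal("1.00").setScale(2, RoundingMode.HALF_UP);
        BigDecimal simple = BigDecimal.ONE;

        // Numerically identical...
        System.out.println(verbose.compareTo(simple)); // 0

        // ...but the verbose constant's scale propagates into every product,
        // which is exactly why you want to manage result scales yourself.
        BigDecimal amount = new BigDecimal("19.99");
        System.out.println(amount.multiply(simple));   // 19.99
        System.out.println(amount.multiply(verbose));  // 19.9900
    }
}
```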
I found the following code (paraphrased) in a Java application I'm working on:
foo(vc.getNewValue(), ppCorrectedValue=vc.getNewValue());
It's perfectly correct and exactly equivalent to:
ppCorrectedValue = vc.getNewValue();
foo(vc.getNewValue(), ppCorrectedValue);
That sort of expression is common in C; most C developers know that an assignment expression evaluates to the value just assigned. Many Java developers aren't aware that Java works the same way--so I was surprised to see it. I'm not sure it improves the readability of the code, though, so I'm refactoring it into the second form above.
Where I think assignment-as-expression really helps in Java is when setting up initial values:
i = j = 0;
Good code is hard enough to write; have pity on the next guy, and be as straightforward and clear as possible. Arabesques like assignment inside a function call parameter list just make your code harder to read and maintain.