To me, Duffy's primary point is that you shouldn't leave performance analysis until the end. Instead, always care about your code and always consider the performance implications of what you do; otherwise you fall into sloppy habits:
These kinds of "peanut butter" problems add up in a hard to identify way. Your performance profiler may not obviously point out the effect of such a bad choice so that it’s staring you in your face. Rather than making one routine 1000% slower, you may have made your entire program 3% slower. Make enough of these sorts of decisions, and you will have dug yourself a hole deep enough to take a considerable percentage of the original development time just digging out. I don’t know about you, but I prefer to clean my house incrementally and regularly rather than letting garbage pile up to the ceilings, with flies buzzing around, before taking out the trash. Simply put, all great software programmers I know are proactive in writing clear, clean, and smart code. They pour love into the code they write.
I think he's absolutely right. I've worked with large bodies of code where people cared about each line of code, as it was written. And I've worked with large bodies of code where people (very good programmers, mind you) didn't pay attention to the code as it was written, didn't review each line, didn't gather together for design discussions, didn't engage in those back-and-forth question-and-answer sessions that may seem so incredibly frustrating, but are vital to building a piece of software that matters. And when they didn't do those things, the software turned out not to matter.
Maybe they thought they were too good to need these activities; maybe they thought they were a waste of time; maybe they thought they could defer them until later; maybe it was just an unpleasant activity and nobody cared enough about it to make it happen.
So if you want your software to matter, care about it: "pour love into the code you write." It's a great sentiment!
But what really struck me about Duffy's post was his description of this code review experience:
It’s an all-too-common occurrence. I’ll give code review feedback, asking "Why didn’t you take approach B? It seems to be just as clear, and yet obviously has superior performance." Again, this is in a circumstance where I believe the difference matters, given the order of magnitude that matters for the code in question. And I’ll get a response, "Premature optimization is the root of all evil." At which point I want to smack this certain someone upside the head, because it’s such a lame answer.
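To make Duffy's premise concrete, here is a hypothetical illustration (my example, not his, and in Java rather than whatever language his review concerned): two ways to join a list of lines that are equally clear to read, where one copies the whole accumulated string on every iteration and the other appends into a growable buffer. Declining the second on "premature optimization" grounds is exactly the lame answer he describes.

```java
import java.util.List;

public class Join {
    // Approach A: repeated concatenation. Each += copies the entire
    // accumulated string, so joining n lines costs O(n^2) character copies.
    static String joinSlow(List<String> lines) {
        String result = "";
        for (String line : lines) {
            result += line + "\n";
        }
        return result;
    }

    // Approach B: just as clear, but appends into a StringBuilder,
    // so joining n lines costs O(n) amortized.
    static String joinFast(List<String> lines) {
        StringBuilder result = new StringBuilder();
        for (String line : lines) {
            result.append(line).append('\n');
        }
        return result.toString();
    }

    public static void main(String[] args) {
        List<String> lines = List.of("alpha", "beta", "gamma");
        System.out.print(joinFast(lines));
    }
}
```

Neither version is harder to write or maintain, so choosing the slow one isn't avoiding premature optimization; it's just leaving performance on the table.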
There's a lot wrong in that exchange, and much of it is non-technical. Both sides are at fault, and the situation isn't going to get resolved in a meeting. There is a management problem here, and not an easy one to solve, either.
I've been in those situations, too, and I don't have a simple prescription. I'm sure you've been in them as well; if not, count yourself lucky.