At the recent jQuery conference in Oxford there were a number of great presentations. One that particularly caught my imagination was by Doug Neiner, who made some really interesting points about the trade-off between performance and maintenance. His presentation included some great one-liners, one of which was “Write Code Like You Buy A Car – Always weigh the difference between cost and quality”. For “cost” you can also read “performance”. Whilst we should always endeavour to produce code that performs well, it’s not the only factor to be considered. I had the opportunity to chat with Doug in the bar after the event and was pleased to find that our approaches were broadly similar. It also reminded me of a couple of very different performance ‘issues’ that I had experienced.
Too Quick On This Occasion
I had the pleasure of working with a very experienced developer some few years ago, but he was not infallible (who is?). We worked on one particular project with a VB6 front-end (I said that it was some few years ago) and an Oracle back-end. Rather late in the project it was realised that the requirements for management reports had not been properly defined, and when they were defined it became apparent that our database structure did not lend itself to the production of the necessary outputs.
Instead of altering the database schema at a late stage in the project (our client had some very bureaucratic change control processes), my colleague developed a suite of awesome PL/SQL procedures that manipulated all of the data in memory using dynamic table arrays. The procedures were really fast, but the trouble was they ran at the end of the overnight batch process when performance was really not critical, and nobody but him could understand what they were doing. When the batch finished all of the data that had been manipulated and passed from procedure to procedure vanished. If any problems arose or questions were asked about the accuracy of the reports there was no way that anyone could look at the data that had been used during the process.
Some while later, after the development team had been disbanded and support had passed to another team, the management reports were rewritten to use a series of permanent tables that were emptied at the start of the process and could therefore be inspected at the end of the process should the need arise.
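The original system was PL/SQL, but the pattern the second team adopted is easy to sketch in any language. Here is a minimal Python illustration using SQLite; the table and column names are invented for the example, not taken from the real system. The key point is that the staging table is emptied rather than dropped, so its contents survive the batch run and can be inspected if the report figures are ever questioned.

```python
import sqlite3

def run_batch(conn, orders):
    """Load raw data into a permanent staging table, then build the report.

    The staging table is emptied at the start of each run (not dropped),
    so the intermediate data remains inspectable after the batch completes.
    """
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS stg_orders (region TEXT, amount REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS rpt_totals (region TEXT, total REAL)")
    # Empty, don't drop: last run's data is replaced, but this run's stays visible.
    cur.execute("DELETE FROM stg_orders")
    cur.execute("DELETE FROM rpt_totals")
    cur.executemany("INSERT INTO stg_orders VALUES (?, ?)", orders)
    cur.execute(
        "INSERT INTO rpt_totals "
        "SELECT region, SUM(amount) FROM stg_orders GROUP BY region"
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
run_batch(conn, [("north", 10.0), ("north", 5.0), ("south", 7.5)])
# After the batch, the intermediate data is still there to answer questions:
rows = conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
```

Permanent tables are slower than in-memory collections, but here the batch ran overnight and raw speed was never the constraint; being able to see what the report was built from was worth far more.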
Nowhere Near Quick Enough
I was also reminded of what I hope is the worst performing code that I have ever written. If this isn’t the worst then there must be something really horrible lurking out there somewhere.
I developed some code, again in VB6, but this time using OLE Automation to manipulate Microsoft Word documents. One of my routines scanned a document, searching for each occurrence of a specific piece of text and replacing it with another piece of text. It worked just fine. In my (too limited) testing I found no problems.
The issue arose when one of our first customers tried our application with a much bigger document. It took ages, about 20 minutes instead of 30 seconds.
When I looked again at my code the reason was obvious. What had I been thinking? I had written a loop in VB that did separate Find and Replace operations, each one incurring the overhead of transferring control from VB to Word via OLE. Word had a perfectly suitable Find & Replace All option, so why hadn’t I used it? With that change made we were back to 30 seconds to process the document instead of 20 minutes. Clearly not my finest hour.
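The underlying lesson is about crossing an expensive boundary once instead of once per item. I can’t reproduce VB6 driving Word here, so this is a Python sketch with an invented stand-in class; the counter simulates the cost of each OLE round trip, which was the real killer.

```python
class FakeWordDocument:
    """Stand-in for a Word document driven over OLE Automation.

    Every method call counts as one cross-process round trip. In the real
    VB6/Word case each round trip carried significant overhead.
    """
    def __init__(self, text):
        self.text = text
        self.calls = 0

    def find_next(self, needle, start):
        self.calls += 1
        return self.text.find(needle, start)

    def replace_at(self, pos, needle, replacement):
        self.calls += 1
        self.text = self.text[:pos] + replacement + self.text[pos + len(needle):]

    def replace_all(self, needle, replacement):
        self.calls += 1  # one round trip, however many matches
        self.text = self.text.replace(needle, replacement)

def slow_replace(doc, needle, replacement):
    """What I originally wrote: one Find plus one Replace per occurrence."""
    pos = doc.find_next(needle, 0)
    while pos != -1:
        doc.replace_at(pos, needle, replacement)
        pos = doc.find_next(needle, pos + len(replacement))

doc = FakeWordDocument("foo bar foo bar foo")
slow_replace(doc, "foo", "baz")   # 3 matches cost 7 round trips (4 finds + 3 replaces)

doc2 = FakeWordDocument("foo bar foo bar foo")
doc2.replace_all("foo", "baz")    # same result, 1 round trip
```

On a small test document the per-match cost is invisible; on a big customer document it multiplies into minutes, which is exactly what bit me.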
So performance does matter, but it is almost always a trade-off. Don’t fall into the trap of trying to achieve the best possible performance with every line of code you write. Consider also how easy the code will be to maintain, and how much time you actually have available to squeeze out that last drop of performance.
Write Code Like You Buy A Car – Always weigh the difference between cost, quality and performance.