In general, I like this book a lot. It's a nice blend of the tactical and the strategic, of the pragmatic and the theoretical, and it covers a lot of ground in a very readable fashion. It's hard to imagine anyone seriously interested in software testing who wouldn't find something of interest in the book.
To give you a very high-level overview of the book, here's the "Contents at a Glance":
- About Microsoft
- Software Engineering at Microsoft
- Software Test Engineers at Microsoft
- Engineering Life Cycles
- About Testing
- A Practical Approach to Test Case Design
- Functional Testing Techniques
- Structural Testing Techniques
- Analyzing Risk with Code Complexity
- Model-Based Testing
- Test Tools and Systems
- Managing Bugs and Test Cases
- Test Automation
- Non-Functional Testing
- Other Tools
- Customer Feedback Systems
- Testing Software Plus Services
- About the Future
- Solving Tomorrow's Problems Today
- Building the Future
Now let's take a deeper look at a few of the areas the book covers.
Not "those squeegee guys that wash your windows"
The section Software Test Engineers at Microsoft describes the organizational approach that Microsoft takes to software testing. I think that Microsoft doesn't get enough credit in this area. Although other high-tech companies are much larger than Microsoft (e.g., IBM, HP, Cisco, Intel), Microsoft differs from them in being purely a software company (well, OK, it has a small hardware organization, but nothing like the others on that list). I think Microsoft has, far and away, the most sophisticated understanding of how to do very-large-scale software engineering, and one of the most sophisticated approaches to software testing. At a previous job, some of my co-workers did contract software engineering in Microsoft's test organization, and it was very interesting to get a peek behind the curtain at how Microsoft works.
The book discusses some of the major tasks and activities that a Microsoft SDET (Software Development Engineer in Test) gets involved with:
- Develop test harnesses for test execution (see the sketch after this list)
- Develop specialty tools for security or performance testing
- Automate API or protocol tests
- Participate in bug bashes
- Find, debug, file, and regress bugs
- Participate in design reviews
- Participate in code reviews
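Since "develop a test harness" can sound abstract, here's a minimal sketch of what such a harness might look like at its core. This toy example is mine, not the book's; the test functions are invented, and real harnesses (pytest, MSTest, and the like) add fixtures, isolation, and richer reporting on top of this basic loop:

```python
# A minimal test harness sketch: collect test functions, run each one,
# and report results. Toy example only; real harnesses add fixtures,
# test isolation, and structured reporting.

import traceback

def check_addition():
    assert 2 + 2 == 4

def check_string_upper():
    assert "abc".upper() == "ABC"

def run_tests(tests):
    passed, failed = 0, 0
    for test in tests:
        try:
            test()
            passed += 1
            print(f"PASS {test.__name__}")
        except Exception:
            failed += 1
            print(f"FAIL {test.__name__}")
            traceback.print_exc()
    print(f"{passed} passed, {failed} failed")
    return failed == 0

if __name__ == "__main__":
    run_tests([check_addition, check_string_upper])
```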
This is a challenging role, and it's pleasing to see Microsoft giving it the respect it deserves.
"The happy path should always pass"
The section A Practical Approach to Test Case Design is one of the strongest in the book, and is just jam-packed with useful, hard-won advice for the practical tester. It contains information on:
- Testing patterns
- Test estimation
- Incorporating testing earlier in the development cycle
- Testing strategies
- Testability
- Test specifications
- Positive and negative testing (see the sketch after this list)
- Test case design
- Exploratory testing
- Pair testing
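To make one of these topics concrete, here's a small sketch of positive and negative testing: the positive tests confirm that valid inputs take the happy path, and the negative tests confirm that invalid inputs are rejected cleanly rather than silently accepted. The parse_port function is a hypothetical example of my own, not something from the book:

```python
# Positive and negative tests for a hypothetical parse_port() helper.
# Positive tests exercise the happy path (including boundaries);
# negative tests verify that bad input raises an error.

def parse_port(text: str) -> int:
    """Parse a TCP port number, raising ValueError on bad input."""
    port = int(text)          # raises ValueError on non-numeric text
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_positive_cases():
    assert parse_port("80") == 80
    assert parse_port("65535") == 65535    # upper boundary

def test_negative_cases():
    for bad in ["0", "65536", "-1", "http", ""]:
        try:
            parse_port(bad)
        except ValueError:
            pass                            # expected failure
        else:
            raise AssertionError(f"accepted bad port: {bad!r}")

if __name__ == "__main__":
    test_positive_cases()
    test_negative_cases()
    print("all tests passed")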
It's not an exaggeration to suggest that a professional tester might find this book worth buying for this section alone, and might well return to re-read it every year or two just to re-focus and re-center around its level-headed, thorough approach. I particularly enjoy the section's pragmatic assessment:
There isn't a right way or a wrong way to test, and there are certainly no silver bullet techniques that will guarantee great testing. It is critical to take time to understand the component, feature, or application, and design tests based on that understanding drawn from a wide variety of techniques. A strategy of using a variety of test design efforts is much more likely to succeed than is an approach that favors only a few techniques.
"The USB cart of death"
The two sections Non-Functional Testing and Other Tools are also, in my opinion, particularly strong, perhaps surprisingly so, since at first glance they don't look like they should be as informative as they actually are.
Non-Functional Testing talks about a collection of "ilities" that are challenging to test, and that are often under-tested, particularly since it is hard to test them until relatively late in the process:
Areas defined as non-functional include performance, load, security, reliability, and many others. Non-functional tests are sometimes referred to as behavioral tests or quality tests. A characteristic of non-functional attributes is that direct measurement is generally not possible. Instead, these attributes are gauged by indirect measures such as failure rates to measure reliability or cyclomatic complexity and design review metrics to assess testability.
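To illustrate that idea of indirect measurement in the performance case: you can't assert "fast" directly, so you time many runs and check a proxy such as a latency percentile against a budget. Here's a rough sketch of mine, not the book's; the operation and the 10 ms budget are arbitrary placeholders:

```python
# Measuring performance indirectly: time many runs of an operation and
# check a latency percentile against a budget, since a single timing
# is far too noisy to assert on.

import time

def measure_latencies(operation, runs=200):
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)
    return sorted(latencies)

def percentile(sorted_values, pct):
    index = min(len(sorted_values) - 1, int(len(sorted_values) * pct / 100))
    return sorted_values[index]

if __name__ == "__main__":
    op = lambda: sum(range(10_000))        # stand-in for the real operation
    latencies = measure_latencies(op)
    p95 = percentile(latencies, 95)
    print(f"95th percentile: {p95 * 1e6:.1f} microseconds")
    assert p95 < 0.01, "performance budget exceeded"   # arbitrary 10 ms budget
```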
Here we find a variety of very useful sub-sections, including "How Do You Measure Performance?", "Distributed Stress Architecture", "Eating Our Dogfood", "Testing for Accessibility", and "Security Testing". Sensibly, many of these sections give some critical principles, philosophies, and techniques, and then are filled with references to additional resources for further exploration. For example, the "Security Testing" section points to four entire books devoted specifically to software security:
- Hunting Security Bugs
- The How to Break... series
- Writing Secure Code
- Threat Modeling
These are quite reasonable suggestions, though I wish they'd included suggestions to read Ross Anderson's Security Engineering or some of Bruce Schneier's work.
Other Tools is unexpectedly valuable, given such a worthless section title. This section makes three basic points:
- Use your build lab and your continuous integration systems
- Use your source code management (SCM) system
- Use your available dynamic and static analysis tools
Of course, at my day job we're proud to provide what we think is the best SCM system on the planet, so the topics in this section are close to my heart, but even before I worked for an SCM provider I found techniques like these incredibly useful. I've spent years building and using tools to mine information from build tools and CI systems, and I've found great value in static analysis tools like FindBugs for Java; at my day job we're big fans of Valgrind. Lastly, I like that this section observes that "Test Code is Product Code," and so you need to pay as much attention to the design, implementation, and maintenance of your tests as you do to your product code.
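To give a flavor of what static analysis means in practice, here's a toy checker of my own devising that walks a Python syntax tree and flags bare except: clauses, which swallow every error indiscriminately. Real tools such as FindBugs apply hundreds of far deeper checks, but the shape is the same: inspect the code without running it and report suspicious patterns:

```python
# A toy static-analysis check: parse a Python file into its syntax
# tree and flag bare "except:" clauses. Illustrative only; real
# analyzers apply many checks at much greater depth.

import ast
import sys

def find_bare_excepts(source: str, filename: str):
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        # An ExceptHandler with no exception type is a bare "except:".
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"{filename}:{node.lineno}: bare 'except:' clause")
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for finding in find_bare_excepts(f.read(), path):
                print(finding)
```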
"Two Faces of Review"
Oddly buried in the About the Future section at the end of the book is an all-too-short section on code reviews. The section contains some suggestions about how to organize and structure your code reviews, how to divide and allocate reviewing responsibilities among a team, and how to track and observe the results of your code review efforts so that you can continue to improve them. And I particularly liked this observation:
For most people, the primary benefit of review is detecting bugs early. Reviews are, in fact, quite good at this, but they provide another benefit to any team that takes them seriously. Reviews are a fantastic teaching tool for everyone on the team. Developers and testers alike can use the review process to learn about techniques for improving code quality, better design skills, and writing more maintainable code. Conducting code reviews on a regular basis provides an opportunity for everyone involved to learn about diverse and potentially superior methods of coding.
Here I have to take a brief aside to relate a short story from work. Recently, my office conducted a two-day internal technical conference. The software development staff all went offsite, and we got together and discussed a variety of topics: agile programming, domain-specific languages, cloud computing, and other trendy stuff. But one presentation in particular was perhaps unexpected: our President, Christopher Seiwald, reprised a presentation he's given many times before: The Seven Pillars of Pretty Code. If you've never seen the presentation, give it a try: it's quite interesting. But the important part, I think, is that the company felt strongly enough about the importance of code, and of code review, to get everybody, including the company president, together to spend an hour discussing and debating what makes great code, how to write it, and so forth.
Is How We Test Software At Microsoft a great book? No, it's not. It's too long, the presentation style bounces around a lot (not surprising for a book with many different authors), and the book is a bit too encyclopedic. I wish the authors had covered fewer topics, perhaps only one-half to two-thirds of them, in more detail. The book's unapologetically Microsoft-only approach can also be frustrating for readers outside Microsoft who want to apply these techniques in other situations: what if you're trying to test low-memory handling on a Linux system rather than Windows? What should you be thinking about when looking at character set issues in Java? And I think the book gives rather too much credit to some approaches that I'm not particularly fond of, such as code-complexity measurement tools and model-based test generators; code coverage tools, meanwhile, are something I have a love-hate relationship with.
But those are fairly minor complaints. If you are a professional tester, or a professional software developer, or even an amateur software developer, you'll find that this book has a lot of ideas, a lot of resources, and a lot of material that you can return to over and over. Don't make it the first software engineering book that you buy, but consider putting it on your personal study list, somewhere.