May 14, 2009

Quality assurance (QA) and testing's role in requirements

The typical late involvement of quality assurance (QA)/testing in software projects certainly limits its effectiveness. By the time many QA professionals and testers get involved, all the development defects are already in the code, which makes detecting them much harder and fixing them much more costly. Moreover, late involvement limits how many of those errors are actually found, due both to lack of time to test and to QA/testing's lack of knowledge about what to test.

Thus, virtually every QA/testing authority echoes the widely accepted conventional wisdom that QA/testing should get involved early in the life cycle, participating in the requirements process and especially in requirements reviews. The reasoning goes that by getting admitted during requirements definition, QA/testing can point out untestable requirements and learn about the requirements early enough to develop more thorough tests by the time the code arrives. Such well-intentioned and seemingly obvious advice unfortunately has a number of hidden pitfalls that can further reduce QA/testing's effectiveness.

Participating in defining requirements

In many organizations, QA/testing receives inadequate information upon which to base tests. The inadequacy comes from several directions. The documentation of requirements and/or design frequently is skimpy at best and often is incomplete, unclear, and even wrong. Furthermore, QA/testing often receives the documentation so late that there isn't time to create and run enough suitable tests.

QA/testing absolutely needs to learn earlier and more fully what the requirements and design are. However, involving QA/testing in requirements definition and/or review raises several critical issues.

Quite simply, defining requirements is not QA/testing's job. Many organizations have one or more roles responsible for defining requirements, such as business analysts. In some, the business unit with the need is charged with defining its requirements for projects; in others, the project manager or team members may end up defining requirements as part of their other project activities. QA/testing may be one of the responsibilities of these other roles, but I've not heard of any competent organization that makes requirements definition part of the QA/testing role.

Consequently, if QA/testing staff members are assigned to participate in defining requirements, they're probably going to be in addition to (rather than instead of) those regularly performing the task. Not many places are likely to double their definition costs, especially not for something that offers no apparent benefit and may even be detrimental to the definition itself.

While some folks in QA/testing may know something about the business, one cannot assume they do, and many lack relevant business knowledge. Moreover, it's hard enough to find business analysts with good requirements definition skills, and they're supposed to be specialists in defining requirements. There's no reason to expect that QA/testing people, for whom requirements definition is not a typical job function, would have had any occasion to develop adequate requirements definition skills. Piling on costs by including QA/testing people in requirements definition is unlikely to help and could just get in the way.

Participating in reviewing requirements

On the other hand, it's much more logical to include QA/testing specialists in requirements reviews. After all, reviews are a form of QA/testing. In fact, some organizations distinguish QA from testing by saying QA performs static testing, primarily reviewing documents, whereas testing (or quality control, QC) executes dynamic tests of products.

Organizations with such a distinction frequently make QA responsible for reviewing requirements, designs, and other documents. It's not in these organizations, but rather in all the others, that QA/testing is clamoring for admission to requirements reviews.

In organizations where requirements reviews are run by someone other than QA, such as the business units/users or management, there may be resistance to allowing QA/testing to join reviews. An obvious reason would be that limited review budgets may not allow for the added costs of QA/testing staff's time attending reviews.

Of course, budgeted expenses could be shifted from later project activities that presumably would require less effort due to QA/testing's participation in reviews. Nonetheless, such seemingly logical budget shifts often are not made, especially when the future savings go to a different part of the organization than the one charged for the reviews.

However, the bigger but often less apparent obstacle is a perception, surprisingly common (at least to QA/testing), that adding QA/testing to reviews not only provides no significant positive value but could actually hurt review efficiency and effectiveness. In such cases, the already stressed rest of the organization is unlikely to go out of its way just to help QA/testing meet its needs. The rejection often is couched in terms of limited budget, but it may really be based on not wanting QA/testing involved.

The "testability trap"

Why would people feel that QA/testing actually impedes reviews? I call it the "testability trap." In the QA/testing industry, the widely held conventional wisdom is that lack of testability is the main issue with requirements. Generally, lack of clarity is what makes a requirement untestable. An unclear/untestable requirement is likely to be implemented incorrectly; and regardless, without being able to test it, QA/testing has no way to detect whether the requirement was implemented correctly.
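
To make the distinction concrete, here's a minimal sketch in Python of what testability buys; the requirement wording, the 2-second threshold, and the run_search stub are all hypothetical, invented purely for illustration:

    import time

    # Stand-in for the system under test; in a real project this would
    # call the actual search feature. Purely illustrative.
    def run_search(query: str) -> list[str]:
        time.sleep(0.1)  # simulate some work
        return ["invoice-1042"]

    # Testable requirement: "Search results are returned within 2 seconds."
    def test_search_response_time():
        start = time.monotonic()
        results = run_search("overdue invoices")
        elapsed = time.monotonic() - start
        assert elapsed <= 2.0, f"search took {elapsed:.2f}s, over the 2-second limit"
        assert results, "search returned no results"

The untestable version, "search should be fast," offers no threshold to assert against, so no equivalent check can be written.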

Consequently, the comments of QA/testing folks who have been let into requirements reviews commonly focus almost entirely on the various places where they feel the requirements lack testability. The less they know about the business domain, the more they are stuck speaking only about lack of testability.

While testability is indeed important, frequently it mainly matters to QA/testing, and their repeated review comments about lack of testability can seem like so much annoying noise to the rest of the review participants. In such instances, the presence of QA/testing can be perceived as simply getting in the way of the review, tying up participants with trivial nitpicking. At best, QA/testing may be ignored; sometimes they even get "disinvited" from participating in further requirements reviews.

Be prepared to contribute

The key to not wearing out one's welcome is contributing productively to the review in ways that all participants recognize as valuable. That takes familiarity with the subject area content and with more effective review techniques.

QA/testing people are not only unlikely to have requirements definition skills, but they also often have little familiarity with the business domain that is the subject of the requirements. The difficulty can be especially acute for QA/testing groups charged with performing requirements reviews. Since they'll probably be dealing with a wide variety of business areas, the chances are lower that they'll know much about any one of them.

Requirements are all about content. To contribute effectively to reviews, it's incumbent upon QA/testing to learn about the relevant business content before reviewing the related requirements. Because few organizations recognize the need for such preparation, time and budget are seldom provided for it. Therefore, the burden is on QA/testing to make the time, quite possibly their own time, to comprehend the content well enough to contribute productively to the reviews.

Review technique effectiveness

Understanding content is necessary but not sufficient. Most reviews are far weaker than recognized, largely because the reviewers don't really know what to do, how to do it, or how to tell whether they've done it well. Group reviews generally enhance findings because multiple heads are better than one, but they still find far fewer issues than they could, or than participants presume they've found.

With the best of intentions, typical reviewers look at the requirements and spot, in a somewhat haphazard manner, whatever issues happen to occur to them. Even though they may be very familiar with the subject area, it's easy for them to miss even glaring errors. In fact, their very familiarity sometimes causes them to miss issues by taking things for granted: their minds may fill in gaps unconsciously or fail to recognize something that wouldn't be understood adequately by someone with less subject expertise.

Moreover, it's hard for someone to view objectively what they're accustomed to and often have been trained in and rewarded for. QA/testing emphasizes the importance of independent reviewers/testers because people are unlikely to find their own mistakes. Yet, surely the most common reviewers of requirements are the key business stakeholders who were the primary sources of the requirements.

In addition, typical reviewers commonly provide insufficient feedback to the requirements writers. For example, the comments often amount to little more than, "These requirements need work. Do them over. Do them better." The author probably did as well as they could, and such nonspecific feedback doesn't give them enough information about what to do differently, why, or how.

Formal requirements reviews

Many authorities on review techniques advise formalizing the reviews to increase their effectiveness. Formal reviews are performed by a group and typically follow specific procedural guidelines, such as making sure reviewers are selected based on their ability to participate productively and are prepared so they can spend their one- to two-hour review time finding problems rather than trying to figure out what the document under review is about.

Formal reviews usually have assigned roles, including a moderator who is like the project manager for the review, a recorder/scribe to assure review findings are captured and communicated, and a reader/leader other than the author who physically guides reviewers through the material. The leader often is a trained facilitator charged with assuring all reviewers participate actively. Many formal reviews keep key measurements, such as length of preparation time, review rate and number of issues found. Detailed issues are reported back to the author to correct, and a summary report is issued to management.
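
As a rough sketch of the kind of measurements such a summary report might capture, here is a small Python illustration; the field names, units, and sample values are assumptions rather than part of any particular review methodology:

    from dataclasses import dataclass, field

    # Illustrative record of one formal review session, mirroring the
    # measurements mentioned above: preparation time, review rate, and
    # number of issues found. Names and units are assumed, not standard.
    @dataclass
    class ReviewSummary:
        document: str
        pages_reviewed: int
        prep_minutes: int       # total reviewer preparation time
        meeting_minutes: int    # length of the review session itself
        issues: list[str] = field(default_factory=list)

        @property
        def review_rate(self) -> float:
            """Pages covered per hour of meeting time."""
            return self.pages_reviewed / (self.meeting_minutes / 60)

    summary = ReviewSummary("Order-entry requirements v0.3", pages_reviewed=12,
                            prep_minutes=90, meeting_minutes=120,
                            issues=["REQ-14: ambiguous discount rule"])
    print(f"{summary.review_rate:.1f} pages/hour, {len(summary.issues)} issue(s) found")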

Some formal reviews have the reviewers independently examine the materials and then report their findings in the group review session. Often each reviewer covers a separate portion of the material or looks at it from a specific perspective different from each of the other reviewers'. In other formal reviews, the group works through the materials together, frequently walking through their logic flow, which typically covers less material but may be more effective at detecting problems.

Proponents of some prominent review methodologies rely essentially on such procedures alone to enable knowledgeable reviewers to detect defects. However, I've found that it's also, and probably more, important to provide content guidance on what to look for, not just guidance on how to look at the material under review and ways of assuring that reviewers are engaged.

For example, in my experience, typical reviews tend to miss a lot more than is recognized, both for the reasons above and because they use only a few review perspectives, such as checking for clarity/testability, correctness, and completeness. Often, such approaches find only format errors, and sometimes only the most blatant ones, while missing content issues.

In contrast, I help my clients and seminar participants learn to use more than 21 techniques for reviewing requirements and more than 15 ways to review designs. Each different angle reveals issues the other methods may miss; the more perspectives used, the more defects the review detects. Moreover, many of these are more powerful, specialized methods that can also spot wrong and overlooked requirements and design content, errors that typical, weaker review approaches fail to identify.

When QA/testing truly contributes to reviews by revealing the more important requirements content errors, not just lack-of-testability format issues, business stakeholders can appreciate it. When business stakeholders recognize QA/testing's review involvement as valuable to them, they're more likely not only to allow participation but to advocate for it.
