Friday, 29 July 2011

The Perfect Non-Functional Requirement?

I would like to thank all of you who attended my session at the July 2011 Test Managers Forum. The session was a discussion group looking at a proposed model for non-functional requirements (NFRs). Based on the presentation and the “NFR Requirements Cube” (print it out and glue it together), the following is a summary of the proposal and discussion. There is a challenge at the end, so please read on!

The driver for this model was the common complaint that NFRs are difficult to scope, understand, define and so on. When I started looking at performance requirements in detail and thinking about what information a developer and a tester may need to know, I evolved the following model.
At this point I’ll start with the caveats about models and their uses. Models have their place, but all models fail at some point. Not every model suits every situation; the context will define which one you use. The following model is one example that may be of use.
The other point is the term “non-functional”. I personally am not keen on this term, but too many organisations, architects, business analysts, developers, standards and formalised training certifications (not just in testing) use it. A separate thread on that would be interesting, but I want to stick to the principles of the proposed model.
The model is based on a cube (a six-sided die, if you like). Each face (facet) describes an intrinsic part of the requirement's structure. The following describes each facet and what it is intended to cover. It was clearly determined at the TMF session that expecting all of the facets to be covered at the start of a project would not be possible; rather, the requirement would evolve during the lifecycle of the project, iterating into more detail. It was also highlighted that a group of requirements may share some of the same facets.
1.   Objective This is the high-level description, ideally in layman’s terms, of what the system should achieve and why. Some organisations may have these as high-level requirements, acceptance criteria, user stories or high-level test conditions. An example, from an easy-to-relate-to user story perspective, could be:
“As a <member of the public>
I want <the application to adequately cope with high loads>
So that <I can purchase a ticket quickly for a highly subscribed event>”
As you can see, the objective is written in plain language, i.e. no technical terms, which provides the context for the requirement, its need and how it fits within the overall approach. The caution is to prevent too much detail being added here; rather, any details should be added as notes or supplementary data. This data will then be used to scope out the remaining five facets.
2.   Scenario This describes the details of the requirement, such as the time of day, the numbers of users, the types of users, the combinations of actions etc. The typical scenarios for a web-based application relate to the time of day (e.g. 9 am, when 70% of the staff all need to log in), the different roles of users that will be accessing the system, what back-end transactions/batch jobs will be happening etc. This provides the identification of all the actions that are being undertaken by the system under test.
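As an illustration only (the role names, percentages and job names below are invented, not part of the model itself), a scenario facet could be captured as structured data, which also lets you check that the transaction mix accounts for the whole load:

```python
# A hypothetical scenario facet captured as data. All figures are
# invented examples for illustration, not from the TMF session.
scenario = {
    "time_of_day": "09:00",            # the peak login window
    "user_roles": {                    # share of total load per role
        "staff": 70,                   # e.g. 70% of staff logging in
        "manager": 20,
        "administrator": 10,
    },
    "background_jobs": ["overnight batch close", "report extract"],
}

def mix_is_complete(roles):
    """The role percentages should account for 100% of the load."""
    return sum(roles.values()) == 100

print(mix_is_complete(scenario["user_roles"]))  # True: 70 + 20 + 10 == 100
```

The point of the check is simply that a scenario which only describes part of the load leaves a gap a tester will eventually have to question.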
3.   Profile The profile describes the timelines associated with the scenario. From a performance perspective this could be in the form of: I want to load 10,000 concurrent users onto the browser over a period of 1 hour, maintain that load for 1 hour and then ramp down to 0 users over 1.5 hours.
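The ramp-up/steady-state/ramp-down profile above can be sketched as a function from elapsed time to target load. This is a minimal sketch; the linear ramps are my assumption about how the load would be applied:

```python
def target_users(elapsed_minutes, peak=10_000,
                 ramp_up=60, steady=60, ramp_down=90):
    """Target concurrent users at a point in the profile: ramp up to
    `peak` over `ramp_up` minutes, hold for `steady` minutes, then
    ramp down to zero over `ramp_down` minutes."""
    if elapsed_minutes < ramp_up:                      # ramping up
        return int(peak * elapsed_minutes / ramp_up)
    if elapsed_minutes < ramp_up + steady:             # steady state
        return peak
    end = ramp_up + steady + ramp_down
    if elapsed_minutes < end:                          # ramping down
        return int(peak * (end - elapsed_minutes) / ramp_down)
    return 0                                           # profile finished

print(target_users(30))   # halfway up the ramp -> 5000
print(target_users(90))   # steady state -> 10000
print(target_users(210))  # profile complete -> 0
```

Most load tools (JMeter, LoadRunner and the like) express the same shape in their own configuration; the value of writing it down in the requirement is that everyone agrees on the shape before a tool is chosen.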

4.   Environment This describes the expectations for the environment/infrastructure. These include elements around data, servers, monitoring, load balancer configurations, network settings etc. Other important elements include caching mechanisms. Also worth considering are the network/infrastructure management systems that are in place, and using the Ops team to monitor whilst the test is under way.
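One way to keep this facet honest as the requirement iterates (a sketch only; the item names and values are invented) is to record the expectations as a checklist and flag anything still unknown at each review:

```python
# Hypothetical environment expectations for one NFR. "TBC" marks
# items still to be agreed as the requirement iterates.
environment = {
    "data": "anonymised copy of production, 12 months of history",
    "servers": "2 x app, 1 x db, sized as production",
    "monitoring": "Ops team dashboards during the test window",
    "load_balancer": "round robin, session affinity on",
    "network": "TBC",
    "caching": "CDN disabled for the first run",
}

def unresolved(env):
    """Return the environment items still marked as TBC."""
    return [key for key, value in env.items() if value == "TBC"]

print(unresolved(environment))  # ['network']
```

An empty list from `unresolved` is a reasonable exit gate before the test plan commits to an environment.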
5.   Measurement What measurements do we require? For an NFR this can be particularly unclear, as we may want to measure some level of responsiveness in the user interface, yet we are also concerned about the CPU and memory usage of critical elements of the system. There may be custom measures that a specific application or set-up will need. So this facet may include statements such as: CPU will not go above 50%; user response times will be within 5 seconds. One caution here is to also state where the measurement will be made, e.g. is this the response time at the physical user interface or is this the server response time?
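The caution about where a measurement is taken can be built into how the targets are recorded. A minimal sketch (the thresholds are the examples from the text; the structure and names are my assumption):

```python
# Each measurement records its threshold AND where it is taken, so a
# "response time" at the user interface is never confused with one
# taken at the server.
measurements = [
    {"metric": "cpu_percent", "limit": 50, "measured_at": "app server"},
    {"metric": "response_seconds", "limit": 5, "measured_at": "user interface"},
]

def within_limit(metric, observed, specs):
    """True if every spec for this metric is met by the observed value."""
    return all(observed <= s["limit"] for s in specs if s["metric"] == metric)

print(within_limit("cpu_percent", 42, measurements))        # True
print(within_limit("response_seconds", 6.1, measurements))  # False
```

Recording the measurement point alongside the threshold also tells the tester what monitoring the Environment facet must provide.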
6.   Variance The systems we tend to test are complex. There are many servers, switches, routers, operating systems, interfaces, data and users. This makes NFR testing peculiar in that there are rarely pass/fail criteria; rather, there is an acceptable range of values. The variance facet is there to record this. For a performance-related requirement, this facet could have a statement such as: “the average response time will be 3.5 seconds, with 90% of users experiencing response times within 4 seconds and no response taking longer than 7 seconds”.
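A variance statement like the one above translates naturally into three checks over a set of observed response times: the mean, a 90th percentile and a hard ceiling. A sketch, using a simple nearest-rank percentile (the function name and structure are mine):

```python
import math

def variance_check(times, mean_limit=3.5, p90_limit=4.0, max_limit=7.0):
    """Apply the three-part variance statement to observed response times."""
    ordered = sorted(times)
    # Nearest-rank 90th percentile: the value below which 90% of
    # observations fall.
    p90 = ordered[max(math.ceil(0.9 * len(ordered)) - 1, 0)]
    return {
        "mean_ok": sum(times) / len(times) <= mean_limit,
        "p90_ok": p90 <= p90_limit,
        "max_ok": ordered[-1] <= max_limit,
    }

observed = [2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 4.0, 4.0, 4.0, 6.0]
print(variance_check(observed))  # all three checks pass for this sample
```

Splitting the result into three named booleans, rather than a single pass/fail, mirrors the point of the facet: a run can meet the average while breaching the ceiling, and the report should say which.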
As already mentioned, there is a lot of data to be captured at the start of a programme, and this can be difficult, if not impossible. Hence the technical architect, developers, testers and operational support staff all have inputs into this. From a testing perspective, some exploratory testing can be run to help identify the missing facets.

The model does not include elements such as priority, build cycle/iteration, etc. This is because a requirement is an entity in its own right. The management of that entity lies outside of the requirement, but is closely related to it.

From a testing perspective, a test plan will then explain how the tester would test this. The test analysis work then starts, breaking the requirement into distinct elements that a tester can use to build and execute tests. The tester will determine whether mitigations need to be made (e.g. we do not have a representative environment) and the workarounds that need to be put in place.

From the TMF session it became clear that the model seemed to hold up for performance requirements. As a result there are two challenges:
  1. Does the model work for performance requirements? Are the facets correct? Is there something missing, or is there too much?
  2. Can this model be used for the other Non-Functional areas?
    How can this model be used for Disaster Recovery, Recovery, Reliability, Maintainability, Memory Management, Configuration, Portability, Installability, Security, Accessibility and Usability? If you are happy to share any real examples that anyone can use then please post them below.