Thursday 1 August 2013

Update from TMF "Agile Technical Testing – Reality or Myth" session

The following is a summary of the forum discussion, with details about the forum and the slides at the end of this post:

Discussion
The general characteristics identified for a Technical Tester were:
       Understanding code and how software is built: the ability to look inside the system rather than being restricted to a black-box view of it.
       Strong knowledge of test tool sets.
       Excellent understanding of test techniques and standards. A particular ability in exploratory testing was highlighted as important.
       Some domain knowledge, covering both the business and the technology.
       Ability to write good code, particularly for frameworks and toolsmith activities.
       Teach/train/coach developers to test.
       Focus on the functional testing aspects.

In summary, the general description seemed to be:

I'm a super genius functional tester!

The group was challenged with the notion of what values this person should have (see Kent Beck). This was not defined in the group, but is worth considering as a future discussion.

The group indicated that although the ideal technical tester would have all of these, real people would have a mix of these skills in varying degrees. It was noted that the following roles were also complementary and important within an Agile team:

       Business Domain Specialist Tester: Someone who has a very strong understanding of the domain, particularly a business one. For example, this would be someone who really understands derivatives trading and the legislation impacting it.
       Technology Domain/Non-functional/Quality Characteristics Specialist Tester: These would be your Usability, Security, Performance etc. specialists, coming from a more scientific/engineering background, with specific technical knowledge.
       Requirements Tester: In the past this would have been known as a Test Analyst. This is a person who takes requirements and deconstructs them into efficient and effective tests. Discussion included the efficiency savings these people could bring by optimizing the number of tests that would then need to be automated.
       Bug Sniffers: The group described these as “those people that are just great at finding defects”. Typically the group considered these to be people from the business, brought in to have a go, and good at finding those unusual defects.
       Exploratory Testers: Specialist testers with strong exploratory testing approach knowledge and practice.

The session then moved on to discussing some of the blockers to performing technical testing. The following points were made:

       Organisation Culture: The culture of the organisation prevents some of the required techniques from being implemented. This includes, for example, developers not wanting to sit near testers, not valuing testing, etc.
       Definition of technical: The term technical is used in different ways, and a lack of clear understanding of the term prevents the characteristics from being applied. This session proved useful in assisting everyone to clarify their own understanding.
       Lack of skills: The people in the organisation do not have the ability required to implement some of the technical skills. The discussion covered ideas such as the need for some organisations to replace existing staff with people who have these capabilities, if people were not prepared to retrain.
       Lack of awareness: Teams and organisations do not realise that, in changing to a more Agile approach, the skills and characteristics that the staff need to exhibit are different.
       Project in permanent Tactical Mode: The team are continually sprinting and have no time to investigate new techniques and technologies that would enable the development process to move out of a tactical mode.
       Cost: Technical testing requires people with significant skills and this costs more.
       Quality: Many organisations work to an unstated quality level. This impacts the quality level that an Agile team works to, and because it is not explicit it can be misunderstood. The other aspect is that for some organisations it is cheaper to fix defects post go-live.

The group made a strong distinction between the noun and the verb for Technical. The noun applied to a person and the traits that we expect from them; the verb applied to the variety of testing to be run. A major outcome of the session was that:

You cannot do Agile without Technical Testing

However, this does not necessarily mean that you need a Technical Tester, as long as you have a team that collectively covers the characteristics identified above for a Technical Tester. Anyone within an Agile team can deliver the results of these characteristics. The other factor that was clearly identified was that the context of the project is critical to what technical means to the team.

Background
I ran a session at the Test Managers Forum (31 July 2013) in London. The session was constructed as a discussion forum to understand how a group of senior test practitioners, who were delivering in real situations, viewed technical testing. The group consisted of approximately 45 people and the discussion was run over a 75-minute session. The slides for the session are available here:

Friday 29 July 2011

The Perfect Non-Functional Requirement?

I would like to thank all of you who attended my session at the July 2011 Test Managers Forum. The session was a discussion group looking at a proposed model for non-functional requirements (NFRs). Based on the presentation and the “NFR Requirements Cube” (print it out and glue it together), the following is a summary of the proposal and discussion. There is a challenge at the end, so please read on!

The driver for this model was the common complaint that NFRs are difficult to scope, understand, define, etc. When I started looking at performance requirements in detail and thinking about what information a developer and a tester may need to know, I evolved the following model.
At this point I’ll start with the caveats about models and their uses. Models have their place, but all models fail at some point. Not every model can be used in every situation; the context will define what you use. The following model is one example that may be of use.
The other point is the term “Non-functional”. I am personally not keen on this term, but too many organisations, architects, business analysts, developers, standards and formalised training (not just testing) certifications use it for the term to be avoided. A separate thread on this would be interesting, but I want to stick to the principle of the proposed model.
The model is based on a cube (a six-sided die, if you like). Each face (facet) describes an intrinsic part of the requirement's structure. The following describes each facet and what it is intended to cover. It was clearly determined at the TMF session that expecting all of the facets to be covered at the start of a project would not be possible; rather, the requirement would evolve during the lifecycle of the project, iterating into more detail. It was also highlighted that a group of requirements may share some of the same facets.
1. Objective: This is the high-level description, ideally in layman’s terms, of what the system should achieve and why. Some organisations may have these as high-level requirements, acceptance criteria, user stories or high-level test conditions. An example, from an easy-to-relate-to User Story perspective, could be:
“As a <member of the public>
I want <the application to adequately cope with high loads>
So that <I can purchase a ticket quickly for a highly subscribed event>”
As you can see, the objective is written in plain language, i.e. no technical terms, which provides the context for the requirement, its need and how it lies within the overall approach. The caution is to prevent too much data being added here; any details should instead be added as notes or supplementary data. This data will then be used to scope out the remaining five facets.
2. Scenario: This describes the details of the requirement, such as the time of day, the numbers of users, the types of users, the combinations of actions, etc. The typical scenarios for a web-based application relate to the time of day (e.g. 9 am, when 70% of the staff all need to log in), the different roles of users that will be accessing the system, what back-end transactions/batch jobs will be happening, etc. This provides the identification of all the actions being undertaken by the system under test.
3. Profile: The profile describes the timelines associated with the scenario. From a performance perspective this could be in the form of: I want to load 10,000 concurrent users onto the browser over a period of 1 hour, maintain that load for 1 hour and then ramp down to 0 users in 1.5 hours (sketched as code below).
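
To make that concrete, here is a minimal Python sketch of the example profile above as a piecewise-linear function. The function name and constants are my own illustration, not part of the model:

```python
def target_users(minutes_elapsed: float) -> int:
    """Target concurrent users at a given time for the example profile:
    ramp to 10,000 users over 1 hour, hold for 1 hour,
    then ramp down to 0 over 1.5 hours."""
    PEAK = 10_000
    RAMP_UP, HOLD, RAMP_DOWN = 60.0, 60.0, 90.0  # durations in minutes
    t = minutes_elapsed
    if t <= 0:
        return 0
    if t <= RAMP_UP:                        # ramping up
        return round(PEAK * t / RAMP_UP)
    if t <= RAMP_UP + HOLD:                 # steady state at peak
        return PEAK
    end = RAMP_UP + HOLD + RAMP_DOWN
    if t <= end:                            # ramping down
        return round(PEAK * (end - t) / RAMP_DOWN)
    return 0

# target_users(30) == 5_000; target_users(90) == 10_000; target_users(165) == 5_000
```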

4. Environment: Describe the expectations for the environment/infrastructure. These include elements around data, servers, monitoring, load balancer configurations, network settings, etc. Other important elements include caching mechanisms. Also worth considering are the network/infrastructure management systems that are in place, and using the Ops team to monitor whilst the test is under way.
5. Measurement: What measurements do we require? For an NFR this can be particularly unclear: we may want to measure some level of responsiveness in the user interface, but we are also concerned about the CPU and memory usage of critical elements of the system. There may be custom measures that a specific application or set-up will need. So this facet may include statements such as: CPU will not go above 50%; user responsiveness will be within 5 seconds (checked in the sketch after facet 6). One caution here is to also state where the measurement will be made, e.g. is this the response time at a physical user interface, or is it the server response time?
6. Variance: The systems we tend to test are complex. There are many servers, switches, routers, operating systems, interfaces, data and users. This makes NFR testing peculiar in that there are rarely Pass/Fail criteria; rather, there is an acceptable range of values. The variance facet is there to record this. For a performance-related requirement, this facet could have a statement such as: “the average response time will be 3.5 seconds, with 90% of users experiencing 4 seconds or less and no response taking longer than 7 seconds”.
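
To make facets 5 and 6 concrete, here is a minimal Python sketch that checks the results of one test run against the example statements above. The function name, argument names and the nearest-rank percentile choice are my own illustration, not part of the model:

```python
import math

def check_run(cpu_peak_pct: float, response_times: list[float]) -> dict[str, bool]:
    """Check one test run against the example statements:
    measurement -- CPU will not go above 50%;
    variance    -- average response <= 3.5 s, 90% of users <= 4 s,
                   and no response taking longer than 7 s."""
    ordered = sorted(response_times)
    average = sum(ordered) / len(ordered)
    # nearest-rank 90th percentile: the value 90% of samples fall at or below
    p90 = ordered[math.ceil(0.9 * len(ordered)) - 1]
    return {
        "cpu_ok": cpu_peak_pct <= 50.0,
        "average_ok": average <= 3.5,
        "p90_ok": p90 <= 4.0,
        "worst_case_ok": ordered[-1] <= 7.0,
    }

# check_run(42.0, [3.0, 3.2, 3.4, 3.6, 3.8]) -> every check True
```
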
As already mentioned, there is a lot of data to be captured at the start of a programme, and this can be difficult, if not impossible. Hence the technical architect, developers, testers and operational support staff all have inputs into this. From a testing perspective, some exploratory testing can be run to help identify the missing facets.
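
One way to picture this evolution is to treat the cube as a small data structure whose facets are filled in over time. The following Python sketch is my own illustration (the class and field names are assumptions, not part of the published model); unfilled facets are exactly the gaps that exploratory testing can help to identify:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NFRCube:
    """One NFR expressed as the six facets of the cube; a facet left as
    None has not yet been scoped and is filled in as the requirement evolves."""
    objective: str                     # plain-language what and why
    scenario: Optional[str] = None     # who, when, and which actions
    profile: Optional[str] = None      # load shape over time
    environment: Optional[str] = None  # infrastructure expectations
    measurement: Optional[str] = None  # what to measure, and where
    variance: Optional[str] = None     # acceptable range, not pass/fail

def missing_facets(req: NFRCube) -> list[str]:
    """Facets still to be scoped -- candidates for exploratory testing."""
    return [name for name, value in vars(req).items() if value is None]

ticket_req = NFRCube(
    objective="Cope with high load so the public can buy tickets quickly "
              "for a highly subscribed event",
    profile="Ramp to 10,000 users in 1 h, hold for 1 h, down to 0 in 1.5 h",
)
print(missing_facets(ticket_req))
# -> ['scenario', 'environment', 'measurement', 'variance']
```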

The model does not include elements such as priority, build cycle/iteration, etc. This is because a requirement is an entity in its own right. The management of that entity lies outside of the requirement, but is closely related.

From a testing perspective, a test plan will then explain how the tester would test this. The test analysis work then starts, breaking the requirement into distinct elements that a tester can use to build and execute tests. The tester will determine whether mitigations need to be made (e.g. if there is no representative environment) and the workarounds that need to be put in place.

From the TMF session it became clear that the model seemed to hold up for performance requirements. As a result there are two challenges:
  1. Does the model work for performance requirements? Are the facets correct? Is there something missing, or is there too much?
  2. Can this model be used for the other Non-Functional areas?
     How can this model be used for Disaster Recovery, Recovery, Reliability, Maintainability, Memory Management, Configuration, Portability, Installability, Security, Accessibility and Usability? If you are happy to share any real examples that anyone can use then please post them below.