Saturday, 14 July 2012

Testing via Equivalence Partitioning

Equivalence partitioning is the process of defining the optimum number of tests by: 

- Reviewing documents such as the Functional Design Specification and Detailed Design Specification, and identifying each input condition within a function
- Selecting input data that is representative of all other data that would likely invoke the same process for that particular condition

Defining Tests

A number of items must be considered when determining the tests using the equivalence partitioning method, including:

- All valid input data for a given condition are likely to go through the same process
- Invalid data can go through various processes and need to be evaluated more carefully. For example:
  1. A blank entry may be treated differently than an incorrect entry
  2. A value that is less than a range of values may be treated differently than a value that is greater
  3. If there is more than one error condition within a particular function, one error may override the other, which means the subordinate error does not get tested unless the other value is valid

Defining Test Cases

Create test cases that incorporate each of the tests.  For valid input, include as many tests as possible in one test case.  For invalid input, include only one test in a test case in order to isolate the error.  Only the invalid input test condition needs to be evaluated in such tests, because the valid condition has already been tested.

EXAMPLE OF EQUIVALENCE PARTITIONING

Conditions to be Tested

The following input conditions will be tested:

For the first three digits of all social insurance (security) numbers, the minimum number is 111 and the maximum number is 222

For the fourth and fifth digits of all social insurance (security) numbers, the minimum number is 11 and the maximum number is 99 

Defining Tests

Identify the input conditions and uniquely identify each test, keeping in mind the items to consider when defining tests for valid and invalid data.

The tests for these conditions are:

The first three digits of the social insurance (security) number are:

1.        = or > 111 and = or < 222 (valid input)

2.        < 111 (invalid input, below the range)

3.        > 222 (invalid input, above the range)

4.        blank (invalid input, below the range, but may be treated differently)

The fourth and fifth digits of the social insurance (security) number are:

5.        = or > 11 and = or < 99 (valid input)

6.        < 11 (invalid input, below the range)

7.        > 99 (invalid input, above the range)

8.        blank (invalid input, below the range, but may be treated differently)

Using equivalence partitioning, only one value that represents each of the eight equivalence classes needs to be tested.
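
As a rough sketch, one representative value can be recorded per class; the specific sample values below are illustrative assumptions, not taken from any specification:

```python
# One representative value per equivalence class (tests 1-8).
# The sample values are illustrative assumptions only.
representatives = {
    1: "150",  # first three digits, within 111-222 (valid)
    2: "110",  # first three digits, below 111
    3: "223",  # first three digits, above 222
    4: "",     # first three digits, blank
    5: "55",   # fourth and fifth digits, within 11-99 (valid)
    6: "10",   # fourth and fifth digits, below 11
    7: "100",  # fourth and fifth digits, above 99
    8: "",     # fourth and fifth digits, blank
}
```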

Defining Test Cases

After identifying the tests, create test cases to test each equivalence class (i.e., tests 1 through 8).

Create one test case for the valid input conditions (i.e., tests 1 and 5), because the two conditions will not affect each other.

Identify separate test cases for each invalid input (i.e., tests 2 through 4 and tests 6 through 8). Both conditions specified (condition 1, the first three digits; condition 2, the fourth and fifth digits) apply to the social insurance (security) number. Since equivalence partitioning is a black-box technique, the tester does not look at the code, so the manner in which the programmer has coded the error handling for the social insurance (security) number is not known. Separate tests are therefore used for each invalid input, to avoid masking a result in the event one error takes priority over another. For example, if only one error message is displayed at a time and the error message for the first three digits takes priority, then testing invalid inputs for the first three digits and the fourth and fifth digits together does not produce an error message for the fourth and fifth digits. In test cases B through G, only the result for the invalid input needs to be evaluated, because the valid input was tested in test case A.

Suggested test cases:

Test Case A - Tests 1 and 5 (both are valid, therefore there is no problem with errors)

Test Case B - Tests 2 and 5 (only the first one is invalid, therefore the correct error should be produced)

Test Case C - Tests 3 and 5 (only the first one is invalid, therefore the correct error should be produced)

Test Case D - Tests 4 and 5 (only the first one is invalid, therefore the correct error should be produced)

Test Case E - Tests 1 and 6 (only the second one is invalid, therefore the correct error should be produced)

Test Case F - Tests 1 and 7 (only the second one is invalid, therefore the correct error should be produced)

Test Case G - Tests 1 and 8 (only the second one is invalid, therefore the correct error should be produced)
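
A minimal sketch of how test cases A through G might be written as a table-driven (parameterized) test. The validate_sin function is a hypothetical stand-in for the real validation routine, included only so the example runs; its assumed behaviour is to return the name of the first field in error, or None when both fields are valid.

```python
import pytest


def validate_sin(first_three: str, fourth_fifth: str):
    """Stand-in for the real validation routine (assumed behaviour only):
    returns the name of the first field in error, or None when both are valid."""
    if not (first_three.isdigit() and 111 <= int(first_three) <= 222):
        return "first_three"
    if not (fourth_fifth.isdigit() and 11 <= int(fourth_fifth) <= 99):
        return "fourth_fifth"
    return None


@pytest.mark.parametrize(
    "case,first_three,fourth_fifth,expected",
    [
        ("A", "150", "55",  None),            # tests 1 and 5: both valid
        ("B", "110", "55",  "first_three"),   # test 2: below range
        ("C", "223", "55",  "first_three"),   # test 3: above range
        ("D", "",    "55",  "first_three"),   # test 4: blank
        ("E", "150", "10",  "fourth_fifth"),  # test 6: below range
        ("F", "150", "100", "fourth_fifth"),  # test 7: above range
        ("G", "150", "",    "fourth_fifth"),  # test 8: blank
    ],
)
def test_sin_equivalence_classes(case, first_three, fourth_fifth, expected):
    # One invalid input per test case, so one error cannot mask another.
    assert validate_sin(first_three, fourth_fifth) == expected
```

Because each invalid class appears in exactly one row, a failing row points directly at the equivalence class whose error handling is wrong.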

Other Types of Equivalence Classes

The process of equivalence partitioning also applies to testing of values other than numbers. Consider the following types of equivalence classes:

- A valid group versus an invalid group (e.g., names of employees versus names of individuals who are not employees)

- A valid response to a prompt versus an invalid response (e.g., Y versus N and all non-Y responses)

- A valid response within a time frame versus an invalid response outside of the acceptable time frame (e.g., a date within a specified range versus a date less than the range and a date greater than the range)
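
For instance, the Y/N prompt above might partition as follows; which responses the prompt actually accepts, and the representative values chosen, are assumptions made for illustration:

```python
# Illustrative representatives only, one per assumed equivalence class.
prompt_classes = {
    "valid response (Y)":    "Y",
    "invalid response (N)":  "N",
    "other non-Y character": "X",
    "blank response":        "",
}
```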



Testing via Boundary Value Analysis


The purpose of boundary value analysis is to concentrate the testing effort on error-prone areas by accurately pinpointing the boundaries of conditions (e.g., a programmer may code > where the requirement states > or =).

Defining the Tests

To determine the tests for this method, first identify valid and invalid input and output conditions for a given function. 

Then, identify the tests for situations at each boundary.  For example, one test each for >, =, <, using the first value in the > range, the value that is equal to the boundary, and the first value in the < range. 
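
A small helper along these lines can enumerate the values around each edge of a numeric range; the function name and structure are illustrative, not part of any standard library:

```python
def boundary_values(low, high):
    """Values to exercise around each boundary of the valid range [low, high]:
    one just below, one on, and one just above each edge (integer inputs assumed)."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]


# Example: a field whose valid range is 111 to 222
print(boundary_values(111, 222))  # [110, 111, 112, 221, 222, 223]
```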

Boundary conditions do not need to focus only on values or ranges; they can be identified for many other boundary situations as well, such as end of page (i.e., identify tests for production of output that is one line less than the end of page, exactly to the end of page, and one line over the end of page). The tester needs to identify as many such situations as possible; the list of Common Extreme Test Conditions below may help with this process:
     
COMMON EXTREME TEST CONDITIONS
  • zero or negative values
  • zero or one transaction
  • empty files
  • missing files (file name not resolved or access denied)
  • multiple updates of one file
  • full, empty, or missing tables
  • widow headings (i.e., headings printed on pages with no details or totals)
  • table entries missing
  • subscripts out of bounds
  • sequencing errors
  • missing or incorrect parameters or message formats
  • concurrent access of a file
  • file space overflow

EXAMPLE OF BOUNDARY VALUE ANALYSIS

Function to be Tested

For a function called billing, the following specifications are defined:

  • Generate a bill for accounts with a balance owed > 0
  • Generate a statement for accounts with a balance owed < 0 (credit)
  • For accounts with a balance owed > 0:
      • place amounts for which the run date is < 30 days from the date of service in the current total
      • place amounts for which the run date is = or > 30 days, but < or = 60 days, from the date of service in the 30 to 60 day total
      • place amounts for which the run date is > 60 days, but < or = 90 days, from the date of service in the 61 to 90 day total
      • place amounts for which the run date is > 90 days from the date of service in the 91 days and over total
  • For accounts with a balance owed > or = $10.00, for which the run date is = or > 30 days from the date of service, calculate a $3.00 or 1% late fee, whichever is greater
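
To make the stated boundaries concrete, here is a rough sketch of the aging and late-fee rules as code. The function names, bucket labels, and rounding are assumptions for illustration only, not the actual billing implementation:

```python
def age_bucket(days_since_service: int) -> str:
    """Assign an amount to an aging bucket per the boundaries stated above.
    Illustrative sketch only; the bucket labels are assumptions."""
    if days_since_service < 30:
        return "current"
    elif days_since_service <= 60:   # 30 to 60 days
        return "30 to 60 days"
    elif days_since_service <= 90:   # 61 to 90 days
        return "61 to 90 days"
    else:                            # over 90 days
        return "91 days and over"


def late_fee(balance_owed: float, days_since_service: int) -> float:
    """Return $3.00 or 1% of the balance owed, whichever is greater, when the
    balance is at least $10.00 and the amount is 30 or more days old; otherwise 0.
    Assumes the 1% applies to the whole balance owed (one possible reading)."""
    if balance_owed >= 10.00 and days_since_service >= 30:
        return max(3.00, round(0.01 * balance_owed, 2))
    return 0.0
```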

Input and Output Conditions

Identify the input conditions (i.e., information supplied to the function) and output conditions (i.e., information produced by the function).

The input conditions are identified as:

  • balance owed
  • balance owed for late fee

The output conditions are identified as:
  • age of amounts
  • age of amounts for late fee
  • calculation for late fee

Defining Tests

Define tests for the boundary situations for each of the input and output conditions.  For example:

Balance Owed

01.         > 0
02.         = 0
03.         < 0

Age of Amounts

balance owed > 0 and

04.       run date - date of service = 00
05.       run date - date of service = 29
06.       run date - date of service = 30
07.       run date - date of service = 31
08.       run date - date of service = 59
09.       run date - date of service = 60
10.       run date - date of service = 61
11.       run date - date of service = 89
12.       run date - date of service = 90
13.       run date - date of service = 91

Balance Owed for Late Fee

run date - date of service > 30 and

14.       balance owed = $9.99
15.       balance owed = $10.00
16.       balance owed = $10.01

Age of Amounts for Late Fee

balance owed > $10.00 and

17.       run date - date of service = 29
18.       run date - date of service = 30
19.       run date - date of service = 31

Calculation for Late Fee

balance owed > $10.00, run date - date of service > 30 and

20.       1% late fee < $3.00
21.       1% late fee = $3.00
22.       1% late fee > $3.00
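
Tests 14 through 22 can then be written as a table-driven boundary test. The late_fee function below is the same illustrative stand-in as in the sketch above (repeated so the example is self-contained), and the expected fees are derived from that assumed behaviour rather than from the real billing code:

```python
import pytest


def late_fee(balance_owed: float, days_since_service: int) -> float:
    """Illustrative stand-in: $3.00 or 1% of the balance, whichever is greater,
    when the balance is at least $10.00 and the amount is 30 or more days old."""
    if balance_owed >= 10.00 and days_since_service >= 30:
        return max(3.00, round(0.01 * balance_owed, 2))
    return 0.0


@pytest.mark.parametrize(
    "test_id,balance,days,expected_fee",
    [
        # Tests 14-16: balance owed boundary at $10.00 (age held above 30 days)
        ("14", 9.99,   45, 0.00),
        ("15", 10.00,  45, 3.00),
        ("16", 10.01,  45, 3.00),
        # Tests 17-19: age boundary at 30 days (balance held above $10.00)
        ("17", 50.00,  29, 0.00),
        ("18", 50.00,  30, 3.00),
        ("19", 50.00,  31, 3.00),
        # Tests 20-22: where 1% crosses the $3.00 minimum (around a $300.00 balance)
        ("20", 299.00, 45, 3.00),
        ("21", 300.00, 45, 3.00),
        ("22", 301.00, 45, 3.01),
    ],
)
def test_late_fee_boundaries(test_id, balance, days, expected_fee):
    assert late_fee(balance, days) == pytest.approx(expected_fee)
```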

Testing via Error Guessing


The purpose of error guessing is to focus the testing activity on areas that have not been handled by the other, more formal techniques, such as equivalence partitioning and boundary value analysis.  Error guessing is the process of making an educated guess as to other types of areas to be tested.  For example, educated guesses can be based on items such as metrics from past testing experiences, or on the tester's identification of situations in the Functional Design Specification or Detailed Design Specification that are not addressed clearly.

Examples of Error Prone Situations

Though metrics from past test experiences are the optimum basis for error guessing, these may not be available.  Examples of error-prone situations include:
  • initialization of data (e.g., repeat a process to see if data is properly removed)
  • wrong kind of data (e.g., negative numbers, non-numeric versus numeric)
  • handling of real data (i.e., test using data created through the system or real records, because programmers tend to create data that reflects what they are expecting)
  • error management (e.g., proper prioritization of multiple errors, clear error messages, proper retention of data when an error is received, and continued processing after an error when processing is supposed to continue)
  • calculations (e.g., hand calculate items for comparison)
  • restart/recovery (i.e., use data that will cause a batch program to terminate before completion and determine if the restart/recovery process works properly)
  • proper handling of concurrent processes (i.e., for event-driven applications, test multiple processes concurrently)


Example of an Unclear Specification

An example of a test based on an unclear specification is illustrated using a specification from a function called billing:

  • For accounts with a total balance owed > or = $10.00, for which the run date is = or > 30 days from the date of service, calculate a $3.00 or 1% late fee, whichever is greater.


The specification does not clearly state how to handle the late fee when the balance owed is > or = $10.00 but is composed of amounts owed that have different dates of service, and the amount owed for which the run date is = or > 30 days from the date of service is < $10.00.  Using error guessing, a test is designed to address this situation:

  • A balance owed of $13.00 is composed of an amount owed of $6.00 (which has a run date - date of service < 30 days) and an amount owed of $7.00 (which has a run date - date of service > 30 days).
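
One way to pin the test down is as explicit test data with the expected result left open, since the specification does not say which interpretation is correct; the field names below are illustrative:

```python
# Illustrative error-guessing test data for the unclear late-fee case.
# The expected fee is deliberately left open: the specification does not say
# whether the 1%/$3.00 fee is based on the total balance ($13.00) or only on
# the portion that is 30 or more days old ($7.00, which is below $10.00).
mixed_age_balance_case = {
    "amounts_owed": [
        {"amount": 6.00, "days_since_service": 20},  # under 30 days
        {"amount": 7.00, "days_since_service": 45},  # 30 or more days
    ],
    "total_balance": 13.00,
    "expected_late_fee": None,  # to be resolved with the specification's author
}
```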


Tuesday, 3 July 2012

Mobile Application Testing Challenges


Testing!!! No one really wants to do it. It’s expensive. It’s time-consuming. But it’s needed to ensure that our consumers have a positive experience when they use our mobile applications. And it’s vital that you make sure that the experience is a great one for every consumer every time they use our applications, starting with that very first time. But when it comes to testing mobile applications, there are unique challenges.
The mobile enterprise is no longer on its way – it is here. This is creating a mobile app revolution that is driving the need for fast, effective application testing that mimics your user base in terms of technical environments, locations, and demographics. And while it’s tempting to think that mobile apps won’t alter your company or industry, no space is exempt from the mobile revolution.
According to a recent survey by Bloomberg Businessweek Research Services, enterprise mobility is no longer just for email. Employees are using mobile apps to access CRM systems, financial results, and marketing campaigns, and to track orders, to name just a few uses. In fact, ABI Research anticipates worldwide enterprise mobile data revenues will reach $133 billion by 2014.
New apps for BlackBerry, iPhone, iPad, and Android are making deep inroads into enterprise organizations in industries as diverse and mature as healthcare, finance, education, media, and retail. This means that the pressure to get high quality mobile apps built, tested, and launched has never been greater. With so much critical data flowing to smart phones and tablets, companies must ensure that their mobile apps are stable, private, and secure. Even the smallest flaw can ruin a mobile app, and sometimes, the company behind it.

If an organization does not focus on the functionality, usability, reliability, and security of the application, they may find themselves in the awkward position of explaining to their customers, or the CEO, why their application was rejected by the app store, or why users are sharing their dissatisfaction on Twitter, Facebook, TechCrunch, and other sites. This mobile quality challenge calls for a better way to test, one that meets the “in-the-wild” testing demands of mobile apps.

Three Alternative Testing Methods
The three testing approaches that have historically been used in mobile are insufficient for the challenges of this new reality. That doesn’t mean they are bad or ill-intentioned, merely that they aren’t sufficient on their own. Here’s a quick summary:

  1. In-House: Building a comprehensive in-house testing lab is extremely time-consuming and expensive. Imagine the expense of building an in-house team and lab capable of assuring the functionality for iPhone, BlackBerry and Android handsets (of all makes and models) across wireless carriers in the U.S., U.K., Australia, China and Japan. For reasons of cost and coverage, it’s no surprise that mobile app companies rarely rely solely on in-house testing resources.
  2. Emulators/Simulators: One of the biggest challenges for mobile developers is that traditional testing occurs in an environment far removed from the real world. The gap between “in-the-lab” simulation and “in-the-wild” usage is vast and cannot be ignored. The convenience of simulators and emulators has made it easy to be lured into a false sense of security, but they should not be considered a substitute for real-world, on-device testing.
  3. Beta Testers: It’s rare for a software company to attract a large group of beta testers to test their app. After all, not every company can be Google, with its wildly popular beta versions. But even if you can assemble a large beta group, the method still falls short on its own. First, if a beta goes poorly, most companies can’t afford to have it happen in the bright lights of the blogosphere or Twitter. Second, beta testers are more like ordinary users in that they will only try to get your app to function properly; a real tester will systematically structure their usage to identify weaknesses in your app.


CROWDSOURCED Mobile App Testing

The increasingly fragmented device and platform environment has escalated the demand for comprehensive, always-on global testing; however, testing mobile apps has traditionally been difficult and expensive. No matter what type of mobile app (multimedia, chat, business, or productivity tools), all mobile app developers face the same testing complexity across:
  1. Handset Makers & Models
  2. Operating Systems
  3. Browsers
  4. Wireless Carriers
  5. Languages (for multi-geo apps)
  6. Location, Location, Location


Through crowdsourcing, companies can meet mobile’s “in the wild” testing needs by utilizing a community of diverse and talented professional testers, capable of testing their app across any and all criteria, on an on-demand basis. Your users are distributed around the country (or globe), so your testers should be too. And just as your users use your app outside the sterile confines of the testing lab, under “in the wild” conditions, so too should your testers. With the rapid evolution of crowdsourced testing, top companies are doing the impossible: maintaining app quality, achieving broad testing coverage, meeting launch dates, and staying within budget.