Thursday, 4 October 2012

Characteristics of a Tester


All professions in the world have certain traits or features attached to them.  Software testing is no exception.  Many times you must have seen that, as a tester, your inputs are not accepted in real-world software projects.  You may also be working overtime for no fault of yours, but due to bad planning by others.  You are often asked to “complete the testing” or “wind up testing”, yet also asked to ensure that a defect-free product is delivered.  What should you do to be prepared for such situations?  How do you deal with the different groups of people in a software project?  What characteristics portray a successful “software tester”?  Read on…

Communication skills – oral and written

There are situations when, as a tester, you will have to meet and interact with different groups of people – the business analysts, the design team, the development team and other testing teams.  In those situations, it is necessary to explain yourself (the issues/problems/clarifications that you may have) to them.  This has to be conveyed in the correct way, so that the person(s) sitting across the table are able to understand you clearly. It is also very important that any defect you write up can be clearly understood by the receiving team.

Be clear and concise in your oral and written communication.


Critical eye

Look out for details.  As a tester you should be in a position to spot any implicit or unstated requirements that need clarification from the clients.  Always check for “what if” scenarios and get the answers.  Think about a problem/requirement in a multi-dimensional way.

For example, during testing of a home banking application, there was a requirement to display all balances in dual currencies (Euro/local currency).   The business requirement stated the list of screens, pertaining to the Withdrawal and Balance Inquiry modules, for which this requirement was to be implemented.  On analysis, the requirement impacted more places, such as the screens where the user checks the history of transactions, makes a deposit, or sets up a standing instruction to the bank.  On discussing the issue with the client, it turned out they thought this was implied!

Have a critical eye for minor details, even when others might think them insignificant.

Do not assume things

This continues from the previous point: as a tester, it is very important that you do not make assumptions about requirements / issues / problems / defects.  For example, never assume that on a data-capture screen a button labeled “Clear” clears the entire content, even though it seems pretty evident.  Ask more questions and get affirmations from the respective persons.

Also, remember that it is necessary to test the software from the end-user’s perspective, and not just for compliance with the requirements given by the client.

Never assume anything.  It may not be true, however obvious it seems!


Convincing skills

It is a skill to convince the person who developed the software, and to explain why a defect report you have written is indeed a defect.  Put it forth in such a way that the developer also starts thinking along the lines of the scenario given in the defect report / requirement.  Remember in such situations to put yourself in the other person’s shoes and speak accordingly.  Avoid accusatory words, do not get into arguments, and do not rise to any bait a developer may throw at you.  Concentrate on the issue, not on the person.

Also, while deciding on the end of testing activities, it is important to bring out the impact of the open defects in the software and their implications to the business analyst / client, or whoever is the logical end-point contact.  It is necessary to push your point through, either to continue with further testing activities or to stop testing.

Develop good convincing skills!

Be factual

While reporting a defect or asking for clarification of a requirement, be as factual as possible.  Do not bring your own suggestions / views into the picture.  Do not use words that characterize the quality of the work or the person who developed the software.

For example, do not bring in phrases like “badly developed software”, “often crashing software”, etc.  Do not use such phrases even during interactions with the developer. Such behaviour will only be counterproductive, and will result in people not taking your defect reports seriously.

Be a good reporter of facts!


Effective listening

While discussing a defect report / requirement clarification, give a good hearing to the other person’s view or perspective.  Understand the limitations of the software and try to find ways to resolve such issues.  For example, if there is an issue pertaining to display / functionality in one particular browser version, ensure that it is listed under “Known issues” in the Release Notes, the Limitations section, or the Readme file.

Be flexible whenever required!

Provide constructive criticism

While discussing any issues / defects / requirement clarifications with the developer / business analyst, do not use words that point to their personal characteristics.  Be very tactful in describing issues / defects, and try not to point fingers at the person who developed the software or who collected the requirements for it.


Be empathetic

Listen carefully to the developer who built the software or the business analyst who collected the requirements.  Try to understand the realities and limitations.  Do not argue over trivial issues.  Try to resolve limitations / issues in different possible ways.

Develop a good rapport with the other teams; it helps!

 

Effective follow up of issues

Many times, defects are written and deferred to the next release, and at the start of the next release not all of them are picked up and fixed.  The status of defects left over from previous releases needs to be discussed at the beginning of every release with the business analyst / development team.  Also, any issue that has not been converted to a defect needs closer follow-up, and should be documented under Open Issues at the end of the testing activity.

Be good at follow-ups!

Good reviewer

Be a good reviewer and look out for inconsistencies in the implementation of a particular requirement across different sections.  Review all user documentation, apart from testing the software.  Check for inconsistencies in the description of the software, and look for a glossary and an index; these enable easy searching on topics of the user’s choice.

Sharpen your reviewing skills!

  
Conclusion

To sum up, as a tester you need a special set of interpersonal skills, over and above the technical and functional skills.  Make a start, be aware, and practice.




Metrics used in Testing


Defect Metrics

Analysis of the defect reports is done for management and client information. The metrics are categorized as follows:

Defect Age: Defect age is the duration between the point of introduction of a defect and the point of its closure. This gives a fair idea of the defect set to be included in the smoke test during regression.
Defect Analysis: Defects can be analyzed based on severity, occurrence and category. As an example, Defect Density is a metric giving the ratio of defects in a specific module to the total defects in the application. Further analysis and derivation of metrics can be done based on the various components of defect management.

Test Management Metrics

Analysis of test management data is done for management and client information. The metrics are categorized as follows:

Schedule: Schedule Variance is a metric determined by the ratio of the planned duration to the actual duration of the project.

Effort: Effort variance is a metric determined by the ratio of the planned effort to the actual effort exercised for the project.
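As a rough sketch of how these numbers are computed (the figures are hypothetical, and the variances are expressed here as percentage deviation from plan, one common convention):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical project figures. */
        double module_defects = 12, total_defects = 60;
        double planned_days   = 40, actual_days   = 50;
        double planned_effort = 320, actual_effort = 368;  /* person-hours */

        /* Defect Density as defined above: module defects / total defects. */
        printf("Defect density:    %.2f\n", module_defects / total_defects);

        /* Schedule and Effort Variance as percentage deviation from plan. */
        printf("Schedule variance: %.1f%%\n",
               (actual_days - planned_days) / planned_days * 100);
        printf("Effort variance:   %.1f%%\n",
               (actual_effort - planned_effort) / planned_effort * 100);
        return 0;
    }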

Wednesday, 29 August 2012

Role of PDCA in Testing Process Improvement


If you recall, the PDCA approach, i.e., plan, do, check, and act, is a mechanism used to control, supervise, govern and regulate a system. The approach first defines the objectives of a process, then develops and carries out a plan to meet those objectives, and checks whether the anticipated results are achieved. If they are not, the plan is modified to fulfill the objectives.



The same PDCA quality cycle can be applied to software testing.

The Plan step of the continuous improvement process, when applied to software testing, starts with a definition of the test objectives, e.g., what is to be accomplished as a result of testing. Test criteria do more than simply ensure that the software performs according to specifications; well-defined objectives ensure that all responsible individuals contribute to the definition of the test criteria, in order to maximize quality.

The Do step of the continuous improvement process when applied to software testing describes how to design and execute the tests included in the test plan. The test design includes test cases, test procedures and scripts, expected results, function/test case matrix, test logs, etc. The more definitive a test plan is, the easier the test design will be.

The Check step of the continuous improvement process, when applied to software testing, includes the evaluation of how the testing process is progressing. It is important to base decisions as much as possible on accurate and timely data. Testing metrics such as the number and types of defects, the workload effort, and the schedule status are key. It is also important to create test reports: summary and interim test reports should be written at the end of testing and at key testing checkpoints.

The Act step of the continuous improvement process when applied to software testing includes devising measures for appropriate actions relating to work that was not performed according to the plan or results that were not anticipated in the plan.


Mutation Testing


Does "mutation testing" ring any bells? I am sure it reminds you of X Men series, where we heard the word mutants a lot number of times.  So who were they? They were actually the ones whose genes were modified or have some special change from the rest. Likewise in Mutation Testing, we make the code a mutant and follow the changes in Test Suite behaviour.

What is Mutation Testing?

It is assumed that the more cases a test suite contains, the higher the probability that the program will work correctly in the real world. Mutation testing was introduced as a way of measuring the accuracy of test suites. In general, there is no easy way to tell whether a test suite tests the program thoroughly or not. If the program passes the test suite, one may only say that the program works correctly on the cases included in the suite; this checks the program against the suite, but not the suite itself. There is no mathematical way to measure how accurate the test suite is, or the probability that the program will work correctly.

Concept of Killed and Equivalent mutants

The idea of mutation testing was introduced to solve the problem of measuring the accuracy of test suites. In mutation testing, one is in some sense trying to solve this problem by inverting the scenario.

The thinking goes as follows: Let’s assume that we have a perfect test suite, one that covers all possible cases. Let’s also assume that we have a perfect program that passes this test suite. If we change the code of the program (this process is called mutating) and we run the mutated program (mutant) against the test suite, we will have two possible scenarios:

  • The results of the program were affected by the code change and the test suite detects it. We assumed that the test suite is perfect, which means that it must detect the change. If this happens, the mutant is called a killed mutant.

  • The results of the program are not changed and the test suite does not detect the mutation. The mutant is called an equivalent mutant.

So the quality of the Test Suite is judged by this as follows:

Quality of the test suite, Q = (# of killed mutants) / (# of mutants generated).

If Q < 1, it should be a warning sign of how sensitive the program is to code changes.

In the normal world, we do not have the perfect program and we do not have the perfect test suite. Thus, we can have one more scenario:

  • The results of the program are different, but the test suite does not detect it because it does not have the right test case.

If we again calculate the same ratio as above and get a number smaller than 1, that too should be taken as an indication of the accuracy of the test suite.

In practice, there is no way to separate the effect that is related to test suite inaccuracy and that which is related to equivalent mutants. In the absence of other possibilities, one can accept the ratio of killed mutants to all the mutants as the measure of the test suite accuracy.


The following C code example illustrates the ideas described above.
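It is a minimal sketch: the average function, the test harness and the test values are all illustrative assumptions.

    #include <stdio.h>

    /* Program under test: integer average of the first n elements of a[].
       Hidden errors: n == 0 divides by zero, and a negative or oversized n
       reads outside the bounds of the array. */
    int average(const int a[], int n) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum / n;
    }

    /* The test suite: five positive test cases, all with legal inputs. */
    int run_suite(int (*avg)(const int[], int)) {
        static const int data[] = {2, 4, 6, 8, 10};
        int failures = 0;
        failures += (avg(data, 2) != 3);   /* test case 1 */
        failures += (avg(data, 3) != 4);   /* test case 2 */
        failures += (avg(data, 4) != 5);   /* test case 3 */
        failures += (avg(data, 5) != 6);   /* test case 4 */
        failures += (avg(data, 1) != 2);   /* test case 5 */
        return failures;
    }

    int main(void) {
        printf("failures: %d\n", run_suite(average));   /* prints: failures: 0 */
        return 0;
    }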

Could you detect the serious hidden errors in this test suite?

This test suite is quite representative of test suites in the industry. It tests positive cases, i.e., it checks that the program reports correct values for correct inputs. It completely neglects illegal inputs to the program. The program fully passes the test suite; however, it has serious hidden errors.

Now, let’s mutate the program. We can start with the following simple changes:
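For the sketch above, the changes might be three one-token mutants of average, chosen for illustration so that two turn out equivalent and one is killable:

    /* Mutant 1 (decision mutation): the loop test i < n becomes i != n. */
    int average_m1(const int a[], int n) {
        int sum = 0;
        for (int i = 0; i != n; i++)
            sum += a[i];
        return sum / n;
    }

    /* Mutant 2 (value mutation): sum / n becomes sum / (n - 1).
       Divides by zero when n == 1, which is exactly test case 5. */
    int average_m2(const int a[], int n) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum / (n - 1);
    }

    /* Mutant 3 (value mutation): the loop bound i < n becomes i <= n - 1. */
    int average_m3(const int a[], int n) {
        int sum = 0;
        for (int i = 0; i <= n - 1; i++)
            sum += a[i];
        return sum / n;
    }

    /* Running each mutant through the suite:
       run_suite(average_m1) == 0   -> passes (not killed)
       run_suite(average_m2)        -> fails tests 1-4; test 5 divides by zero
       run_suite(average_m3) == 0   -> passes (not killed) */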

If we run these mutants against the test suite, we will get the following results:

Mutants 1 and 3 - the program completely passes the test suite.
Mutant 2 - the program fails the test cases.

Mutants 1 and 3 do not change the output of the program, and are thus equivalent mutants.
The test suite does not detect them.

Mutant 2, however, is not an equivalent mutant. Test cases 1-4 will detect it through wrong output from the program. Test case 5 may have different behaviour on different machines. It may show up as bad output from the program, but at the same time, it may be visible as a program crash.

If we calculate the statistics, we see that we created three mutants and only one was killed.

Thus, the quality of the test suite = 1/3. As we can see, the number 1/3 is low. It is low because we generated two equivalent mutants. This number should serve as a warning that we are not testing enough. In fact, the program has two serious errors that should be detected by the test suite.


Kinds of Mutation

  • Value Mutation - these mutations involve changing the values of constants or parameters (by adding or subtracting values, etc.), e.g. a loop bound being one out at the start or finish is a very common error.
  • Decision Mutation - this involves modifying conditions to reflect potential slips and errors in the coding of conditions in programs, e.g. a typical mutation might be replacing a > by a < in a comparison.
  • Statement Mutation - these might involve deleting certain lines to reflect omissions in coding, or swapping the order of lines of code. There are other operations, e.g. changing operations in arithmetic expressions. A typical omission might be to omit the increment of some variable in a while loop.

Benefits of Mutation testing

  • It provides the tester with a target: the tester has to develop test data capable of killing all the generated mutants. Hence, we can generate an effective test data set that is powerful enough to find errors in the program.
  • Another advantage of mutation testing is that even if no error is found, it still gives the user information about the quality of the program tested.
  • Mutation testing makes a program less buggy and more reliable, and increases confidence in the working of the product – which is the bottom line of any software testing activity.

Protocol Testing


Have you ever wondered how two people can chat face-to-face with each other irrespective of the distance between them?  Believe us, this is not magic; it’s just the game of protocols. But what is this “protocol”?

A protocol is a special set of rules that is followed in a telecommunication network when two entities communicate with each other. Protocols specify the interactions between the communicating entities. In fact, a protocol is an agreed-upon format for transmitting data between two devices.

Importance of Protocol

Protocols play a key role in today's communication world; without them it is not possible for one computer to communicate with another. Let’s take an example from daily life to understand the importance of protocols.
Imagine you are in France but you don't know how to speak French. Is it possible for you to talk to a man who doesn't know any language except French?
Well, certainly not! You could try to communicate with him non-verbally, but you would find it difficult to let him know your thoughts and ideas; to get the ball rolling, either you need to know French or he needs to know the language that you speak!

From this example, it is apparent that if two human beings want to communicate then they must understand and speak in a common language.

The same analogy applies in modern communication systems: if one piece of equipment wants to receive information from, or send information to, another, then both of them must use the same language to accomplish the task. This is where protocols come into the picture.

“A protocol is a set of rules that govern communication between two or more pieces of equipment.”



Common protocols

  • IP (Internet Protocol)
  • UDP (User Datagram Protocol)
  • TCP (Transmission Control Protocol)
  • DHCP (Dynamic Host Configuration Protocol)
  • HTTP (Hypertext Transfer Protocol)
  • FTP (File Transfer Protocol)
  • Telnet (Telnet Remote Protocol)
  • SSH (Secure Shell Remote Protocol)
  • POP3 (Post Office Protocol 3)
  • SMTP (Simple Mail Transfer Protocol)
  • IMAP (Internet Message Access Protocol)
  • CDMA2000 1xRTT (CDMA 1x Radio Transmission Technology)


Testing Protocols

Product companies like Cisco, Nortel, Juniper, Alcatel and Huawei build networking devices like routers, switches, modems, wireless access points and firewalls. These devices use different protocols to communicate; e.g. Cisco routers use EIGRP, OSPF, etc. to exchange routing information. These implementations certainly need testing to ensure that communication works correctly through these protocols.

Protocol testing is testing that a given protocol functions as required with respect to its RFC. It involves testing functionality, the protocol stack, interoperability, performance and so on.

Usually protocol testing is done by connecting a DUT (Device Under Test) to other devices like routers/switches, configuring the protocol on it, and then checking the structure of the packets sent by the devices, the protocol algorithm, performance, scalability, etc., using tools like Wireshark, Ixia, Spirent, etc.

In general, protocol testers work by capturing the information exchanged between a Device Under Test (DUT) and a reference device known to operate properly. In the example of a manufacturer producing a new keyboard for a personal computer, the Device Under Test would be the keyboard and the reference device the PC. The information exchanged between the two devices is governed by rules set out in a technical specification called a "communication protocol". Both the nature of the communication and the actual data exchanged are defined by the specification.

The captured information is decoded from raw digital form into a human-readable format that lets the protocol tester easily review the exchange. Protocol testers vary in their ability to display data in multiple views, automatically detect errors, determine the root causes of errors, generate timing diagrams, etc. Sometimes protocol testers may also be required to generate protocol-correct traffic for functional testing, and to deliberately introduce errors to test the DUT's ability to deal with error conditions.
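As a toy illustration of that decoding step, turning raw captured bytes into human-readable fields, here is a minimal sketch that parses the fixed part of an IPv4 header (the byte buffer is hypothetical):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Minimal IPv4 header decode: pulls a few fields out of a raw capture.
       Real protocol testers decode entire stacks; this shows only the idea. */
    void decode_ipv4(const uint8_t *pkt, size_t len) {
        if (len < 20) { puts("truncated packet"); return; }
        unsigned version = pkt[0] >> 4;
        unsigned ihl     = (pkt[0] & 0x0F) * 4;     /* header length in bytes */
        unsigned total   = (pkt[2] << 8) | pkt[3];  /* total length, network order */
        unsigned ttl     = pkt[8];
        unsigned proto   = pkt[9];                  /* 6 = TCP, 17 = UDP */
        printf("IPv%u  header=%u bytes  total=%u bytes  ttl=%u  proto=%u\n",
               version, ihl, total, ttl, proto);
        printf("src=%d.%d.%d.%d  dst=%d.%d.%d.%d\n",
               pkt[12], pkt[13], pkt[14], pkt[15],
               pkt[16], pkt[17], pkt[18], pkt[19]);
    }

    int main(void) {
        /* Hypothetical captured header: IPv4, TTL 64, UDP, 10.0.0.1 -> 10.0.0.2 */
        uint8_t pkt[20] = {0x45, 0, 0, 20,  0, 0, 0, 0,  64, 17, 0, 0,
                           10, 0, 0, 1,  10, 0, 0, 2};
        decode_ipv4(pkt, sizeof pkt);
        return 0;
    }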

Since communication protocols are state-dependent (what should happen next depends on what previously happened), specifications are complex and the documents describing them can be hundreds of pages. This makes the job of protocol testing quite challenging.

Protocol testing is an essential step towards commercialization of standards-based products. It helps to ensure that products from different manufacturers will operate together properly ("interoperate") and so satisfy customer expectations. This type of testing is imperative for new emerging communication technologies.

Penetration Testing [Breaking-IN before the Bad Guys]



Has someone ever told you what the password of your mail ID is, leaving you stunned and wondering which of your personal mails they have read? Believe it: this is no longer just “script kiddies” breaking into your network, and the problem is far more severe in multi-tier network architectures, Web services, custom applications and heterogeneous server platform environments. In the past several years, it has become apparent that there is real money to be made from criminal hacking, and identity theft is one of the world’s fastest growing problems.
Although there are many ways to secure systems and applications, being in the shoes of a hacker is a completely new way to test them: with PENETRATION TESTING you can actually replicate the types of actions a malicious attacker would take.
Penetration testing has evolved from being ad hoc to a robust and trustworthy testing methodology, with the use of high-quality commercial tools. In the hands of a properly trained penetration tester, these testing methodologies provide a stable, quality-assured approach that can be used to accurately assess systems by penetrating existing vulnerabilities.

Let’s define Penetration Testing:
  
Penetration testing is a process of assessing your overall security before hackers do. It is a testing technique for discovering, understanding and documenting all the security holes that can be found in a system.
The person who attempts to gain access to resources without knowledge of usernames/passwords is a hacker/attacker, whereas the person who does it officially (with prior authorization) is a penetration tester. In other words, unauthorized attackers are hackers and authorized attackers are penetration testers.

A penetration tester must act as a hacker/attacker while doing penetration testing. It is important to understand that penetration testing can never prove the absence of security flaws; it can only prove their presence.

Why Penetration Testing:

Ask yourself: do you want your application to be attacked by hackers? Better to attack it yourself first.


Aspects of Penetration Testing -

*       Find Holes Now Before Somebody Else Does
The goal is that the penetration tester will find ways into the network so that they can be fixed before someone with less-than-honorable intentions discovers the same holes. Think of a penetration test as an annual medical physical: even if you believe you are healthy, your physician will run a series of tests (some old and some new) to detect dangers that have not yet developed symptoms.
*       Report Problems to Management
Penetration testing results help demonstrate the lack of security in the environment to upper-level management.  Often an internal network team will be aware of weaknesses in the security of their systems, but will have trouble getting management to support the changes that would be necessary to secure them. When an outside group with a reputation for security expertise analyzes a system, management will often give that opinion more respect. Remember that ultimate responsibility for the security of IT assets rests with management, because it is they, not the administrators, who decide what the acceptable level of risk is for the organization.
*       Verify Secure Configurations
If the CSO (or security team) is confident in their actions and final results, the penetration test report verifies that they are doing a good job. The penetration test doesn’t make the network more secure, but it does identify gaps between knowledge and implementation.
*       Security Training For Network Staff
Penetration testing gives security people a chance to recognize and respond to a network attack. For example, if the penetration tester successfully compromises a system without anyone knowing, this could be indicative of a failure to adequately train staff on proper security monitoring.
*       Discover Gaps in Compliance
Using penetration testing as a means to identify gaps in compliance is a bit closer to auditing than true security engineering, but experienced penetration testers often breach a perimeter because someone did not get all the machines patched, or possibly because a non-compliant machine was put up “temporarily” and ended up becoming a critical resource.
*       Testing New Technology
The ideal time to test new technology is before it goes into production.  Testing at this point can often save time and money, because it is easier to test and modify new technology while nobody is relying on it.

How do we perform penetration testing?
Although there are various methodologies that a penetration tester can follow, there are broadly 4 main phases:


4 Stage Penetration Testing Methodology
  • Planning - The planning phase is where the scope for the assignment is defined. Management approvals, documents and agreements etc. are signed. The penetration testing team prepares a definite strategy for the assignment.
  • Discovery - The discovery phase is where the actual testing starts; it can be regarded as an information gathering phase. This phase can be further categorized as follows:
    • Foot-printing phase - gathering the maximum possible information available about the target organization and its systems, using various means, both technical and non-technical. This involves searching the internet and querying various public repositories (databases, domain registrars, Usenet groups, mailing lists, etc.).
    • Scanning and Enumeration phase - identifying live systems, open / filtered ports, the services running on those ports, mapping router / firewall rules, identifying operating system details, network path discovery, etc. (a toy port-scan sketch follows this list).
    • Vulnerability Analysis phase - finding any possible vulnerabilities existing in each target system. During this phase a penetration tester may use automated tools to scan the target systems for known vulnerabilities. These tools usually have their own databases of the latest vulnerabilities and their details.
  • Attack - This is the phase that separates the Men from the Boys. This is at the heart of any penetration test, the most interesting and challenging phase.
This phase can be further categorized into:
    • Exploitation phase - During this phase a penetration tester will try to find exploits for the various vulnerabilities found in the previous phase.
    • Privilege Escalation phase - There are times when a successful exploit does not lead to root access. At such a point, further analysis has to be carried out on the target system to gain more information that could lead to administrative privileges, e.g. local vulnerabilities, etc.
  • Reporting - This stage can occur in parallel to the other three stages, or at the end of the Attack stage. Many penetration testers do not concentrate on this stage and follow a hurried approach to make all the submissions, but it is probably the most important phase of all; after all, the organization is paying you for this final document.
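As referenced above, here is a minimal sketch of the kind of probing done in the Scanning and Enumeration phase: a toy TCP connect() scan, assuming a POSIX system and a hypothetical localhost target you are authorized to test.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Toy TCP connect() scan: tries a handful of well-known ports on one
       host. Only for systems you are authorized to test. */
    int main(void) {
        const char *target = "127.0.0.1";          /* hypothetical target */
        int ports[] = {21, 22, 23, 25, 80, 443};
        int nports = sizeof ports / sizeof ports[0];

        for (int i = 0; i < nports; i++) {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            if (s < 0)
                continue;
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons((unsigned short)ports[i]);
            inet_pton(AF_INET, target, &addr.sin_addr);
            /* A successful connect() means something is listening. */
            if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0)
                printf("port %d open\n", ports[i]);
            close(s);
        }
        return 0;
    }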

Mobile App Testing – The Challenges


There is no denying that mobile phones, smartphones in particular, are in vogue these days. Gradually, businesses across various sectors are shifting focus towards mobile applications, because users are more interested in browsing the web on their pocket-size devices than viewing it on bulky PCs. The tremendous growth in mobile users has opened up a new market, flooded with various mobile platforms and devices and thousands of applications to run on them.
Imagine if the alarm on your phone didn’t go off in the morning, your old texts suddenly went missing, or you couldn’t make that important call. This dreadful thought actually sums up the importance of mobile application testing.
Below are some points which briefly explain why special skills are required in mobile application testing.

1) Diversity in Mobile Device platforms
Android, BlackBerry, Nokia’s Symbian and Apple’s iPhone have together grabbed a large part of the smartphone market. But these are not the only ones; many other platforms are in use, like BREW, Brew MP, Windows Phone 7, etc. While testing any multi-platform mobile application, it is necessary to test it on each platform while carrying out UI testing, functional testing, etc. This poses a challenge, as many of these platforms may behave differently when triggered by the same action, and each platform may have its own limitations as well.
2) Diversity of the Mobile Devices
There is a huge variety of mobile devices available in the market, with different screen sizes, different input methods (touch screens, QWERTY keypads, trackballs) and different hardware capabilities. Mobile devices also have different application runtimes, like the Binary Runtime Environment for Wireless, Java, etc. Some mobile devices communicate through WAP and some use HTTP. Extensive testing of a mobile application is therefore important to ensure its compatibility with devices that vary across all of the above characteristics.
 3) Diversity in Hardware Configuration
Apart from diversity in platforms and devices, there is diversity in hardware as well. Mobile devices come with various processors, amounts of RAM and internal memory, and various sensors like proximity sensors, accelerometers, GPS, gyroscopes, etc. Diverse hardware configurations bring their own challenges: the mobile environment provides less memory and processing power than a PC, which reduces processing speed and causes variations in application performance. Therefore, exhaustive testing of mobile applications is required to deliver optimum performance on all desired hardware configurations.
4) Diversity in Network
There is always unpredictability in network latency when applications communicate across network boundaries, leading to inconsistent data transfer speeds. This demands testing to measure the performance of applications over the various network bandwidths of various service providers. Wireless networks use data optimizers like gateways to deliver content, which may result in decreased performance under heavy traffic.  Therefore, testing should be performed to determine the traffic level at which gateway capabilities begin to impact the performance of the mobile application.
The challenges mentioned above are just a few of the lot, but they should be enough to emphasize the need for thorough and diversified testing of mobile applications using specialized skills.

Saturday, 14 July 2012

Testing via Equivalence Partitioning

Equivalence partitioning is the process of defining the optimum number of tests by: 

- Reviewing documents such as the Functional Design Specification and Detailed Design Specification, and identifying each input condition within a function
- Selecting input data that is representative of all other data that would likely invoke the same process for that particular condition

Defining Tests

A number of items must be considered when determining the tests using the equivalence partitioning method, including:

- All valid input data for a given condition are likely to go through the same process
- Invalid data can go through various processes and needs to be evaluated more carefully.  For example:
  1. A blank entry may be treated differently than an incorrect entry
  2. A value that is less than a range of values may be treated differently than a value that is greater
  3. If there is more than one error condition within a particular function, one error may override the other, which means the subordinate error does not get tested unless the other value is valid

Defining Test Cases

Create test cases that incorporate each of the tests.  For valid input, include as many tests as possible in one test case.  For invalid input, include only one test in a test case in order to isolate the error.  Only the invalid input test condition needs to be evaluated in such tests, because the valid condition has already been tested.

EXAMPLE OF EQUIVALENCE PARTITIONING

Conditions to be Tested

The following input conditions will be tested:

For the first three digits of all social insurance (security) numbers, the minimum number is 111 and the maximum number is 222

For the fourth and fifth digits of all social insurance (security) numbers, the minimum number is 11 and the maximum number is 99 

Defining Tests

Identify the input conditions and uniquely identify each test, keeping in mind the items to consider when defining tests for valid and invalid data.

The tests for these conditions are:

The first three digits of the social insurance (security) number are:

1.        =  or > 111 and = or < 222, (valid input),

2.        < 111, (invalid input, below the range),

3.        > 222, (invalid input, above the range),

4.        blank, (invalid input, below the range, but may be treated differently).

The fourth and fifth digits of the social insurance (security) number are:

5.        = or > 11 and = or < 99, (valid input),

6.        < 11, (invalid input, below the range),

7.        > 99, (invalid input, above the range),

8.        blank, (invalid input, below the range, but may be treated differently).

Using equivalence partitioning, only one value that represents each of the eight equivalence classes needs to be tested.

Defining Test Cases

After identifying the tests, create test cases to test each equivalence class, (i.e., tests 1 through 8).

Create one test case for the valid input conditions, (i.e., tests 1 and 5), because the two conditions will not affect each other.

Identify separate test cases for each invalid input, (i.e., tests 2 through 4 and tests 6 through 8).  Both conditions specified, (i.e., condition 1 - first three digits, condition 2 - fourth and fifth digits), apply to the social insurance (security) number.  Since equivalence partitioning is a type of black-box testing, the tester does not look at the code and, therefore, does not know how the programmer has coded the error handling for the social insurance (security) number.  Separate tests are used for each invalid input, to avoid masking the result in the event one error takes priority over another.  For example, if only one error message is displayed at a time, and the error message for the first three digits takes priority, then testing invalid inputs for the first three digits and the fourth and fifth digits together does not produce an error message for the fourth and fifth digits.  In tests B through G, only the results for the invalid input need to be evaluated, because the valid input was tested in test case A.

Suggested test cases:

Test Case A - Tests 1 and 5, (both are valid, therefore there is no problem with errors) 

Test Case B - Tests 2 and 5, (only the first one is invalid, therefore the correct error should be produced)

Test Case C - Tests 3 and 5, (only the first one is invalid, therefore the correct error should be produced)

Test Case D - Tests 4 and 5, (only the first one is invalid, therefore the correct error should be produced)

Test Case E - Tests 1 and 6, (only the second one is invalid, therefore the correct error should be produced)

Test Case F - Tests 1 and 7, (only the second one is invalid, therefore the correct error should be produced)

Test Case G - Tests 1 and 8, (only the second one is invalid, therefore the correct error should be produced) 
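A minimal sketch of these test cases in C, assuming a hypothetical validate_sin function that checks the two conditions in order and reports the first failing one (blank entries are represented here as -1):

    #include <stdio.h>

    /* Hypothetical validator: returns 0 if valid, otherwise the number of
       the first failing condition (1 = first three digits, 2 = fourth and
       fifth digits). Blank fields arrive as -1; a real implementation
       might treat blanks as a distinct error. */
    int validate_sin(int first3, int digits45) {
        if (first3 < 111 || first3 > 222)     /* condition 1: 111..222 */
            return 1;
        if (digits45 < 11 || digits45 > 99)   /* condition 2: 11..99 */
            return 2;
        return 0;
    }

    int main(void) {
        /* One representative value per equivalence class (tests 1-8). */
        printf("A: %d\n", validate_sin(150, 50));   /* tests 1 and 5: expect 0 */
        printf("B: %d\n", validate_sin(110, 50));   /* tests 2 and 5: expect 1 */
        printf("C: %d\n", validate_sin(223, 50));   /* tests 3 and 5: expect 1 */
        printf("D: %d\n", validate_sin(-1, 50));    /* tests 4 and 5: expect 1 */
        printf("E: %d\n", validate_sin(150, 10));   /* tests 1 and 6: expect 2 */
        printf("F: %d\n", validate_sin(150, 100));  /* tests 1 and 7: expect 2 */
        printf("G: %d\n", validate_sin(150, -1));   /* tests 1 and 8: expect 2 */
        return 0;
    }

Because the validator reports only the first failing condition, pairing each invalid value with a valid one (as in test cases B through G) keeps the expected error unambiguous.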

Other Types of Equivalence Classes

The process of equivalence partitioning also applies to testing of values other than numbers. Consider the following types of equivalence classes:

- A valid group versus an invalid group, (e.g., names of employees versus names of individuals who are not employees)

- A valid response to a prompt versus an invalid response, (e.g., Y versus N and all non-Y responses)

- A valid response within a time frame versus an invalid response outside of the acceptable time frame, (e.g., a date within a specified range versus a date less than the range and a date greater than the range)



Testing via Boundary Value Analysis


The purpose of boundary value analysis is to concentrate the testing effort on error prone areas by accurately pinpointing the boundaries of conditions (e.g., a programmer may specify >, when the requirement states > or =).

Defining the Tests

To determine the tests for this method, first identify valid and invalid input and output conditions for a given function. 

Then, identify the tests for situations at each boundary.  For example, one test each for >, =, <, using the first value in the > range, the value that is equal to the boundary, and the first value in the < range. 

Boundary conditions do not need to focus only on values or ranges; they can be identified for many other boundary situations as well, such as end of page, (i.e., identify tests for production of output that is one line less than the end of page, exactly to the end of page, and one line over the end of page).  The tester needs to identify as many situations as possible; the list of Common Extreme Test Conditions below may help with this process:
     
COMMON EXTREME TEST CONDITIONS
  • zero or negative values
  • zero or one transaction
  • empty files
  • missing files (file name not resolved or access denied)
  • multiple updates of one file
  • full, empty, or missing tables
  • widow headings (i.e., headings printed on pages with no details or totals)
  • table entries missing
  • subscripts out of bounds
  • sequencing errors
  • missing or incorrect parameters or message formats
  • concurrent access of a file
  • file space overflow

EXAMPLE OF BOUNDARY VALUE ANALYSIS

Function to be Tested

For a function called billing, the following specifications are defined:

  • Generate a bill for accounts with a balance owed > 0
  • Generate a statement for accounts with a balance owed < 0 (credit)
  • For accounts with a balance owed > 0:
    • place amounts for which the run date is < 30 days from the date of service in the current total
    • place amounts for which the run date is = or > 30 days, but < or = 60 days, from the date of service, in the 30 to 60 day total
    • place amounts for which the run date is > 60 days, but < or = 90 days, from the date of service, in the 61 to 90 day total
    • place amounts for which the run date is > 90 days from the date of service in the 91 days and over total
  • For accounts with a balance owed > or = $10.00, for which the run date is = or > 30 days from the date of service, calculate a $3.00 or 1% late fee, whichever is greater

Input and Output Conditions

Identify the input, (i.e., information is supplied to the function) and output, (i.e., information is produced by the function) conditions for the function.

The input conditions are identified as:

  • balance owed
  • balance owed for late fee

The output conditions are identified as:
  • age of amounts
  • age of amounts for late fee
  • calculation for late fee

Defining Tests

Define tests for the boundary situations for each of the input and output conditions.  For example:

Balance Owed

01.         > 0
02.         = 0
03.         < 0

Age of Amounts

balance owed > 0 and

04.       run date - date of service = 00
05.       run date - date of service = 29
06.       run date - date of service = 30
07.       run date - date of service = 31
08.       run date - date of service = 59
09.       run date - date of service = 60
10.       run date - date of service = 61
11.       run date - date of service = 89
12.       run date - date of service = 90
13.       run date - date of service = 91

Balance Owed for Late Fee

run date - date of service > 30 and

14.       balance owed = $9.99
15.       balance owed = $10.00
16.       balance owed = $10.01

Age of Amount for Late Fee

balance owed > $10.00 and

17.       run date - date of service = 29
18.       run date - date of service = 30
19.       run date - date of service = 31

Calculation for Late Fee

balance owed > $10.00, run date - date of service > 30 and

20.       1% late fee < $3.00
21.       1% late fee = $3.00
22.       1% late fee > $3.00
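A minimal sketch of the late-fee boundary tests (tests 14 through 22) in C, assuming a hypothetical late_fee function that works in cents and implements the rule above (a $3.00 or 1% fee, whichever is greater, once the balance is at least $10.00 and 30 or more days old):

    #include <stdio.h>

    /* Hypothetical late-fee rule from the billing spec: for a balance owed
       >= $10.00 that is 30 or more days past the date of service, charge
       $3.00 or 1% of the balance, whichever is greater. Amounts are kept
       in cents to avoid floating-point rounding at the boundaries. */
    long late_fee(long balance_cents, int days_past) {
        if (balance_cents < 1000 || days_past < 30)
            return 0;
        long one_percent = balance_cents / 100;
        return one_percent > 300 ? one_percent : 300;   /* max(1%, $3.00) */
    }

    int main(void) {
        /* Tests 14-16: balance owed at $9.99, $10.00, $10.01 (45 days old). */
        printf("%ld %ld %ld\n",
               late_fee(999, 45), late_fee(1000, 45), late_fee(1001, 45));     /* 0 300 300 */
        /* Tests 17-19: age at 29, 30, 31 days (balance $50.00). */
        printf("%ld %ld %ld\n",
               late_fee(5000, 29), late_fee(5000, 30), late_fee(5000, 31));    /* 0 300 300 */
        /* Tests 20-22: 1% below, equal to, above $3.00 (45 days old). */
        printf("%ld %ld %ld\n",
               late_fee(29900, 45), late_fee(30000, 45), late_fee(30100, 45)); /* 300 300 301 */
        return 0;
    }

Working in cents keeps the boundary values exact; with floating point, the $10.00 and $3.00 boundaries could be blurred by rounding, which is exactly the kind of error boundary value analysis is meant to expose.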