Wednesday, 29 August 2012

Role of PDCA in Testing Process Improvement


If you recall, the PDCA approach, i.e., Plan, Do, Check, and Act, is a control mechanism used to supervise, govern, and regulate a system. The approach first defines the objectives of a process, develops and carries out the plan to meet those objectives, and checks to determine whether the anticipated results are achieved. If they are not, the plan is modified to fulfill the objectives.



The same PDCA quality cycle can be applied to software testing.

The Plan step of the continuous improvement process, when applied to software testing, starts with a definition of the test objectives, i.e., what is to be accomplished as a result of testing. Test objectives do more than simply ensure that the software performs according to specifications; they ensure that all responsible individuals contribute to the definition of the test criteria in order to maximize quality.

The Do step of the continuous improvement process, when applied to software testing, describes how to design and execute the tests included in the test plan. The test design includes test cases, test procedures and scripts, expected results, a function/test case matrix, test logs, etc. The more definitive the test plan is, the easier the test design will be.

The Check step of the continuous improvement process, when applied to software testing, includes evaluating how the testing process is progressing. It is important to base decisions on accurate and timely data as much as possible. Testing metrics such as the number and types of defects, the workload effort, and the schedule status are key. It is also important to create test reports: interim test reports should be written at key testing checkpoints and a summary report at the end of testing.

The Act step of the continuous improvement process, when applied to software testing, includes devising corrective actions for work that was not performed according to the plan or for results that were not anticipated by the plan.


Mutation Testing


Does "mutation testing" ring any bells? I am sure it reminds you of X Men series, where we heard the word mutants a lot number of times.  So who were they? They were actually the ones whose genes were modified or have some special change from the rest. Likewise in Mutation Testing, we make the code a mutant and follow the changes in Test Suite behaviour.

What is Mutation Testing?

It is assumed that the more cases a test suite contains, the higher the probability that the program will work correctly in the real world. Mutation testing was introduced as a way of measuring the accuracy of test suites. In general, there is no easy way to tell whether a test suite thoroughly tests the program or not. If the program passes the test suite, one may only say that the program works correctly on the cases included in that suite; this checks the program only against the test suite itself. There is no mathematical way to measure how accurate the test suite is, or the probability that the program will work correctly.
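As a quick illustration, here is a minimal C sketch (the function abs_val and its test cases are made up for illustration, not taken from the example discussed later):

#include <assert.h>
#include <stdio.h>

/* A made-up example: absolute value of an int. */
int abs_val(int x)
{
    return (x < 0) ? -x : x;
}

int main(void)
{
    /* A small test suite. The program passes every case below, but that
     * only shows it is correct on these particular inputs; nothing here
     * exercises, say, abs_val(INT_MIN), which overflows. */
    assert(abs_val(0) == 0);
    assert(abs_val(7) == 7);
    assert(abs_val(-7) == 7);
    printf("all test cases passed\n");
    return 0;
}

Passing this suite tells us nothing about how the program behaves on inputs the suite never tries.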

Concept of Killed and Equivalent mutants

The idea of mutation testing was introduced to solve the problem of measuring the accuracy of test suites. In mutation testing, one is in some sense trying to solve this problem by inverting the scenario.

The thinking goes as follows: Let’s assume that we have a perfect test suite, one that covers all possible cases. Let’s also assume that we have a perfect program that passes this test suite. If we change the code of the program (this process is called mutating) and we run the mutated program (mutant) against the test suite, we will have two possible scenarios:

  • The results of the program are affected by the code change, and the test suite detects it. We assumed that the test suite is perfect, which means it must detect the change. If this happens, the mutant is called a killed mutant.

  • The results of the program are not changed and the test suite does not detect the mutation. The mutant is called an equivalent mutant.

The quality of the test suite is then judged as follows:

Quality of the test suite (mutation score) = number of killed mutants / total number of mutants generated

If this ratio (call it Q) is less than 1, it should be a warning sign as to how sensitive the program is to code changes.
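Expressed as code, the score is simply this ratio (a trivial sketch; the counts below are invented, not measured):

#include <stdio.h>

int main(void)
{
    /* Made-up counts from a hypothetical mutation run. */
    int mutants_generated = 30;
    int mutants_killed    = 24;

    double q = (double)mutants_killed / mutants_generated;
    printf("mutation score Q = %.2f\n", q);   /* prints 0.80 here */
    if (q < 1.0)
        printf("%d mutants survived - a warning sign\n",
               mutants_generated - mutants_killed);
    return 0;
}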

In the real world, we have neither a perfect program nor a perfect test suite. Thus, we can have one more scenario:

  • The results of the program are different, but the test suite does not detect it because it does not have the right test case.

If we again calculate the same ratio as above and get a number smaller than 1, that too should be taken as an indication of the test suite's accuracy.

In practice, there is no way to separate the effect that is related to test suite inaccuracy from the effect that is related to equivalent mutants. In the absence of other possibilities, one can accept the ratio of killed mutants to all generated mutants as the measure of test suite accuracy.


This C code example illustrates the ideas described above.

Could you detect the serious hidden errors that this test suite misses?

This test suite is quite representative of test suites in the industry. It contains only positive test cases, which means it checks whether the program reports correct values for correct inputs; it completely neglects illegal inputs to the program. The program fully passes the test suite; however, it has serious hidden errors.

Now, let’s mutate the program. We can start with the following simple changes:

If we run these mutated programs against the test suite, we will get the following results:

Mutants 1 and 3 - the program completely passes the test suite.
Mutant 2 - the program fails all test cases.

Mutants 1 and 3 do not change the output of the program, and are thus equivalent mutants.
The test suite does not detect them.

Mutant 2, however, is not an equivalent mutant. Test cases 1-4 will detect it through wrong output from the program. Test case 5 may behave differently on different machines: it may show up as bad output from the program, or it may even be visible as a program crash.

If we calculate the statistics, we see that we created three mutants and only one was killed.

Thus, the quality of the test suite = 1/3. As we can see, this number is low; it is low because we generated two equivalent mutants. It should serve as a warning that we are not testing enough. In fact, the program has two serious errors that should be detected by the test suite.


Kinds of Mutation

  • Value Mutation - these mutations involve changing the values of constants or parameters (by adding or subtracting values, etc.), e.g. loop bounds, where being one out at the start or finish is a very common error.
  • Decision Mutation - this involves modifying conditions to reflect potential slips and errors in the coding of conditions in programs, e.g. a typical mutation might be replacing a > by a < in a comparison.
  • Statement Mutations - these might involve deleting certain lines to reflect omissions in coding, or swapping the order of lines of code. There are other operations too, e.g. changing operators in arithmetic expressions. A typical omission might be to omit the increment of some variable in a while loop. (A small sketch of all three kinds follows this list.)
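To make these three kinds concrete, here is a minimal C sketch (the function max_of, the single test case and the suggested mutations are made up for illustration; this is not the listing discussed earlier):

#include <stdio.h>

/* Original program: returns the largest of the first n elements of a[]. */
int max_of(const int *a, int n)
{
    int best = a[0];
    for (int i = 1; i < n; i++) {   /* value mutation:     start i at 0 instead of 1     */
        if (a[i] > best)            /* decision mutation:  replace > with < (or with >=) */
            best = a[i];            /* statement mutation: delete this assignment        */
    }
    return best;
}

int main(void)
{
    int data[] = {3, 7, 2, 9, 4};

    /* One test case. It kills the "<" mutant (which returns 2) and the
     * deleted-assignment mutant (which returns 3), but not the "i = 0"
     * or ">=" mutants, which still return 9 - those are equivalent mutants. */
    printf("%s\n", max_of(data, 5) == 9 ? "PASS" : "FAIL");
    return 0;
}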

Benefits of Mutation testing

  • It provides the tester with a target: the tester has to develop test data capable of killing all the generated mutants. Hence, we can generate an effective test data set that is powerful enough to find errors in the program.
  • Another advantage of mutation testing is that even if no error is found, it still gives the user information about the quality of the program tested.
  • Mutation testing makes a program less buggy and more reliable and increases the confidence in the working of the product – which is the bottom line of any software testing activity.

Protocol Testing


Have you ever wondered how two people can chat face-to-face with each other irrespective of the distance between them? Believe us, this is not magic; it is just the game of protocols. But what exactly is this protocol?

A protocol is a special set of rules followed in a telecommunication network when two entities communicate with each other. Protocols specify the interactions between the communicating entities. In fact, a protocol is an agreed-upon format for transmitting data between two devices.

Importance of Protocol

Protocols play a key role in today's communication world; without them it is not possible for one computer to communicate with another. Let's take an example from daily life to understand the importance of a protocol.
Imagine you are in France but you don't know how to speak French. Is it possible for you to talk to a man who doesn't know any language except French?
Certainly not! You could try to communicate with him non-verbally, but you would still find it difficult to convey your thoughts and ideas; to get the ball rolling, either you need to know French or he needs to know the language that you speak!

From this example, it is apparent that if two human beings want to communicate then they must understand and speak in a common language.

The same analogy applies in modern communication systems: if one piece of equipment wants to receive information from, or send information to, another, then both must use the same language to accomplish the task. This is where protocols come into the picture.

“A protocol is a set of rules that govern communication between two or more pieces of equipment.”



Common protocols

  • IP (Internet Protocol)
  • UDP (User Datagram Protocol)
  • TCP (Transmission Control Protocol)
  • DHCP (Dynamic Host Configuration Protocol)
  • HTTP (Hypertext Transfer Protocol)
  • FTP (File Transfer Protocol)
  • Telnet (Telnet Remote Protocol)
  • SSH (Secure Shell Remote Protocol)
  • POP3 (Post Office Protocol 3)
  • SMTP (Simple Mail Transfer Protocol)
  • IMAP (Internet Message Access Protocol)
  • CDMA2000 1xRTT (CDMA 1x Radio Transmission Technology)


Testing Protocols

Product companies like Cisco, Nortel, Juniper, Alcatel, Huawei, etc. build networking devices such as routers, switches, modems, wireless access points and firewalls. These devices use different protocols to communicate; for example, Cisco routers use EIGRP, OSPF, etc. to exchange routing information. These implementations certainly need testing to ensure that communication through these protocols works correctly.

Protocol testing is checking that any given protocol functions as specified in its RFC. It involves testing functionality, the protocol stack, interoperability, performance, and so on.

Usually protocol testing is done by connecting a DUT (Device Under Test) to other devices such as routers or switches, configuring the protocol on it, and then checking the structure of the packets sent by the devices, the protocol algorithm, its performance, scalability, etc. using tools like Wireshark, IxNetwork, Spirent, and others. In general, protocol testers work by capturing the information exchanged between a DUT and a reference device known to operate properly. For example, if a manufacturer is producing a new keyboard for a personal computer, the Device Under Test would be the keyboard and the reference device would be the PC. The information exchanged between the two devices is governed by rules set out in a technical specification called a "communication protocol". Both the nature of the communication and the actual data exchanged are defined by the specification. The captured information is decoded from its raw digital form into a human-readable format that lets the protocol tester easily review the exchanged information. Protocol testers vary in their abilities to display data in multiple views, automatically detect errors, determine the root causes of errors, generate timing diagrams, and so on. Sometimes protocol testers may also be required to generate protocol-correct traffic for functional testing, and they may have to deliberately introduce errors to test the DUT's ability to deal with error conditions.
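As a tiny illustration of the "decode raw bytes into a human-readable format" step, here is a hedged C sketch that parses a hand-made IPv4 header; the sample bytes are invented, whereas a real protocol tester would work on live captures (for example, bytes exported from Wireshark):

#include <stdio.h>
#include <stdint.h>

/* Decode a few fields of an IPv4 header from a raw byte buffer and print
 * them in human-readable form, the way a protocol tester's decode view would. */
static void decode_ipv4(const uint8_t *p)
{
    unsigned version = p[0] >> 4;
    unsigned ihl     = (p[0] & 0x0F) * 4;    /* header length in bytes  */
    unsigned total   = (p[2] << 8) | p[3];   /* total datagram length   */
    unsigned ttl     = p[8];
    unsigned proto   = p[9];                 /* 6 = TCP, 17 = UDP       */

    printf("IPv4  version=%u  header=%u bytes  total=%u bytes  ttl=%u  protocol=%u\n",
           version, ihl, total, ttl, proto);
    printf("  src=%d.%d.%d.%d  dst=%d.%d.%d.%d\n",
           p[12], p[13], p[14], p[15], p[16], p[17], p[18], p[19]);
}

int main(void)
{
    /* An invented 20-byte IPv4 header: a UDP packet from 192.168.1.10 to 192.168.1.1. */
    const uint8_t pkt[20] = {
        0x45, 0x00, 0x00, 0x3c, 0x1a, 0x2b, 0x00, 0x00,
        0x40, 0x11, 0x00, 0x00, 192, 168, 1, 10, 192, 168, 1, 1
    };
    decode_ipv4(pkt);
    return 0;
}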

Since communication protocols are state-dependent (what should happen next depends on what previously happened), specifications are complex and the documents describing them can be hundreds of pages. This makes the job of protocol testing quite challenging.

Protocol testing is an essential step towards commercialization of standards-based products. It helps to ensure that products from different manufacturers will operate together properly ("interoperate") and so satisfy customer expectations. This type of testing is imperative for new emerging communication technologies.

Penetration Testing [Breaking-IN before the Bad Guys]



Has someone ever told you what the password to your mail ID is, leaving you stunned and wondering which of your personal mails they have read? Believe it: this is no longer just "script kiddies" breaking into your network, and the threat is far more severe in multi-tier network architectures, Web services, custom applications, and heterogeneous server platform environments. In the past several years, it has become apparent that there is real money to be made from criminal hacking, and identity theft is one of the world's fastest growing problems.
Although there are many ways to secure systems and applications, putting yourself in the shoes of a hacker is a completely new way to test them: with PENETRATION TESTING you can actually replicate the types of actions a malicious attacker would take.
Penetration testing has evolved from an ad hoc activity into a robust and trustworthy testing methodology supported by high-quality commercial tools. In the hands of a properly trained penetration tester, these methodologies provide a stable, quality-assured way to accurately assess systems by penetrating existing vulnerabilities.

Let’s define Penetration Testing:
  
Penetration testing is a process of assessing your overall security before hackers do. It is a testing technique for discovering, understanding, and documenting all the security holes that can be found in a system.
The person who attempts to gain access to resources without knowledge of usernames/passwords is identified as a hacker/attacker, whereas the person who does it officially (with prior authorization) is identified as a penetration tester. In other words, unauthorized attackers are hackers and authorized attackers are penetration testers.

A penetration tester must act as a hacker/attacker while doing penetration testing. It is important to understand that penetration testing can never prove the absence of security flaws; it can only prove their presence.

Why Penetration Testing:

Ask yourself: do you want your application to be attacked by hackers? Then attack it yourself first.


Aspects of Penetration Testing -

  • Find Holes Now Before Somebody Else Does
The goal is for the penetration tester to find ways into the network so that they can be fixed before someone with less than honorable intentions discovers the same holes. We can think of a penetration test as an annual medical check-up: even if you believe you are healthy, your physician will run a series of tests (some old and some new) to detect dangers that have not yet developed symptoms.
  • Report Problems to Management
Penetration testing results help demonstrate the lack of security in the environment to upper-level management. Often an internal network team will be aware of weaknesses in the security of their systems but will have trouble getting management to support the changes that would be necessary to secure the system. When an outside group with a reputation for security expertise analyzes a system, management will often respect that opinion more. Remember that ultimate responsibility for the security of IT assets rests with management, because it is they, not the administrators, who decide what the acceptable level of risk is for the organization.
  • Verify Secure Configurations
If the CSO (or security team) is confident in their actions and final results, the penetration test report verifies that they are doing a good job. The penetration test does not make the network more secure, but it does identify gaps between knowledge and implementation.
  • Security Training For Network Staff
Penetration testing gives security people a chance to recognize and respond to a network attack. For example, if the penetration tester successfully compromises a system without anyone knowing, this could be indicative of a failure to adequately train staff in proper security monitoring.
  • Discover Gaps in Compliance
Using penetration testing as a means to identify gaps in compliance is a bit closer to auditing than to true security engineering, but experienced penetration testers often breach a perimeter because someone did not get all the machines patched, or possibly because a non-compliant machine was put up "temporarily" and ended up becoming a critical resource.
  • Testing New Technology
The ideal time to test new technology is before it goes into production. This can often save time and money, because it is easier to test and modify new technology while nobody is relying on it.

How do we perform penetration testing?
Although there are various methodologies that a penetration tester can follow, there are broadly 4 main phases:


4 Stage Penetration Testing Methodology
  • Planning - The planning phase is where the scope for the assignment is defined. Management approvals, documents and agreements etc. are signed. The penetration testing team prepares a definite strategy for the assignment.
  • Discovery - The discovery phase is where the actual testing starts; it can be regarded as an information gathering phase. This phase can be further categorized as follows:
    • Foot-printing phase - the aim is to gather the maximum possible information available about the target organization and its systems using various means, both technical and non-technical. This involves searching the internet and querying various public repositories (databases, domain registrars, Usenet groups, mailing lists, etc.).
    • Scanning and Enumeration phase - identifying live systems, open/filtered ports, the services running on those ports, mapping router/firewall rules, identifying operating system details, network path discovery, etc. (a small port-scan sketch follows this list).
    • Vulnerability Analysis phase - finding any possible vulnerabilities existing in each target system. During this phase a penetration tester may use automated tools to scan the target systems for known vulnerabilities; these tools usually have their own databases containing the latest vulnerabilities and their details.
  • Attack - This is the phase that separates the Men from the Boys. This is at the heart of any penetration test, the most interesting and challenging phase.
This phase can be further categorized into:
    • Exploitation phase - During this phase a penetration tester will try to find exploits for the various vulnerabilities found in the previous phase.
    • Privilege Escalation phase - there are times when a successful exploit does not lead to root access. An effort has to be made at that point to carry out further analysis of the target system to gain more information that could lead to administrative privileges, e.g. local vulnerabilities, etc.
  • Reporting - This stage can occur in parallel with the other three stages, or at the end of the Attack stage. Many penetration testers do not concentrate on this stage and follow a hurried approach to make all the submissions, but it is probably the most important of all the phases; after all, the organization is paying you for this final document.
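As a minimal sketch of the scanning step mentioned in the Discovery phase above, here is a simple TCP connect() scan in C (the target 127.0.0.1 and the port list are assumptions for illustration; only ever run such a scan against systems you are authorized to test):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* A very small TCP connect() scan: tries a handful of well-known ports on one
 * host and reports which ones accept a connection. Real scanners are far more
 * capable (and stealthier); this only shows the basic idea. */
int main(void)
{
    const char *target = "127.0.0.1";   /* assumed target: scan yourself */
    const int ports[]  = {21, 22, 23, 25, 80, 443, 3306, 8080};
    const int n        = (int)(sizeof(ports) / sizeof(ports[0]));

    for (int i = 0; i < n; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(ports[i]);
        inet_pton(AF_INET, target, &addr.sin_addr);

        /* A successful connect() means something is listening on that port. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("port %d: open\n", ports[i]);
        else
            printf("port %d: closed or filtered\n", ports[i]);

        close(fd);
    }
    return 0;
}

In a real engagement, the open ports found here would feed straight into service and version enumeration and then into the Vulnerability Analysis phase.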

Mobile App Testing – The Challenges


There is no denying that mobile phones, smart phones in particular, are in vogue these days. Gradually, businesses across various sectors are shifting focus towards mobile applications because users are more interested in browsing the web on their pocket-size devices rather than viewing it on bulky PCs. The tremendous growth of mobile users has opened up a new market which is flooded with various mobile platforms and devices and thousands of applications to run on them.
Imagine if the alarm on your phone didn’t go off in the morning, your old texts suddenly went missing, or you couldn’t make that important call. This dreadful thought actually sums up the importance of mobile application testing.
Below are some points that explain in brief why special skills are required in mobile application testing.

1) Diversity in Mobile Device platforms
Android, BlackBerry, Nokia's Symbian and Apple's iPhone have together grabbed a large part of the smartphone market. But these are not the only ones; there are many other platforms in use, like BREW, BREW MP, Windows Phone 7, etc. While testing any multi-platform mobile application, it would be required to test it on each platform while carrying out UI testing, functional testing, etc. This poses a challenge, as many of these mobile platforms might behave differently when triggered by the same thing, and each platform may have its own limitations as well.
 2) Diversity of the Mobile Devices
There is a huge variety of mobile devices available in the market, with different screen sizes, different input methods like touch screens, QWERTY keypads and trackballs, and each of them with different hardware capabilities. Mobile devices also have different application runtimes, like the Binary Runtime Environment for Wireless (BREW), Java, etc. Some mobile devices communicate through WAP and some use HTTP for communication. Thus extensive testing of a mobile application is important to ensure its compatibility with devices having a variety of the above characteristics.
 3) Diversity in Hardware Configuration
Apart from diversity in platforms and mobile devices, there is diversity in their hardware as well. Mobile devices come with various processors, amounts of RAM and internal memory, and various sensors such as proximity sensors, accelerometers, GPS, gyroscopes, etc. Diverse hardware configurations bring many challenges; for example, the mobile environment provides less memory and processing power than a PC, which reduces processing speed and causes variations in application performance. Therefore, exhaustive testing of mobile applications is required to deliver optimum performance on all desired hardware configurations.
4) Diversity in Network
We know there is always unpredictability in network latency when applications communicate across network boundaries, leading to inconsistent data transfer speeds. This demands testing to measure the performance of applications across the various network bandwidths of various service providers. Wireless networks use data optimizers such as gateways to deliver content, which may result in decreased performance under heavy traffic. Therefore, testing should be performed to determine the network traffic level at which gateway capabilities begin to impact the performance of the mobile application.
The above-mentioned challenges are just a few of many, but they should be enough to emphasize the need for thorough and diversified testing of mobile applications using specialized skills.