Data Flow Testing In Software Testing Pdf

Data flow analysis (Wikipedia)

Data flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program.
A canonical example of a data flow analysis is reaching definitions.

A simple way to perform data flow analysis of programs is to set up data flow equations for each node of the control flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e., until it reaches a fixpoint. This general approach was developed by Gary Kildall while teaching at the Naval Postgraduate School.

Basic principles

Data flow analysis is the process of collecting information about the way the variables are defined and used in the program. It attempts to obtain particular information at each point in a procedure. Usually, it is enough to obtain this information at the boundaries of basic blocks, since from that it is easy to compute the information at points within the basic block. In forward flow analysis, the exit state of a block is a function of the block's entry state. This function is the composition of the effects of the statements in the block. The entry state of a block is a function of the exit states of its predecessors. This yields a set of data flow equations. For each block b:

    out_b = trans_b(in_b)
    in_b = join_{p in pred_b}(out_p)

Here trans_b is the transfer function of the block b. It works on the entry state in_b, yielding the exit state out_b. The join operation combines the exit states of the predecessors p in pred_b of b, yielding the entry state of b. After solving this set of equations, the entry and/or exit states of the blocks can be used to derive properties of the program at the block boundaries.
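The two equations above can be made concrete for reaching definitions, where the transfer function is trans_b(x) = gen_b ∪ (x − kill_b) and the join operation is set union. The following is a minimal sketch; the definition names (d1, d2, ...) and the gen/kill sets are made up for illustration.

```python
def trans(gen, kill, in_state):
    """Transfer function of a block: out_b = gen_b | (in_b - kill_b)."""
    return gen | (in_state - kill)

def join(pred_out_states):
    """Join operation for reaching definitions: union of the predecessors' out-states."""
    result = set()
    for s in pred_out_states:
        result |= s
    return result

# Block b has two predecessors with known out-states; its entry state is
# their join, and its exit state follows from the transfer function.
in_b = join([{"d1", "d2"}, {"d3"}])
out_b = trans(gen={"d4"}, kill={"d1"}, in_state=in_b)
print(sorted(in_b))   # → ['d1', 'd2', 'd3']
print(sorted(out_b))  # → ['d2', 'd3', 'd4']
```

Note that for a backward problem such as live-variable analysis, the same two functions would instead be applied to exit states and the successors' entry states.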
Applying the transfer function of each statement separately yields information at points inside a basic block.

Each particular type of data flow analysis has its own specific transfer function and join operation. Some data flow problems require backward flow analysis. This follows the same plan, except that the transfer function is applied to the exit state yielding the entry state, and the join operation works on the entry states of the successors to yield the exit state.

The entry point plays an important role in forward flow analysis: since it has no predecessors, its entry state is well defined at the start of the analysis. For instance, the set of local variables with known values is empty. If the control flow graph does not contain cycles (there were no explicit or implicit loops in the procedure), solving the equations is straightforward. The control flow graph can then be topologically sorted; running in the order of this sort, the entry states can be computed at the start of each block, since all predecessors of that block have already been processed, so their exit states are available. If the control flow graph does contain cycles, a more advanced algorithm is required.

An iterative algorithm

The most common way of solving the data flow equations is by using an iterative algorithm. It starts with an approximation of the in-state of each block. The out-states are then computed by applying the transfer functions on the in-states. From these, the in-states are updated by applying the join operations. The latter two steps are repeated until we reach the so-called fixpoint: the situation in which the in-states (and, in consequence, the out-states) do not change. A basic algorithm for solving data flow equations is the round-robin iterative algorithm:

    for i = 1 to N
        initialize node i
    while (sets are still changing)
        for i = 1 to N
            recompute sets at node i

Convergence

To be usable, the iterative approach should actually reach a fixpoint.
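The round-robin pseudocode above can be sketched as a runnable program, instantiated here for reaching definitions. The four-node CFG (which contains a loop between nodes 3 and 4) and the gen/kill sets are made up for illustration.

```python
preds = {1: [], 2: [1], 3: [2, 4], 4: [3]}          # small CFG with a cycle: 3 <-> 4
gen   = {1: {"d1"}, 2: {"d2"}, 3: set(), 4: {"d3"}}
kill  = {1: set(), 2: {"d1"}, 3: set(), 4: set()}

# "initialize node i": start every in-/out-state from the empty set
in_s  = {n: set() for n in preds}
out_s = {n: set() for n in preds}

changed = True
while changed:                        # "while sets are still changing"
    changed = False
    for n in sorted(preds):           # "for i = 1 to N: recompute sets at node i"
        # join: union of the predecessors' out-states (empty if no predecessors)
        in_s[n] = set().union(*(out_s[p] for p in preds[n]))
        # transfer function for reaching definitions
        new_out = gen[n] | (in_s[n] - kill[n])
        if new_out != out_s[n]:
            out_s[n] = new_out
            changed = True

print(sorted(out_s[3]))               # → ['d2', 'd3']
```

The loop terminates because the sets only grow and the number of definitions is finite; d1 never reaches node 3, since node 2 kills it on the only path there.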
This can be guaranteed by imposing constraints on the combination of the value domain of the states, the transfer functions, and the join operation. The value domain should be a partial order with finite height (i.e., there are no infinite ascending chains). The combination of the transfer function and the join operation should be monotonic with respect to this partial order. Monotonicity ensures that on each iteration the value will either stay the same or grow larger, while finite height ensures that it cannot grow indefinitely. Thus we will ultimately reach a situation where T(x) = x for all x, which is the fixpoint.

The work list approach

It is easy to improve on the algorithm above by noticing that the in-state of a block will not change if the out-states of its predecessors don't change. Therefore, we introduce a work list: a list of blocks that still need to be processed. Whenever the out-state of a block changes, we add its successors to the work list. In each iteration, a block is removed from the work list and its out-state is computed. If the out-state changed, the block's successors are added to the work list. For efficiency, a block should not be in the work list more than once. The algorithm is started by putting the information-generating blocks in the work list. It terminates when the work list is empty.

The order matters

The efficiency of iteratively solving data flow equations is influenced by the order in which local nodes are visited. Furthermore, it depends on whether the data flow equations are used for forward or backward data flow analysis over the CFG. Intuitively, a forward flow problem would be solved fastest if all predecessors of a block have been processed before the block itself, since then the iteration will use the latest information. In the absence of loops it is possible to order the blocks in such a way that the correct out-states are computed by processing each block only once.
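The work-list variant described above can be sketched as follows, again for reaching definitions on a made-up CFG. A membership set keeps each block on the list at most once, and a block's successors are re-queued only when its out-state actually changed.

```python
from collections import deque

succs = {1: [2], 2: [3], 3: [4], 4: [3]}             # illustrative CFG with a cycle
preds = {1: [], 2: [1], 3: [2, 4], 4: [3]}
gen   = {1: {"d1"}, 2: {"d2"}, 3: set(), 4: {"d3"}}
kill  = {1: set(), 2: {"d1"}, 3: set(), 4: set()}

in_s  = {n: set() for n in succs}
out_s = {n: set() for n in succs}

worklist = deque(succs)               # seed with the information-generating blocks
on_list  = set(succs)                 # each block is on the work list at most once

while worklist:
    n = worklist.popleft()
    on_list.discard(n)
    in_s[n] = set().union(*(out_s[p] for p in preds[n]))
    new_out = gen[n] | (in_s[n] - kill[n])
    if new_out != out_s[n]:           # out-state changed: successors must be revisited
        out_s[n] = new_out
        for s in succs[n]:
            if s not in on_list:
                worklist.append(s)
                on_list.add(s)

print(sorted(out_s[3]))               # → ['d2', 'd3']
```

Compared with the round-robin version, only blocks whose inputs may have changed are recomputed, which matters on large CFGs where most blocks stabilize early.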
In the following, a few iteration orders for solving data flow equations are discussed (a related concept to the iteration order of a CFG is the traversal order of a tree).

Random order: This iteration order is not aware whether the data flow equations solve a forward or backward data flow problem. Therefore, the performance is relatively poor compared to specialized iteration orders.

Postorder: This is a typical iteration order for backward data flow problems. In postorder iteration, a node is visited after all its successor nodes have been visited. Typically, the postorder iteration is implemented with the depth-first strategy.

Reverse postorder: This is a typical iteration order for forward data flow problems. In reverse-postorder iteration, a node is visited before any of its successor nodes has been visited, except when the successor is reached by a back edge. Note that this is not the same as preorder.

Initialization

The initial value of the in-states is important to obtain correct and accurate results. If the results are used for compiler optimizations, they should provide conservative information, i.e., applying the information must not change the semantics of the program. The iteration of the fixpoint algorithm will take the values in the direction of the maximum element. Initializing all blocks with the maximum element is therefore not useful. At least one block starts in a state with a value less than the maximum. The details depend on the data flow problem.
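Both specialized orders can be obtained from a single depth-first walk: emitting each node after its successors gives a postorder, and reversing that list gives a reverse postorder. The following sketch uses a hypothetical four-node CFG in which the edge D→A is a back edge.

```python
def postorder(succs, entry):
    """Depth-first walk emitting each node after all its successors."""
    seen, order = set(), []
    def dfs(n):
        seen.add(n)
        for s in succs[n]:
            if s not in seen:         # a successor already seen via a back edge is skipped
                dfs(s)
        order.append(n)               # emit after the successors: postorder
    dfs(entry)
    return order

succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["A"]}  # D -> A is a back edge
po  = postorder(succs, "A")           # suitable for backward problems
rpo = list(reversed(po))              # reverse postorder: suitable for forward problems
print(po)                             # → ['D', 'B', 'C', 'A']
print(rpo)                            # → ['A', 'C', 'B', 'D']
```

In the reverse postorder, A precedes both B and C, and both precede D, so a forward analysis visiting nodes in this order sees each block's predecessors first (except across the back edge).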