Abstract:
A system, method, and computer program product are provided for automated exploratory testing. In use, a plurality of actions to be performed as a test flow in an exploratory test associated with at least one testing project are identified. Additionally, for each performed action of the plurality of actions of the test flow, a plurality of additional options are identified that are capable of being performed instead of one or more of the plurality of actions in the test flow. Further, a graph is generated showing all combinations of the plurality of actions and the plurality of additional options as a possible scope of the exploratory test associated with the at least one testing project. In addition, the graph is modified based on received input, the received input identifying one or more test flows to execute as the exploratory test associated with the at least one testing project. Still yet, the exploratory test associated with the at least one testing project is automatically executed in accordance with the modified graph based on the received input. Moreover, a status of the automatic execution of the exploratory test associated with the at least one testing project is reported utilizing the graph.
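The abstract does not disclose an implementation, but the combination-graph idea can be illustrated with a short sketch. The following Python is a rough illustration only; the action names, the options table, and the selection filter are all hypothetical:

```python
from itertools import product

# Hypothetical recorded test flow: each action has alternative options that
# could be performed in its place.
flow = ["open_form", "fill_name", "submit"]
options = {
    "open_form": ["open_form", "open_form_via_menu"],
    "fill_name": ["fill_name", "fill_name_paste"],
    "submit":    ["submit", "submit_via_keyboard"],
}

# The possible scope of the exploratory test: every combination of the
# recorded actions and their additional options, one path per combination.
all_paths = [list(p) for p in product(*(options[a] for a in flow))]

# Modifying the graph based on received input: keep only the test flows the
# user selected for execution.
selected = [p for p in all_paths if "submit_via_keyboard" not in p]

for path in selected:
    print(" -> ".join(path))  # stand-in for automatic execution and status reporting
```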
Abstract:
During execution of a computer program, mouse movements, keyboard inputs, and screen snapshots are recorded and stored in one or more files as a test flow. Next, selected recorded keyboard inputs are replaced with user-specified variable parameters to generate a keyboard testing input, each of the parameters corresponding to a plurality of possible keyboard inputs. Execution of the test flow, including the recorded mouse movements, the recorded screen snapshots, and the keyboard testing input, is then triggered. If the initially displayed screen is not equivalent to the first screen indicated in the test flow as being the start of the test, the test flow is stopped. Otherwise, the test flow is executed utilizing a random selection of the plurality of possible keyboard inputs of the keyboard testing input. At least one output is provided for the execution of the test flow.
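As a rough illustration of the replay logic described above, the sketch below substitutes recorded keyboard inputs with variable parameters and aborts when the start screen does not match; the flow, screens, and parameter values are hypothetical:

```python
import random

# Hypothetical recorded test flow: each step pairs a screen snapshot
# identifier with the keyboard input captured during recording.
recorded_flow = [
    {"screen": "login",  "keyboard": "alice"},
    {"screen": "search", "keyboard": "router"},
]

# Selected recorded inputs replaced with variable parameters, each
# corresponding to a plurality of possible keyboard inputs.
parameters = {"login": ["alice", "bob", "carol"], "search": ["router", "modem"]}

def run_flow(current_screen: str) -> None:
    # Stop if the initially displayed screen is not the flow's start screen.
    if current_screen != recorded_flow[0]["screen"]:
        print("start screen mismatch; stopping test flow")
        return
    for step in recorded_flow:
        value = random.choice(parameters.get(step["screen"], [step["keyboard"]]))
        print(f"on {step['screen']}: typing {value!r}")  # stand-in for replay

run_flow("login")
```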
Abstract:
A system, method, and computer program product are provided for generating a fully traceable test design. In use, a repository of parameters and associated values that are predefined as valid for the parameters is defined. Activity flows including one or more activities are further graphically defined, and the parameters are mapped to the one or more activities, the mapping functioning to connect the one or more activities to the values that are predefined as valid for the parameters. Further, business rules are defined that identify incompatible pairings of the values across two or more of the parameters mapped to one or more of the activities. A plurality of test scenarios associated with the activity flows are then determined, and a subset of the plurality of test scenarios is automatically selected based on various predefined criteria. Moreover, test design materials associated with the test design are output.
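The scenario-determination step lends itself to a small worked example. The sketch below, with hypothetical parameters, values, and rules, enumerates value combinations for an activity and filters out the pairings the business rules mark as incompatible:

```python
from itertools import product

# Hypothetical repository of parameters and their predefined valid values.
repository = {"plan": ["prepaid", "postpaid"], "device": ["phone", "tablet"]}

# Business rules: incompatible pairings of values across parameters.
incompatible = {("prepaid", "tablet")}

def valid(combo: dict) -> bool:
    values = set(combo.values())
    return not any(set(pair) <= values for pair in incompatible)

# Candidate test scenarios for an activity mapped to both parameters,
# with incompatible pairings removed by the business rules.
scenarios = [dict(zip(repository, vals)) for vals in product(*repository.values())]
scenarios = [s for s in scenarios if valid(s)]
print(scenarios)  # 3 of the 4 combinations survive the rules
```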
Abstract:
A system, method, and computer program product are provided. At least one testing project to be performed is identified, and a diagram is generated from testing activities, including parameters with multiple values, the diagram including one or more test flows comprising the testing activities. The one or more test flows include a plurality of possible testing scenarios. Further, scenarios are extracted from the generated diagram, and a test list to be executed is generated utilizing the extracted scenarios, where each test case in the test list retains a link to a corresponding testing activity in the generated diagram. Still yet, the test list is executed, and at least one report is generated based on the execution of the test list which shows the generated diagram and a result of the execution corresponding to a testing activity based on an associated retained link.
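As a rough sketch of scenario extraction, the diagram can be treated as a directed graph whose start-to-end paths are the scenarios, each test case keeping links back to the diagram's activities for reporting; the graph and names below are hypothetical:

```python
# Hypothetical diagram: a directed graph of testing activities.
edges = {"start": ["login"], "login": ["search", "browse"],
         "search": ["end"], "browse": ["end"], "end": []}

def extract_paths(node="start", path=None):
    path = (path or []) + [node]
    if node == "end":
        yield path
        return
    for nxt in edges[node]:
        yield from extract_paths(nxt, path)

# Each test case in the test list retains links to its activities in the
# diagram, so an execution report can map results back onto the diagram.
test_list = [{"id": i, "steps": p, "links": p}
             for i, p in enumerate(extract_paths())]
for case in test_list:
    print(case)
```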
Abstract:
A system, method, and computer program product are provided for software testing project design and execution utilizing a mockup. In use, at least one software testing project to design is identified. Additionally, at least one mockup of the at least one software testing project is generated. Further, one or more testable items associated with the at least one mockup are defined. In addition, one or more test cases associated with the at least one mockup are generated. Furthermore, the one or more test cases are linked to the one or more testable items. Moreover, the at least one mockup including the one or more test cases linked to the one or more testable items is displayed. In one embodiment, a status of the one or more test cases may be updated according to a test execution within the at least one software testing project. Furthermore, at least one defect may be linked to every mockup that showed one or more errors within and/or outside a defined testable item.
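A possible data model for mockups, testable items, and linked test cases is sketched below; the classes, fields, and statuses are hypothetical, not the patented design:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    status: str = "not run"   # updated according to a test execution

@dataclass
class TestableItem:
    region: tuple                              # area of the mockup the item covers
    cases: list = field(default_factory=list)  # test cases linked to the item

# Hypothetical mockup with one testable item and one linked test case.
mockup = {"login_button": TestableItem(region=(10, 20, 80, 40))}
mockup["login_button"].cases.append(TestCase("click logs user in"))

# Updating a test case status according to a test execution.
mockup["login_button"].cases[0].status = "passed"
print(mockup)
```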
Abstract:
A system, method, and computer program product are provided for centralized guided testing. In use, at least one software testing project is identified. Additionally, data associated with the at least one software testing project is accessed from at least one of a plurality of knowledge repositories that are capable of being dynamically and constantly updated, the plurality of knowledge repositories including: at least one first repository including official testing methodology associated with a plurality of testing processes; at least one second repository including test project management information; at least one third repository including test knowledge information provided by users; and at least one fourth repository including historical testing project information and ongoing testing project information. Further, the data associated with the at least one software testing project is presented utilizing at least one user interface.
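The four-repository arrangement might be modeled, very loosely, as below; the repository contents and the lookup function are hypothetical:

```python
# Hypothetical knowledge repositories, each keyed by topic.
repositories = {
    "methodology":    {"regression": "official regression testing process"},
    "management":     {"regression": "3 testers assigned, due Friday"},
    "user_knowledge": {"regression": "flaky on build server; rerun twice"},
    "history":        {"regression": "last cycle: 42 defects found"},
}

def guided_view(topic: str) -> dict:
    # Pull the topic's data from every repository that has it, so a single
    # user interface can present it together.
    return {name: repo[topic] for name, repo in repositories.items() if topic in repo}

print(guided_view("regression"))
```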
Abstract:
A system, method, and computer program product are provided for automatic database validation associated with a software test. In use, an indication that a user is beginning a software test that utilizes one or more databases is received. A first configuration snapshot of the one or more databases is recorded in response to receiving the indication that the user is beginning the software test, prior to the user beginning the software test. Additionally, an indication that the user has finished the software test is received. A second configuration snapshot of the one or more databases is recorded in response to receiving the indication that the user has finished the software test. The first configuration snapshot of the one or more databases is automatically compared to the second configuration snapshot of the one or more databases. Further, changes that occurred in the one or more databases resulting from the software test are automatically identified, based on the comparing of the first configuration snapshot of the one or more databases to the second configuration snapshot of the one or more databases. The changes that occurred in the one or more databases resulting from the software test are displayed utilizing at least one user interface. Still yet, the changes that occurred in the one or more databases resulting from the software test are automatically compared to past changes that occurred in the one or more databases resulting from a past software test. A difference in the changes that occurred in the one or more databases resulting from the software test and the past changes that occurred in the one or more databases resulting from the past software test is automatically identified, based on comparing the changes that occurred in the one or more databases resulting from the software test and the past changes that occurred in the one or more databases resulting from the past software test. Moreover, an indication of the difference in the changes that occurred in the one or more databases resulting from the software test and the past changes that occurred in the one or more databases resulting from the past software test is displayed utilizing the at least one user interface.
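The snapshot-and-compare logic reduces to two diffs: before/after snapshots of the databases, and then the current diff against a past diff. A minimal sketch, with snapshots flattened into hypothetical key/value dicts:

```python
# Diff two configuration snapshots: keys whose values changed.
def snapshot_diff(before: dict, after: dict) -> dict:
    keys = set(before) | set(after)
    return {k: (before.get(k), after.get(k))
            for k in keys if before.get(k) != after.get(k)}

first  = {"users.count": 10, "orders.count": 5}   # before the software test
second = {"users.count": 11, "orders.count": 5}   # after the software test
changes = snapshot_diff(first, second)            # changes from this test

# Changes recorded from a past software test, for comparison.
past_changes = {"users.count": (10, 11), "orders.count": (5, 6)}

# Difference between this test's changes and the past test's changes.
difference = {k: (changes.get(k), past_changes.get(k))
              for k in set(changes) | set(past_changes)
              if changes.get(k) != past_changes.get(k)}
print(changes)
print(difference)
```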
Abstract:
A system, method, and computer program product are provided for generating a detailed design of at least one telecommunications based integration testing project. In use, a scope of at least one integration testing project is analyzed. Additionally, vendor-related information associated with the at least one integration testing project is tracked. Further, an activity library associated with the at least one integration testing project is generated. In addition, scenarios associated with the at least one integration testing project are determined. Furthermore, a high level design of the at least one integration testing project is presented for review. Still yet, testing instructions are generated based on the scenarios associated with the at least one integration testing project. Moreover, a detailed design of the at least one integration testing project is generated utilizing the testing instructions and the activity library.
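One way to picture the final step, expanding scenarios into testing instructions via the activity library, is the hypothetical sketch below:

```python
# Hypothetical activity library: each activity expands to testing instructions.
activity_library = {
    "provision": ["create account", "assign number"],
    "bill":      ["generate invoice", "verify charges"],
}
scenarios = [["provision", "bill"]]

# Detailed design: each scenario expanded into its testing instructions.
detailed_design = [
    {"scenario": s,
     "instructions": [step for activity in s for step in activity_library[activity]]}
    for s in scenarios
]
print(detailed_design)
```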
Abstract:
A system, method, and computer program product are provided for automatic high level testing project planning. In use, information associated with at least one testing project to be planned is received, the information including a plurality of project attributes associated with the at least one testing project. Additionally, one or more test planning rules are identified based on the received information, the one or more test planning rules including rules generated utilizing data associated with a plurality of previously performed testing projects. Further, one or more test planning conclusions applicable for the at least one testing project are determined based on the one or more test planning rules and the received information. Moreover, the one or more test planning conclusions are output utilizing at least one user interface.
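The rule-based planning step can be illustrated as predicates over project attributes, each yielding a conclusion when it fires; the rules, attributes, and conclusions below are hypothetical:

```python
# Hypothetical test planning rules: (predicate over attributes, conclusion).
rules = [
    (lambda p: p["size"] == "large", "allocate a dedicated test environment"),
    (lambda p: p["defect_history"] > 100, "add an extra regression cycle"),
    (lambda p: p["domain"] == "billing", "reuse billing test assets"),
]

# Received information: project attributes for the testing project.
project = {"size": "large", "defect_history": 150, "domain": "billing"}

# Test planning conclusions applicable to the project.
conclusions = [c for predicate, c in rules if predicate(project)]
print(conclusions)
```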
Abstract:
A system, method, and computer program product are provided for calculating risk associated with a software testing project. In use, a plurality of inputs associated with at least one software testing project are received. Additionally, risk elements are identified utilizing the plurality of inputs. Further, a weight is assigned to each of the identified risk elements, the weight capable of being adjusted based on user feedback. Moreover, an overall risk is calculated for the at least one software testing project based on the identified risk elements and assigned weights.
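The weighted-risk calculation admits a compact sketch; the risk elements, scores, and weights below are hypothetical, with a weight adjusted to mimic user feedback:

```python
# Hypothetical risk elements (scores in [0, 1]) and their weights.
risk_elements = {"new_vendor": 0.8, "tight_schedule": 0.6, "legacy_code": 0.4}
weights      = {"new_vendor": 2.0, "tight_schedule": 1.5, "legacy_code": 1.0}

def overall_risk(elements: dict, weights: dict) -> float:
    # Weighted average of the element scores.
    total = sum(weights.values())
    return sum(elements[k] * weights[k] for k in elements) / total

print(round(overall_risk(risk_elements, weights), 3))

# User feedback adjusts a weight; the overall risk is recalculated.
weights["tight_schedule"] = 2.5
print(round(overall_risk(risk_elements, weights), 3))
```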