The Automated Testing Handbook

Category: Publication
Published: 27.01.2020 | Words: 11326

Contents

About the Author
Introduction
    Why automate?
    When not to automate
    How not to automate
    Setting realistic expectations
    Getting and keeping management commitment
    Terminology
Fundamentals of Test Automation
    Maintainability
    Optimization
    Independence
    Modularity
    Context
    Synchronization
    Documentation
The Test Framework
    Common functions
    Standard tests
    Test templates
    Application Map
Test Library Management
    Change Control
    Version Control
    Configuration Management
Selecting a Test Automation Approach
    Capture/Playback: Structure, Advantages, Disadvantages, Comparison Considerations, Data Considerations
    Data-Driven: Structure, Advantages, Disadvantages, Data Considerations
    Table-Driven: Structure, Advantages, Disadvantages
The Test Automation Process
    The Test Team
    Test Automation Plan
    Planning the Test Cycle
    Test Suite Design
    Test Cycle Design
    Test Execution
    Test log
    Error log
    Analyzing Results
    Inaccurate results
    Defect tracking
    Test Metrics
    Management Reporting
    Historical trends

Introduction

Since software testing is a labor-intensive task, especially if done thoroughly, automation sounds instantly appealing.


But, as with anything, there is a cost associated with getting the benefits. Automation isn’t always a good idea, and sometimes manual testing is out of the question. The key is to know what the benefits and costs really are, then to make an informed decision about what is best for your circumstances. The unfortunate fact is that many test automation projects fail, even after significant expenditures of time, money and resources. The goal of this book is to improve your chances of being among the successful.

While it might be costly to be late to market, it can be catastrophic to deliver a defective product. Software failures can cost millions or even billions, and in some cases entire companies have been lost. So if you don't have enough people or time to perform adequate testing to begin with, adding automation will not reduce software instability and errors. Since it is well documented that software errors, even a single one, can cost millions, more than your entire testing budget, the first priority should be to deliver reliable software. Once that is achieved, then focus on optimizing time and costs.

In other words, if your software doesn't work, it doesn't matter how fast or how cheaply you deliver it. Automated software tests provide three key benefits: cumulative coverage to detect errors and reduce the cost of failure, repeatability to save time and reduce the cost to market, and leverage to improve resource productivity. But realize that the test cycle will be tight to begin with, so don't count on automation to shorten it; count on it to help you meet the deadline with a reliable product.

By increasing your coverage and thus reducing the probability of failure, automation can help to avoid the costs of support and rework, as well as potentially devastating failures. Cumulative coverage: It is a fact that applications change and gain complexity over their useful life; the feature set of an application grows steadily over time.

Therefore, the number of tests that are needed for adequate coverage is also constantly increasing. Just a 10% code change still requires that 100% of the features be tested. That is why manual testing can't keep up: unless you constantly increase test resources and cycle time, your test coverage will constantly decline. Automation can help with this by allowing you to accumulate your test cases over the life of the application so that both existing and new features can always be tested. Ironically, when test time is short, testers will often sacrifice regression testing in favor of testing new features.

The irony is that the greatest risk to the user is in the existing features, not the new ones! If something the customer is already doing stops working, or worse, starts doing the wrong thing, then you could halt operations. The loss of a new feature may be inconvenient or even embarrassing, but it is unlikely to be devastating. But this benefit will be lost if the automated tests are not designed to be maintainable as the application changes.

If they either have to be rewritten or require significant modifications to be reused, you will keep starting over instead of building on prior efforts. Therefore, it is essential to adopt an approach to test library design that supports maintainability over the life of the application. Leverage True leverage from automated tests comes not only from repeating a test that was captured while performed manually, but from executing tests that were never performed manually at all. For example, by generating test cases programmatically, you could yield thousands or more when only hundreds might be possible with manual resources.
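For example, here is a sketch of programmatic test case generation in Python; the input fields and boundary values shown are hypothetical, standing in for whatever dimensions your own application defines:

```python
import itertools

# Hypothetical input dimensions for a funds-transfer screen; real
# dimensions would come from your application's requirements.
amounts = [0, 1, 999_999, -1]             # boundary and invalid values
currencies = ["USD", "EUR", "JPY"]
account_types = ["checking", "savings", "closed"]

# The cross product yields every combination: 4 x 3 x 3 = 36 cases
# generated from only 10 hand-written values.
test_cases = [
    {"amount": a, "currency": c, "account": t}
    for a, c, t in itertools.product(amounts, currencies, account_types)
]

print(len(test_cases))  # 36
```

Even this toy example shows the leverage effect: adding one more value to any dimension multiplies, rather than adds to, the number of generated cases.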

Enjoying this benefit requires the proper test case and script design to allow you to take advantage of external data files and other constructs. Faster time to market: Because software has become a competitive weapon, time to market may be one of the key drivers for a project. In some cases, time is worth more than money, especially if it means releasing a new product or service that generates revenue. Automation can help reduce time to market by allowing test execution to happen 24x7. Once the test library is automated, execution is faster and can run longer than manual testing.

Of course, this benefit is only available once your tests are automated. Reduced cost of failure: Software is used for high-risk, mission-critical applications that represent revenue and productivity. A single failure could cost more than the entire testing budget for the next century! In one case a single bug resulted in costs of almost $2 billion. The National Institute of Standards and Technology estimates the cost of correcting defects at $59.5 billion a year, and USA Today claims a $100 billion annual cost to the US economy.

Automation can reduce the cost of failure by allowing increased coverage so that errors are uncovered before they have a chance to do real damage in production. Notice what was NOT listed as a benefit: reduced testing resources. The sad fact is that most test teams are understaffed already, and it makes no sense to try to reduce an already slim team. Instead, focus on getting a good job done with the time and resources you have.

In this Handbook we will present practical advice on how to realize these benefits while keeping your expectations realistic and your management committed.

When not to automate

If you have inexperienced testers who are new to the team, they make the best manual testers because they will likely make the same mistakes that users will.

Save automation for the experts. Temporary testers In other cases, the test team may be comprised primarily of personnel from other areas, such as users or consultants, who will not be involved over the long term. It is not at all uncommon to have a testfest where other departments contribute to the test effort. But because of the initial investment in training people to use the test tools and follow your library design, and the short payback period of their brief tenure, it is probably not time or cost effective to automate with a temporary team.

Again, let them provide manual test support while permanent staff handles automation. Insufficient time, resources If you don’t have enough time or resources to get your testing done manually in the short term, don’t expect a tool to help you. The initial investment for planning, training and implementation will take more time in the short term than the tool can save you. Get through the current crisis, then look at automation for the longer term. Keep in mind that automation is a strategic solution, not a short term fix.

How not to automate

Automation is more than capture/replay: If you acquired a test tool with the idea that all you have to do is record and play back the tests, you are due for disappointment. Although it is the most commonly recognized technique, capture/replay is not the most successful approach. As discussed in a later chapter, Selecting a Test Automation Approach, capture/replay does not result in a test library that is robust, maintainable or transferable as changes occur. Don't write a program to test a program! The other extreme from capture/replay is pure programming.

But if you automate your tests by trying to write scripts that anticipate the behavior of the underlying program and provide for each potential response, you will essentially end up developing a mirror version of the application under test! Where will it end? Who tests the tests?

Although appealing to some, this strategy is doomed: no one has the time or resources to develop two complete systems. Ironically, developing an automated test library that provides comprehensive coverage would require more code than exists in the application itself! This is because tests must account for positive, negative, and otherwise invalid cases for each feature or function.

Automation is more than test execution: So if it isn't capture/replay and it isn't pure programming, what is it? Think of it this way. You are going to build an application that automates your testing, which is actually more than just running the tests. You need a complete process and environment for creating and documenting tests, managing and maintaining them, executing them and reporting the results, as well as managing the test environment.

Just developing scores of individual tests does not comprise a strategic test automation system. Duplication of effort: The problem is, if you just hand an automation tool out to individual testers and command that they automate their tests, each one of them will address all of these issues in their own unique and personal way, of course. This leads to tremendous duplication of effort and can cause conflict when the tests are combined, as they must be.


Need for a framework: Instead, approach the automation of testing just as you would the automation of any application: with an overall framework and an orderly division of responsibilities. This framework should make the test environment efficient to develop, manage and maintain. How to develop a framework and select the best automation approach are the focus of this handbook. Remember, test tools aren't magic, but, properly implemented, they can work wonders!

There are three important things to remember when setting expectations about test automation: one, an initial as well as ongoing investment in planning, training and development must be made before any benefits are possible; two, the time savings come only when automated tests can be executed more than once, by more than one person, and without undue maintenance requirements; three, no tool can compensate for the lack of expertise in the test process. Test automation is strategic: If your test process is in crisis and management wants to throw money at a tool to fix it, don't fall for it. Test automation is a long-term, strategic solution, not a short-term band-aid. Buying a test tool is like joining a health club: the only weight you have lost is in your wallet!

You must use the club, sweat it out and invest the time and effort before you can get the benefits. Use consultants wisely: Along the same lines, be wary about expecting outside consultants to solve your problems. Although consultants can save you time by bringing experience to bear, they are not in and of themselves a solution. Think of consultants as you would a personal trainer: they are there to guide you through your exercises, not to do them for you! Paying someone else to do your sit-ups will not flatten your stomach.

For intensive manual test processes of stable applications, you may see an even faster payback.

Not everything can be automated: Remember, you must still allow time for tasks that cannot be automated; you will still need to gather test requirements, define test cases, maintain your test library, administer the test environment, and review and analyze the test results. On an ongoing basis you will also need time to add new test cases based on enhancements or defects, so that your coverage can constantly improve. Accept gradual progress: If you can't afford the time in the short term, then do your automation gradually. Target those areas where you will get the biggest payback first, then reinvest the time savings in additional areas until you get it all automated.

Some progress is better than none! Plan to keep staff: As pointed out earlier, don't plan to jettison the majority of your testing staff just because you have a tool. In most cases, you don't have enough testers to begin with: automation can help the staff you have be more productive, but it can't work miracles.

Granted, you may be able to reduce your dependence on temporary assistance from other departments or from contractors, but justifying testing tools based on reducing staffing requirements is risky, and it misses the point. The primary goal of automation should be to increase test coverage, not to cut testing costs. A single failure in some systems can cost more than the entire testing budget for the next millennium. The goal is not to trim an already slim testing staff; it is to reduce the risk and cost of software failure by expanding coverage.

Reinvest time savings: As your test automation starts to reap returns in the form of time savings, don't automatically start shaving the schedule. The odds are that there are other types of tests that you never had time for before, such as configuration and stress testing. If you can free up room in the schedule, look for ways to test at high volumes of users and transactions, or consider testing different platform configurations. Testing is never over!

When setting expectations, ask yourself this question: Am I satisfied with everything about our existing test process, except for the amount of time it takes to perform manually? If the answer is yes, then automation will probably deliver like a dream. But if the answer is no, then realize that while automation can offer great improvements, it is not a panacea for all quality and testing problems.

The most important thing to remember about setting expectations is that you will be measured by them. If you promise management that a testing tool will cut your testing costs in half, yet you only succeed in saving a fourth, you will have failed! So take a more conservative approach: be up front about the initial investment that is required, and offer cautious estimates about future savings.

In many cases, management can be satisfied with far less than you might be. For example, even if you only break even between the cost to automate and the related savings in direct costs, if you can show increased test coverage then there will be a savings in indirect costs as a result of improved quality. In many companies, better quality is more important than lower testing costs, because of the savings in other areas: failures can impact revenues, drive up support and development costs, and reduce customer confidence.

Adjust as you go: If one of your assumptions changes, adjust the schedule and expectations accordingly and let management know right away. For example, if the application is not ready when expected, or if you lose resources, recast your original estimates and inform everyone concerned.

Don’t wait until you are going to be late to start explaining why. No one likes surprises! Plan for the long term Be sure to keep focus on the fact that the test automation project will last as long as the application under test is being maintained.

Achieving automation is not a sprint; it is a long-distance run. Just as you are never through developing an application that is being actively used, the same applies to the test library.

Terminology

Test Case: A test case is a set of inputs and expected application responses that will confirm that a requirement has been met. Depending on the automation approach adopted, a test case may be stored as one or more data records, or may be stored within a test script.

Test Script: A test script is a series of commands or events stored in a script language file that executes a test case and reports the results. Like a program, a test script may contain logical decisions that affect the execution of the script, creating multiple possible pathways. Also, depending on the automation approach adopted, it may contain constant values or variables whose values change during playback. The automation approach will also dictate the degree of technical proficiency required to develop the test script. Test Cycle: A test cycle is a set of individual tests that are executed as a package, in a particular sequence.

Cycles are usually related to application operating cycles, or grouped by the area of the application they exercise, or by their priority or content. For example, you may have a build verification cycle that is used to establish acceptance of a new software build, as well as a regression cycle to assure that previous functionality has not been disrupted by changes or new features. Test Schedule: A test schedule consists of a series of test cycles and comprises a complete execution set, from the initial setup of the test environment through reporting and cleanup.
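The test script definition above (commands plus logical decisions that create multiple execution pathways) can be sketched as follows; the `Screen` class and message strings are hypothetical stand-ins for whatever interface a real test tool exposes:

```python
# Minimal sketch of a test script containing a logical decision.
# The Screen class stands in for the application interface your
# test tool would actually provide (an assumption here).

class Screen:
    def __init__(self, message=""):
        self.message = message

def delete_account_test(screen):
    """Return 'pass', 'fail', or 'skip' depending on the application's
    response: one script, multiple possible pathways."""
    if screen.message == "Account In Use":
        return "skip"          # alternate path: cannot delete right now
    if screen.message == "Account Deleted":
        return "pass"          # expected path
    return "fail"              # any other response is a failure

print(delete_account_test(Screen("Account Deleted")))  # pass
```

The branching is what distinguishes a script from a raw recording: a captured session can only replay one fixed pathway, while a script can respond to what the application actually does.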

Fundamentals of Test Automation

Automating testing is like automating accounting or any other business function: in each case, a computer is being instructed to perform a task previously performed manually. Whether these instructions are stored in something called a script or a program, they both have all of the characteristics of source code.

[Figure: test (application) expertise determines what to test and yields test cases; automation (development) expertise determines how to automate and yields test scripts.]

The fact that testware is software is the single most important concept to grasp!

Once this premise is understood, others follow. In most cases, the application source code will be managed by a source control or configuration management system.

These systems maintain detailed change logs that document areas of change to the source code. If you can’t get information directly from development about changes to the application, ask to be copied on the change log. This will at least give you an early warning that changes are coming your way and which modules are affected.

Cross-reference tests to the application: Identifying needed changes is accomplished by cross-referencing testware components to the application under test, using consistent naming standards and conventions. For example, by using a consistent name for the same window throughout the test library, when it changes each test case and test script which refers to it can be easily located and evaluated for potential modifications. These names and their usage are described more fully in the section on the Application Map.

Design to avoid regression: Maintainability can be designed into your test cases and scripts by adopting and adhering to an overall test framework, discussed in the next section.

Requirements measure readiness: Once you have them, requirements can be assigned priorities and used to measure readiness for release. Having requirements tied to tests also reduces confusion about which requirements have been satisfied or failed based on the results of the test, thus simplifying the test and error log reports. Unless you know what requirements have been proven, you don't really know whether the application is suitable for release.

A requirements matrix is a handy way of keeping track of which requirements have an associated test. A requirement that has too many tests may be too broadly defined, and should be broken down into separate instances, or it may simply have more tests than are needed to get the job done. Conversely, a test that is associated with too many requirements may be too complex and should be broken down into smaller, separate tests that are more targeted to specific requirements. There are tools available that will generate test cases based on your requirements. There are two primary approaches: one that is based on addressing all possible combinations, and one that is based on addressing the minimum possible combinations.

Using the former method, requirements are easier to define because interdependencies are not as critical, but the number of tests generated is greater. The latter method produces fewer tests, but requires a more sophisticated means of defining requirements so that relationships among them are stated with the mathematical precision needed to optimize the number of tests.
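To illustrate the difference in scale between the two approaches, here is a toy sketch in Python; the parameter names and values are hypothetical, and the greedy all-pairs reduction is only a naive illustration, not the sophisticated optimization a real tool would use:

```python
import itertools

# Hypothetical test parameters and their possible values.
params = {
    "browser": ["IE", "Netscape", "Opera"],
    "os": ["Win95", "Win98", "NT"],
    "db": ["Oracle", "Sybase", "DB2"],
}
names = list(params)

# Former method: the full cross product of every value combination.
full = list(itertools.product(*params.values()))   # 3 * 3 * 3 = 27 cases

# Latter method (naive sketch): cover every PAIR of values at least
# once, keeping a combination only if it covers an unseen pair.
needed = set()
for (i, a), (j, b) in itertools.combinations(list(enumerate(names)), 2):
    for va, vb in itertools.product(params[a], params[b]):
        needed.add((i, va, j, vb))

suite = []
for combo in full:
    pairs = {(i, combo[i], j, combo[j])
             for i, j in itertools.combinations(range(len(combo)), 2)}
    if pairs & needed:          # covers at least one new pair
        suite.append(combo)
        needed -= pairs

print(len(full), len(suite))    # the reduced suite is much smaller
```

The pairwise suite still exercises every two-way interaction of values, which is why the minimal-combinations approach can be defensible despite generating far fewer tests.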

Independence

Independence refers to the degree to which each test case stands alone. That is, does the success or failure of one test case depend on another, and if so, what is the impact of the sequence of execution? This is an issue because it may be necessary or desirable to execute less than all of the test cases within a given execution cycle; if dependencies exist, then planning the order of execution becomes more complex.

Independent data: Independence is most easily accomplished if each test case verifies at least one feature or function by itself, without reference to other tests. This can be a problem where the state of the data is key to the test. For example, a test case that exercises the delete capability for a record in a file should not depend on a previous test case that creates the record; otherwise, if the previous test is not executed, or fails to execute properly, then the later test will also fail because the record will not be available for deletion. In this case, either the beginning state of the database should contain the necessary record, or the test that deletes the record should first add it. Independent context: Independence is also needed where application context is concerned.

For example, one test is expected to commence at a particular location, but it relies on a previous test to navigate through the application to that point. Again, if the first test is not successfully executed, the second test could fail for the wrong reason. Your test framework should give consideration to selecting common entry and exit points to areas of the application, and to assuring that related tests begin and end at one of them. Result independence: It is also risky for one test case to depend on the successful result of another.

For example, a test case that does not expect an error message should provide assurance that, in fact, no message was issued. If one is found, steps should be added to clear the message. Otherwise, the next test case may expect the application to be ready for input when in fact it is in an error status.

If proper attention is paid to independence, the test execution cycle will be greatly simplified. In those cases where total independence is not possible or desirable, be certain that the dependencies are well documented; the sequence, for example, might be incorporated into the naming conventions for test cases (ADD RECORD 01, ADD RECORD 02, etc.).
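As a concrete sketch of the data-independence principle described above, a delete test can establish its own precondition instead of relying on an earlier add test having run; the in-memory `database` dict and record layout here are hypothetical stand-ins for a real data store:

```python
# Sketch of an independent test: the delete test creates the record
# it needs rather than assuming an earlier add test already ran.

database = {}

def add_record(account_id):
    database[account_id] = {"balance": 0}

def delete_record(account_id):
    del database[account_id]

def test_delete_account():
    account_id = "112-0000"
    if account_id not in database:      # establish own precondition
        add_record(account_id)
    delete_record(account_id)
    return account_id not in database   # True means the test passed

print(test_delete_account())  # True, regardless of execution order
```

Because the test sets up its own data, it passes whether it runs first, last, or alone, which is exactly the property that makes partial execution cycles easy to plan.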

The main menu approach: The simplest solution to beginning and ending context is to design all tests to begin and end at the same point in the application. This point must be one from which any area of the application can be accessed. In most cases, this will be the main menu or SIGNON area. By designing every test so that it commences at this point and ends there, tests can be executed in any order without considering context.

Enabling error recovery: Adopting a standard starting and ending context also simplifies recovery from unexpected results. A test which fails can, after logging its error, call a common recovery function to return context to the proper location so that the next test can be executed. Granted, some applications are so complex that a single point of context may make each individual test too long; in these cases, you may adopt several, such as sub-menus or other intermediate points. But be aware that your recovery function will become more complex, as it must have sufficient logic to know which context is appropriate. Designing test suites, or combinations of tests, will also be more complex, as consideration must be given to grouping tests which share common contexts.
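A common recovery function for the main-menu approach might look like the following sketch; `current_screen` and `press_escape` are hypothetical stand-ins for a test tool's screen-query and keystroke primitives, and the backout limit is an arbitrary safeguard:

```python
# Sketch of a common recovery function: back out of whatever state a
# failed test left behind until the main menu is reached, or give up.

MAX_BACKOUTS = 10   # guard against looping forever

def recover_to_main_menu(current_screen, press_escape):
    """Return True once the main menu is reached, False if recovery
    fails and the environment needs manual intervention."""
    for _ in range(MAX_BACKOUTS):
        if current_screen() == "MAIN_MENU":
            return True
        press_escape()          # close one dialog or back up one level
    return False

# Simulated session: two dialogs deep, each Escape closes one.
screens = ["DIALOG_2", "DIALOG_1", "MAIN_MENU"]
print(recover_to_main_menu(lambda: screens[0], lambda: screens.pop(0)))  # True
```

The bounded loop is the important design choice: a recovery routine that itself can hang is worse than none at all.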

The key to context is to remember that your automated tests do not have the advantage that you have as a manual tester: they cannot make judgment calls about what to do next. Without consistency or logic to guide them, automated tests are susceptible to the slightest aberration. By proper test design, you can minimize the impact of one failed test on others, and simplify the considerations when combining tests into suites and cycles for execution.

However, this approach also requires that some form of timeout processing be available; otherwise, a failed response may cause playback to suspend indefinitely. Remote indicators: When a remote host or network server is involved, there is yet another dimension of synchronization. For example, the local application may send a data request to the host; while it is waiting, the local application itself is not busy, thus risking a false indication that the host has completed its response or that the application is ready for input.

In this case, the tool may provide protocol-specific drivers, such as IBM 3270 or 5250 emulation, which monitor the host status directly through HLLAPI (high-level language application programming interface). If your tool does not provide this, you may have to modify your scripts to detect application readiness through more specific means, such as waiting for data to appear. Synchronization is one of the issues that is unique to automated testing: a person performing a manual test instinctively waits for the application to respond or become ready before proceeding ahead.
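A generic way to implement the "wait for data to appear" technique is a polling loop with a timeout, so that a failed response cannot suspend playback indefinitely. This is only a sketch; the default timeout and polling interval are arbitrary choices:

```python
import time

# Poll until `condition` returns True or the timeout expires, instead
# of blocking forever on a response that may never come.

def wait_for(condition, timeout=30.0, interval=0.5):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False    # caller should log a synchronization failure

# Example: wait for (simulated) host data to appear on the third poll.
responses = iter([False, False, True])
print(wait_for(lambda: next(responses), timeout=5, interval=0.01))  # True
```

Using a monotonic clock for the deadline is deliberate: wall-clock time can jump (for example, at a daylight-saving change) and silently break the timeout.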

With automated tests, you need techniques to make this decision so that it is consistent across a wide variety of situations.

Document for transferability: It may not be evident from reading an undocumented capture/playback script, for example, that a new window is expected to appear at a certain point; the script may simply indicate that a mouse click is performed at a certain location.

Only the person who created the script will know what was expected; anyone else attempting to execute the script may not understand what went wrong if the window does not appear and subsequent actions are out of context. So, without adequate documentation, transferability from one tester to another is limited. Mystery tests accumulate: Ironically, mystery tests tend to accumulate: if you don't know what a test script does or why, you will be reluctant to delete it! This leads to large volumes of tests that aren't used, but nevertheless require storage, management and maintenance. Always provide enough documentation to tell what the test is expected to do.

More is better: Unlike some test library elements, the more documentation, the better! Assume as little knowledge as possible, and provide as much information as you can think of. Document in context: The best documentation is inside the test itself, in the form of comments or description, so that it follows the test and explains it in context. Even during capture/playback recording, some test tools allow comments to be inserted.

If this option is not available, then add documentation to test data files or even just on paper. Fundamentals of Test Automation? Page 31 Page 32? The Automated Testing Handbook The Test Framework? Page 33 Following are suggested common functions: SETUP The SETUP function prepares the test environment for execution.

It is executed at the beginning of each test cycle in order to verify that the proper configuration is present, the correct application version is installed, all necessary files are available, and all temporary or work files are deleted. It may also perform housekeeping tasks, such as making backups of permanent files so that later recovery is possible in the event of a failure that corrupts the environment. If necessary, it may also initialize data values, or even invoke sorts that improve database performance. Basically, SETUP means what it says: it performs the setup of the test environment.

It should be designed to start and end at a known point, such as the program manager or the command prompt. SIGNON: The SIGNON function loads the application and assures that it is available for execution. It may provide for the prompting of the user ID and password necessary to access the application from the point at which the SETUP routine ends, then operate the application to another known point, such as the main menu area.

It may also be used to start the timer in order to measure the entire duration of the test cycle. SIGNON should be executed after SETUP at the beginning of each test execution cycle, but it may also be called as part of a recovery sequence in the event a test failure requires that the application be terminated and restarted. DRIVER: The DRIVER function is one which calls a series of tests together as a suite or cycle.

Some test tools provide this capability, but if yours does not you should plan to develop this function. Ideally, this function relies upon a data file or other means of storing the list of tests to be executed and their sequence; if not, there may be a separately developed and named DRIVER function for each test suite.


By designing your test framework to include common functions, you can prevent the redundancy that arises when each individual tester attempts to address the same issues. You can also promote the consistency and structure that provide maintainability. WALKTHRU: As described above, the WALKTHRU standard test navigates through the application, assuring that each menu item, window and control is present and in the expected default state.

It is useful to establish that a working copy of the application has been installed and that there are no major obstacles to executing functional tests. Each test execution cycle can take advantage of this standard test in order to assure that fatal operational errors are uncovered before time and effort are expended with more detailed tests. This type of test could be executed by the development group after the system build, before the application is delivered for testing, or by the production support group after the application has been promoted into the production environment.

STANDARDS

The STANDARDS test is one which verifies that application design standards are met for a given component. While the WALKTHRU test assures that every menu item, window and control is present and accounted for, the STANDARDS test verifies that previously agreed-upon standards have been satisfied.

Test Case Header

Application: General Ledger 5.1.1        Test Case ID: 112-0000
Date Created: 01/01/2X                   By: Teresa Tester
Last Updated: 01/11/2X                   By: Lucinda Librarian

Test Description: This test case deletes an existing chart of accounts record that has a zero balance. The script DELETE_ACCTS is used to apply the test case.

Inputs: This test case begins at the Account Number edit control; the account number 112 and sub-account number 0000 are entered, then the OK button is clicked.

Outputs: The above referenced account is retrieved and displayed. Click DELETE button. The message Account Deleted appears. All fields are cleared and focus returns to the Account Number field.

Special requirements: The security level for the initial SIGNON to the general ledger system must permit additions and deletions.

Dependencies: Test Case 112-0000 should be executed by the ADD_ACCTS script first so that the record will exist for deletion. Otherwise, the completed chart of accounts file ALL_ACCTS should be loaded into the database before execution.

Test Vocabulary

Think of your Application Map as defining the vocabulary of your automated tests. This vocabulary spells out what words can be used in the test library to refer to the application and what they mean. Assuring that everyone who contributes to the test process uses the same terminology will not only simplify test development, it will assure that all of the tests can be combined into a central test library without conflict or confusion.

Naming Conventions

In order to develop a consistent vocabulary, naming conventions are needed. A naming convention simply defines the rules by which names are assigned to elements of the application. The length and format of the names may be constrained by the operating system and/or test automation tool. In some cases, application elements will be identified as variables in the test script; therefore, the means by which variables are named by the tool may affect your naming conventions.

Also, test scripts will be stored as individual files whose names must conform to the operating system's conventions for file names.

Cross-reference names to application

Following is an excerpt from the Application Map for the sample general ledger system; the Data-Driven approach is assumed.

Object Names

Conventions: Sub-menus are named within the higher-level menu; windows are named within their parent menus.

Controls are named within their parent window. Data files are named by the script file that applies them; script files are named by the parent window.

Name       Description           Object Type      Parent
CHT_ACCTS  Chart of accounts     Window           CHT_MENU
CHT_ACCTS  Text file             .TXT             CHT_ACCTS
CHT_ACCTS  Script file           .SLF             CHT_ACCTS
ACCTNO     Account number        Edit control     CHT_ACCTS
SUBACCT    Sub account number    Edit control     CHT_ACCTS
ACCTDESC   Account description   Edit control     CHT_ACCTS
STMTTYPE   Statement type        Radio button     CHT_ACCTS
ACCTTYPE   Account type          List box         CHT_ACCTS
HEADER     Header                Check box        CHT_ACCTS
MESSAGE    Message               Information box  CHT_ACCTS
OK         Accept record         Push button      CHT_ACCTS
CANCEL     Cancel record         Push button      CHT_ACCTS
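In code, an Application Map is simply a lookup table from agreed-upon names to object attributes. The sketch below is illustrative only: the dict layout and the `resolve` helper are assumptions, and just a few entries from the excerpt are shown.

```python
# Sketch of an Application Map as a lookup table: each object name maps to
# its description, object type, and parent, mirroring the columns of the
# excerpt above. Not tied to any particular test tool.

APP_MAP = {
    "ACCTNO":   {"desc": "Account number",     "type": "Edit control", "parent": "CHT_ACCTS"},
    "SUBACCT":  {"desc": "Sub account number", "type": "Edit control", "parent": "CHT_ACCTS"},
    "STMTTYPE": {"desc": "Statement type",     "type": "Radio button", "parent": "CHT_ACCTS"},
    "OK":       {"desc": "Accept record",      "type": "Push button",  "parent": "CHT_ACCTS"},
}

def resolve(name):
    """Return the map entry for a name, so every script shares one vocabulary."""
    try:
        return APP_MAP[name]
    except KeyError:
        raise KeyError(f"'{name}' is not in the Application Map vocabulary")
```

Because every script resolves names through the same table, a renamed control is corrected in one place instead of in every test that touches it.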

Change log

The test librarian should manage the change control process, keeping either a written or electronic log of all changes to the test library. This change log should list each module affected by the change, the nature of the change, the person responsible, and the date and time. Regular backups of the test library are critical, so that unintended or erroneous changes can be backed out if needed.

Test your tests

Synchronize with source control

There should also be some level of correspondence between the change log for the application source and the test library. Since changes to the application will often require changes to the affected tests, the test librarian may take advantage of the application change log to monitor the integrity of the test library. In fact, it is ideal to use the same source control system whenever possible. If the change to a test reflects a new capability in a different application version, then the new test should be checked into a different version of the test library instead of overwriting the test for the prior version.

See Version Control, following, for more information.

Often, multiple application versions require testing; for example, fixes may be added to the version in the field, while enhancements are being added to the next version planned for release.

Multiple test library versions

Proper version control of the test library allows a test execution cycle to be performed against the corresponding version of the application without confusing changes made to tests for application modifications in subsequent versions. This requires that more than one version of the test library be maintained at a time.

It is risky to test in one environment and deliver in another, since all of these variables will impact the functionality of the system.

Test integrity requires configuration management

This means that configuration management for the test environment is crucial to test integrity. It is not enough to know what version of the software was tested: you must know what version and/or configuration of every other variable was tested as well. Granted, you may not always be able to duplicate the production environment in its entirety, but if you at least know what the differences are, you know where to look if a failure occurs.

What is the skill set of the test team?
_____ Primarily technical
_____ Some technical, some non-technical
_____ Primarily non-technical

How well documented is the test process?
_____ Well documented
_____ Somewhat documented
_____ Not documented

How stable is the application?
_____ Stable
_____ Somewhat stable
_____ Unstable

Based on your answers to these questions, you should select an automation approach that meets your needs. Each of the approaches is described in more detail below.

Approach Profile

Capture/Playback: application already in test phase or maintenance; primarily non-technical test team; somewhat or not documented test process; stable application.

Data-Driven: application in code or early test phase; some technical, some non-technical test team; well or somewhat documented test process; stable or somewhat stable application.

Table-Driven: application in planning, analysis or design; some technical, most non-technical test team; well documented test process; unstable or stable application.

These profiles are not hard and fast, but they should indicate the type of approach you should consider. Remember that you have to start from where you are now, regardless of where you want to end up.

With a little prior planning, it is usually possible to migrate from one method to another as time and expertise permit.

Select menu item Chart of Accounts>>Enter Accounts
Type 100000
Press Tab
Type Current Assets
Press Tab
Select Radio button Balance Sheet
Check box Header on
Select list box item Asset
Push button Accept
Verify text @ 562,167 Account Added

Notice that the inputs (selections from menus, radio buttons, list boxes, check boxes, and push buttons, as well as text and keystrokes) are stored in the script. In this particular case, the output (the expected message) is explicit in the script; this may or may not be true with all tools: some simply capture all application responses automatically, instead of allowing or requiring that they be explicitly declared.

See Comparison Considerations below for more information.

Requires manual capture

Except for reproducing errors, this approach offers very little leverage in the short term; since the tests must be performed manually in order to be captured, there is no real leverage or time savings.

In the example shown, the entire sequence of steps must be repeated for each account to be added, updated or deleted.

Application must be stable

Also, because the application must already exist and be stable enough for manual testing, there is little opportunity for early detection of errors; any test that uncovers an error will most likely have to be recaptured after the fix in order to preserve the correct result.

Redundancy and omission

Unless an overall strategy exists for how the functions to be tested will be distributed across the test team, the probability of redundancy and/or omission is high: each individual tester will decide what to test, resulting in some areas being repeated and others ignored.

Assuring efficient coverage means you must plan for traceability of the test scripts to functions of the application so you will know what has been tested and what hasn't.

Tests must be combined

It is also necessary to give overall consideration to what will happen when the tests are combined; this means you must consider naming conventions and script development standards to avoid the risk of overwriting tests or the complications of trying to execute them as a set.

Lack of maintainability

Although subsequent replay of the tests may offer time savings for future releases, this benefit is greatly curtailed by the lack of maintainability of the test scripts. Because the inputs and outputs are hard-coded into the scripts, relatively minor changes to the application may invalidate large groups of test scripts. For example, changing the number or sequence of controls in a window will impact any test script that traverses it, so a window which has one hundred test transactions executed against it would require one hundred or more modifications for a single change.

Short useful script life

This issue is exacerbated by the fact that the test developer will probably require additional training in the test tool in order to be able to locate and implement necessary modifications. Although it may not be necessary to know the script language to capture a test, it is crucial to understand the language when making changes. As a result, the reality is that it is easier to discard and recapture scripts, which leads to a short useful life and a lack of cumulative test coverage.

No logic means more tests fail

Identify results to verify

For fixed-screen-format character-based applications, the comparison criteria often include the entire screen by default, with the opportunity to exclude volatile areas such as time and date. In the case of windowed applications or those without a fixed screen format, it may become necessary to rely only on selected areas. In either event, it is critical to evaluate which areas of the display are pertinent to the verification and which are not.

Use text instead of bitmaps when possible

Verify by inclusion instead of exclusion

If your tool permits it, define the test results by inclusion rather than exclusion. That is, define what you are looking for instead of what you are not looking at (such as everything except what is masked out). Explicit result verification is easier to understand and maintain: there is no guesswork about what the test is attempting to verify.

Having said that, however, also be aware that minimally defined results may allow errors to go unnoticed: if, for example, system messages may be broadcast asynchronously, then you might miss an error message if you are not checking the system message area.

Of course your tool will control the types of comparison available to you and how they are defined, to some degree. Familiarize yourself with your options and adopt a consistent technique.

If your test tool can store its scripts in a text format, you can use your favorite word processor to copy the script for a single transaction, then simply search and replace the data values for each iteration. That way, you can create new tests without having to perform them manually!

Select menu item Chart of Accounts>>Enter Accounts
Open file CHTACCTS.TXT           * Open test data file
Label NEXT                       * Branch point for next record
Read file CHTACCTS.TXT           * Read next record in file
End of file?                     * Check for end of file
If yes, goto END                 * If last record, end test
Type ACCTNO                      * Enter data for account #
Press Tab
Type ACCTDESC                    * Enter data for description
Press Tab
Select Radio button STMTTYPE     * Select radio button for statement
Is HEADER = H?                   * Is account a header?
If yes, Check box HEADER on      * If so, check header box
Select list box item ACCTTYPE    * Select list box item for type
Push button Accept
Verify text MESSAGE              * Verify message text
If no, Call LOGERROR             * If verify fails, log error
Press Esc                        * Clear any error condition
Call LOGTEST                     * Log test case results
Goto NEXT                        * Read next record
Label END                        * End of test

Example data records:
Current Assets    Balance Sheet
Cash in Banks     Balance Sheet

Is menu item VALUE enabled?      * Is the menu item enabled?
If no, Call LOGERROR             * If not, log error
Select menu item VALUE           * Select the menu item
If no, Call LOGERROR             * If not, log error
Resume                           * Return to main script

Example file contents:

Test Case    Window             Object          Method       Value                              On Pass   On Fail
Add Account  MAINMENU           CHART.MENU      Select       Chart of Accounts>>Enter Accounts  Continue  Abort
Add Account  Chart of Accounts  Account Number  Enter        100000                             Continue  Continue
Add Account  Chart of Accounts  Description     Enter        Current Assets                     Continue  Continue
Add Account  Chart of Accounts  Statement Type  Select       Balance Sheet                      Continue  Continue
Add Account  Chart of Accounts  Header          Check        On                                 Continue  Continue
Add Account  Chart of Accounts  Account Type    Select       Asset                              Continue  Continue
Add Account  Chart of Accounts  OK              Push                                            Continue  Continue
Add Account  Chart of Accounts  Message         Verify Text  Account Added                      Continue  Continue

One file, multiple scripts

A single test data file in the Table-Driven approach is usually processed by multiple scripts.

In addition to common and standard scripts, there will be a master script that reads the test file and calls the related method scripts. Each object and method will have its own script made up of the commands and logic necessary to execute it.

Multiple records, a single test case

A single test case is comprised of multiple records, each containing a single step. The test case identifier should be stored in each data record to which it relates.

This allows a single set of scripts to process multiple test cases. Note in the example that the test results are logged for each step, instead of at the end of the test case.
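A master script that reads such records and dispatches to method scripts can be sketched as a simple loop. In this illustrative Python sketch, the handler signature, the record keys, and the per-step logging callback mirror the example table's columns but are otherwise assumptions.

```python
# Sketch of a Table-Driven master script: each record names a window, object,
# method and value; the loop looks up a handler for the method, logs a result
# per step, and honors the On Fail column. Purely illustrative.

def run_table(records, handlers, log_step):
    """Process records in order; return False if an Abort step fails."""
    for rec in records:
        handler = handlers[rec["Method"]]            # e.g. Select, Enter, Verify Text
        ok = handler(rec["Window"], rec["Object"], rec["Value"])
        log_step(rec["Test Case"], rec["Object"], ok)   # result logged per step
        if not ok and rec.get("On Fail") == "Abort":
            return False
    return True
```

The logic for each method lives in one handler, so testers who write records never touch script code.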

…(screens, windows, controls) and methods have been defined.

Minimized maintenance

Portable architecture between applications

Another important advantage of Table-Driven is that the test library may be easily ported from one application to another. Since most applications are composed of the same basic components (screens, fields and keys for character-based applications; windows and controls for graphical applications), all that is needed to move from one to another is to change the names and attributes of the components. Most of the logic and common routines can be left intact.

Portable architecture between tools

This approach is also portable between test tools. As long as the underlying script language has an equivalent set of commands, test cases in this format can be executed by a script library developed in any tool. This means you are free to use different tools for different platforms if necessary, or to migrate to another tool.

No tool knowledge needed to create test cases

Because logic is defined and stored only once per method, with substantial implied logic for verifying the context and state of the application to assure proper playback, individual testers can create test cases without understanding scripting or programming. All that is required is a knowledge of the application components, their names, and the valid methods and values which apply to them.

Development phases: Planning, Requirements, Design, Code, Test, Maintain
Testware: Test Plan, Test Cases, Test Scripts, Test Execution/Maintenance

Unfortunately, not every test effort commences at the earliest phase of the software development process. Depending on where your application is in the timeline, these activities may be compressed and slide to the right, but in general all of these steps should be completed.

You must also be sure that the person in each role has the requisite authority to carry out their responsibilities; for example, the team leader must have control over the work of the team members, and the test librarian must be able to enforce procedures for change and version control. Following are the suggested members of the test team and their respective responsibilities:

Team Leader

The Team Leader is responsible for developing the Test Plan and managing the team members according to it, as well as coordinating with other areas to achieve the test effort. The Team Leader must have the authority to assign responsibilities and control the work of those who are dedicated to the test team.

Test Developers

Test Developers are experts in the application functionality, responsible for developing the test cases, executing them, and analyzing and reporting the results. They should be trained in how to develop tests, whether as data records or as scripts, and in the use of the test framework.

Script Developers

Script Developers are experts in the test tool, ideally with technical programming backgrounds. They are responsible for developing and maintaining the test framework and supporting scripts, and for publishing the Application Map.

Test Librarian

The Test Librarian is responsible for managing the configuration, change and version control for all portions of the test library. This includes defining and enforcing check-in and check-out procedures for all data and related documentation.

User Liaison

The User Liaison represents the user community of the application under test and is responsible for final approval of the test plan or any changes to it, as well as for working with the Test Developers to identify test cases and gather sample documents and data.

Although the User Liaison may not be a dedicated part of the testing organization, he or she must have dotted-line responsibility to the Test Team to assure the acceptance criteria are communicated and met.

Development Liaison

The Development Liaison represents the developers who will provide the application software for test, and is responsible for delivering unit test cases and informing the Test Librarian of any changes to the application or its environment. Although the Development Liaison may not be a dedicated part of the testing organization, he or she must have dotted-line responsibility to the Test Team to assure the software is properly unit tested and delivered in a known state to the Test Team.

Systems Liaison

The Systems Liaison represents the systems or network support group and database administrator, and is responsible for supporting the test environment to assure that the Test Team has access to the correct platform configuration and database for test execution. The Systems Liaison must also inform the Test Librarian of any changes to the test platform, configuration or database.

This section is used to control additions and changes to the plan. Since the plan will likely be modified over time, keeping track of all of the changes is important.

Describe the application under test in this section. Be sure to specify the version number. If only a portion is to be automated, describe that as well. The statement of scope is just as important for describing what will be tested as what will not be, and who will be responsible.

List the names and roles of the test team members, and cross-reference each of the steps to the responsible party(ies).

Be sure you have a handle on the test environment and configuration. These elements can affect compatibility and performance just as much as the application itself. This includes everything about the environment, including operating systems, databases and any third-party software.

Milestones and signoffs:

Changes completed; application installed           SIGNOFF by development
Test execution completed                           SIGNOFF by team leader
Result analysis and defect reporting               SIGNOFF by team leader
Ad hoc and usability testing; performance testing  SIGNOFF by customer
All tests executed; no known or waived defects     SIGNOFF by customer, systems, and team leader
System release                                     SIGNOFF by all test staff

Planning the Test Cycle

In an automated environment, the test cycle must be carefully planned to reduce the amount of oversight or interaction required. Ideally, an execution cycle should be capable of automatically preparing and validating the test environment, executing test suites or individual tests in sequence, producing test result reports, and performing final cleanup.

…individual tests, beginning and ending context, as well as any data or sequence dependencies on other test suites.

Context

Beginning and ending context of the cycle should be the same point, usually the Program Manager or command prompt.

Care should be taken to synchronize the suites within the cycle to assure that the context for the first and last suite meets this requirement. In addition to assuring that the test platform is configured, it may be important to initialize the state of the database or other data elements. For example, a clean version of the database may be restored, or a subset appended or overwritten, in order to assure that the data is in a known state before testing begins.

Data elements, such as error counters, may also require initialization to assure that previous test results have been cleared.

Plan sequence

A test schedule is often comprised of a set of test cycles. The sequence should reflect any dependencies of either context or data, and standard tests, such as a WALKTHRU, should be packaged as well.

A test schedule template may be useful for assuring that all standard tests and tasks are included for each run.

Cleanup

The cycle should end with the cleanup of the test environment, such as removing work files, making file backups, compiling historical results, and any other housekeeping tasks.

Performance measurements may also include the overall time required to perform certain functions, such as a file update or other batch process.

It is of course essential to establish the performance criteria for the application under test, then assure that the necessary tests are executed and measurements taken to confirm whether the criteria are in fact achieved or not.

Configuration

Every test log should clearly indicate the configuration against which it was executed. This may take the form of a header area or comments. If subsequent logs show widely varying results, such as in the area of performance, then any changes to the configuration may provide a clue.

Totals

Total test cases executed, passed and failed, as well as the overall elapsed time, should be provided at the end of the execution log to simplify the updating of historical trends.

Version: _____

Test Case    Start       End        Status
…            08:10:12    08:11:12   Passed
…            08:11:15    08:12:21   Passed
…            08:12:23    08:13:25   Passed
…            08:13:29    08:14:23   Passed
…            08:14:34    08:15:42   Passed
…            08:15:45    08:16:50   Passed
…            08:16:53    08:18:01   Passed
…            08:18:05    08:19:18   Passed
…            08:19:20    08:19:28   Passed
…            08:21:02    08:22:19   Failed

Cases Passed: 9    Cases Failed: 1

Test Log Summary

             Prior    New    Fixed    Remaining
Priority 1       9     10        9           10
Priority 2      58     10       22           46
Priority 3      70     25       30           65
Total          137     45       61          121

Total Passed: 172    Total Executed: 217
Ratios: 21% defects, 55% recurrence
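A log summary like this obeys simple bookkeeping: the defects remaining at each priority level are those carried forward plus new ones found, less those fixed, and the defect ratio is the share of executed cases that failed. The helpers below are an illustrative sketch of that arithmetic, not part of any tool.

```python
# Sketch of the defect-trend arithmetic behind an execution log summary.
# trend_row: defects still open at a priority level after a cycle.
# defect_ratio: percentage of executed test cases that failed.

def trend_row(prior, new, fixed):
    """remaining = carried forward + newly found - fixed this cycle"""
    return prior + new - fixed

def defect_ratio(failed, executed):
    """percent of executed cases that failed, rounded to a whole number"""
    return round(100.0 * failed / executed)
```

Running these totals per priority level, rather than as one grand count, keeps a flood of cosmetic defects from masking a single critical one.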

Just as developers may introduce one problem when fixing another, test cases and scripts are subject to error when modifications are made.

Duplicate failure

A duplicate failure is a failure which is attributable to the same cause as another failure.

For example, if a window title is misspelled, this should be reported as only one defect; however, depending on what the test is verifying, the name of the window might be compared multiple times. It is not accurate to report the same failure again and again, as this will skew test results. For example, if a heavily-used transaction window has an error, this error may be reported for every transaction that is entered into it; therefore, if there are five hundred transactions, there will be five hundred errors reported.

Once that error is fixed, the number of errors will drop by five hundred. Using these figures to measure application readiness or to project the time for release is risky: it may appear that the application is seriously malfunctioning, or that errors are being corrected at an astronomical rate, neither of which is true.

False success from test defect

A false success occurs when a test fails to verify one or more aspects of the behavior, thus reporting that the test was successful when in fact it was not. This can happen for a number of reasons. One reason may be that the test itself has a defect, such as a logic path that drops processing through the test so that it bypasses certain steps.

This type of false success can be detected by measurements such as elapsed time: if the test completes too quickly, for example, this may suggest that it did not execute properly.

False success from missed error

Code coverage

A source-level tool is necessary to provide this metric, and often it requires that the code itself be instrumented, or modified, in order to capture the measurement. For this reason, developers are usually the only ones equipped to capture this metric, and then only during their unit test phase. Though helpful, code coverage is not a sure indicator of test coverage.

Just because all of the code was executed during the test, that doesn't mean that errors are unlikely. It takes only a single line or character of code to create a problem. Likewise, code coverage only measures the code that is present: it can't measure the code that is missing. When it is available, however, code coverage can be used to help you evaluate how thorough your test cases are. If your coverage is low, analyze the areas which are not exercised to determine what types of tests need to be added.

Requirements coverage

Requirements coverage measures the percentage of the requirements that were tested. Again, as with code coverage, this does not indicate that the requirements were met, just that they were tested. For this metric to be truly meaningful, you must monitor the difference between simple coverage and effective coverage. There are two prerequisites to this metric: one, that the requirements are known and documented, and two, that the tests are cross-referenced to the requirements. In many cases, the application requirements are not documented sufficiently for this metric to be taken or to be meaningful.

If they are documented, though, this measurement can tell you how much of the expected functionality has been tested.

Requirements met

However, if you have taken care to associate requirements with your test cases, you may be able to measure the percentage of the requirements that have been met, that is, the number that passed the test. Ultimately, this is a far more meaningful measurement, since it tells you how close the application is to meeting its intended purpose.

Priority requirements

Because requirements can vary from critical to important to desirable, simple percentage coverage may not tell you enough.

It is better to rate requirements by priority, or risk, then measure coverage at each level. For example, priority level 1 requirements might be those that must be met for the system to be functional, priority 2 those that must be met for the system to be acceptable, level 3 those that are necessary but not critical, level 4 those that are desirable, and level 5 those that are cosmetic. In this scheme, 100% effective coverage of level 1 and 2 requirements would be more important than 90% coverage of all requirements; even missing a single level 1 could render the system unusable.
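Effective coverage by priority level is straightforward to compute once each requirement carries a priority and a pass/fail status. The sketch below is illustrative; the field names are assumptions, not any tool's format.

```python
# Sketch of effective coverage by priority: for each priority level, the
# coverage is the share of requirements whose associated tests passed.
# The 'priority' and 'passed' keys are illustrative field names.

def effective_coverage(requirements):
    """requirements: list of dicts with 'priority' and 'passed' keys.
    Returns {priority: fraction of requirements at that level that passed}."""
    by_level = {}
    for req in requirements:
        total, met = by_level.get(req["priority"], (0, 0))
        by_level[req["priority"]] = (total + 1, met + (1 if req["passed"] else 0))
    return {lvl: met / total for lvl, (total, met) in by_level.items()}
```

Reporting a separate figure per level makes a gap at priority 1 visible even when the overall percentage looks healthy.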

If you are strapped for time and resources (and who isn't), it is worth the extra time to rate your requirements so you can measure your progress and the application's readiness in terms of the effective coverage of priority requirements, instead of investing precious resources in low priority tests.

Exit criteria

Effective requirements coverage is a useful exit criterion for the test process. The criteria for releasing the application into production, for example, could be effective coverage of all level 1 through 3 priority requirements.

By measuring the percentage of requirements tested versus the number of uncovered errors, you can extrapolate the number of remaining errors given the number of requirements. But as with all metrics, don't use them to kid yourself. If you have only defined one requirement, 100% coverage is not meaningful!

Test case coverage: Test case coverage measures how many test cases have been executed. Again, be sure to differentiate between how many passed and how many were merely executed. In order to capture this metric, you need an accurate count of how many test cases have been defined, and you need to log each test case that is executed and whether it passed or failed.
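As a minimal sketch of capturing this metric, assuming a simple execution log whose field names are my own, the executed and passed percentages are tallied separately:

```python
# Minimal test-case coverage tally, distinguishing test cases that merely
# executed from those that passed. The log layout is an assumption.
defined = 8  # total test cases defined for the cycle

log = [  # one entry per executed test case
    {"case": "TC01", "result": "pass"},
    {"case": "TC02", "result": "fail"},
    {"case": "TC03", "result": "pass"},
]

executed = len(log)
passed = sum(1 for entry in log if entry["result"] == "pass")

print(f"executed {executed}/{defined}")  # executed 3/8
print(f"passed   {passed}/{defined}")    # passed   2/8
```

Keeping `defined` separate from the log is the point: a cycle can look busy (many entries) while both coverage numbers remain low.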

Forecasting time to release: Test case coverage is useful for tracking progress within a test cycle. By indicating how many of the test cases have been executed in a given amount of time, you can more accurately estimate how much time is needed to test the remainder. Further, by assessing the rate at which errors have been uncovered, you can also make a more educated guess about how many remain to be found. As a simple example, if you have executed 50 percent of your test cases in one week, you can predict that you will need another week to complete the cycle. If you have found ten errors so far, you might estimate that there are that many again waiting to be found.
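The linear extrapolation described above can be written down directly. This is a rough sketch under the stated assumption that execution speed and error discovery stay constant, which real cycles rarely honor exactly:

```python
# Linear forecast of remaining test time and remaining errors, given the
# fraction of test cases executed so far. Purely proportional assumptions.
def forecast(fraction_executed, elapsed_weeks, errors_found):
    remaining = (1 - fraction_executed) / fraction_executed
    remaining_weeks = elapsed_weeks * remaining
    projected_remaining_errors = errors_found * remaining
    return remaining_weeks, projected_remaining_errors

# 50% executed in one week, ten errors found so far:
print(forecast(0.5, 1, 10))  # (1.0, 10.0)
```

With half the cases run in one week and ten errors found, the model predicts one more week and roughly ten more errors, matching the worked example in the text.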

By figuring in the rate at which errors are being corrected (more on this below), you can also gauge how long it will take to turn around fixes and complete another test cycle.

Fix rate: Instead of a percentage, the fix rate measures how long it takes for a defect to be corrected. Defects that escape into production can wreak havoc; for this reason, it is important to know not just how many of them there are, but what their severity is and how they could have been prevented.
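One way to go beyond a simple count, as the next paragraph argues, is a severity-weighted tally. The weights below are illustrative assumptions, not from the handbook:

```python
# Severity-weighted defect tally: a raw count treats a cosmetic defect the
# same as one that makes the system unusable. Weights are assumptions.
WEIGHTS = {1: 10, 2: 5, 3: 3, 4: 2, 5: 1}  # priority 1 = most severe

defect_priorities = [1, 1, 3, 5, 5, 5]  # priority of each post-release defect

raw_count = len(defect_priorities)
weighted = sum(WEIGHTS[p] for p in defect_priorities)
print(raw_count, weighted)  # 6 26
```

Here two priority 1 defects dominate the weighted score even though cosmetic defects outnumber them, which is exactly the distinction a raw count hides.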

As discussed earlier, requirements should be prioritized to determine their criticality. Post-release defects should likewise be rated: a priority 1 defect, one that renders the system unusable, should naturally get more attention than a cosmetic defect. Thus, a simple numerical count is not as meaningful.

Defect prevention: Once a defect is identified and rated, the next question should be when and how it could have been prevented.

Remember that this question is not about assigning blame; it is about continuous process improvement. If you don't learn from your mistakes, you are certain to repeat them. Determining when a defect could have been prevented means identifying the phase of the development cycle in which it should have been discovered. For example, a debilitating performance problem caused by insufficient hardware resources should probably have been uncovered during the planning phase; a missing feature or function should have been raised during the requirements or design stages.

In some cases, the defect might arise from a known requirement, but schedule pressures during the test phase may have prevented the appropriate test cases from being created and executed.

Continuous improvement: Whatever the phase, learn from the situation and institute measures to address it. For example, when pressure arises during a later cycle to release the product without a thorough test phase, the known impact of doing so in a previous cycle can be weighed against the cost of delay. A known risk is easier to evaluate than an unknown one.

Management Reporting

Saving time: Getting the application into the market, or back into development, faster also saves the company time.

In our example above, you are shaving 3.6 weeks off the release time (3 iterations times 48 hours, divided by 40 hours per week). That is almost a month of time savings for each release. If the reason for the release is to correct errors, that additional time can translate into significant productivity.

Higher quality: It is difficult to measure the impact of higher quality: you can't really measure the money you aren't spending. If you do a thorough job of testing and prevent defects from reaching production, you have saved money by not incurring the downtime or overhead the defects would have caused. Unfortunately, few companies know what it costs to correct an error. The simplest way to tell whether you are making progress is whether the post-release defect rate declines.

Better coverage: Even if you can't tell exactly what it is saving the company, simply measure the increasing number of test cases that are executed for each release. If you assume that more tests mean fewer defects in production, this expanded coverage offers value.

Historical trends: Another reason to analyze historical trends is that you can assess the impact of changes in the process. For example, instituting design reviews or code walkthroughs may not show immediate results, but may later be reflected in a reduced defect ratio.
