Monday, April 27, 2009

Web Site Testing

 

ABSTRACT

The instant worldwide audience of any Web Browser Enabled Application -- a WebSite -- makes its quality and reliability crucial factors in its success. Correspondingly, the nature of WebSites and Web Applications poses unique software testing challenges. Webmasters, Web applications developers, and WebSite quality assurance managers need tools and methods that meet their specific needs. Mechanized testing via special purpose Web testing software offers the potential to meet these challenges. Our technical approach, based on existing Web browsers, offers a clear solution to most of the technical needs for assuring WebSite quality.

BACKGROUND

WebSites impose some entirely new challenges in the world of software quality! Within minutes of going live, a Web application can have many thousands more users than a conventional, non-Web application. The immediacy of the Web creates immediate expectations of quality and rapid application delivery, but the technical complexities of a WebSite and variances in the browser make testing and quality control that much more difficult, and in some ways, more subtle, than "conventional" client/server or application testing. Automated testing of WebSites is an opportunity and a challenge.

DEFINING WEBSITE QUALITY & RELIABILITY

As with any complex piece of software, there is no single, all-inclusive quality measure that fully characterizes a WebSite (by which we mean any web browser enabled application).

Dimensions of Quality. There are many dimensions of quality; each measure will pertain to a particular WebSite in varying degrees. Here are some common measures:

  • Timeliness: WebSites change often and rapidly. How much has a WebSite changed since the last upgrade? How do you highlight the parts that have changed?

  • Structural Quality: How well do all of the parts of the WebSite hold together? Are all links inside and outside the WebSite working? Do all of the images work? Are there parts of the WebSite that are not connected?

  • Content: Does the content of critical pages match what is supposed to be there? Do key phrases exist continually in highly-changeable pages? Do critical pages maintain quality content from version to version? What about dynamically generated HTML (DHTML) pages?

  • Accuracy and Consistency: Are today's copies of the pages downloaded the same as yesterday's? Close enough? Is the data presented to the user accurate enough? How do you know?

  • Response Time and Latency: Does the WebSite server respond to a browser request within certain performance parameters? In an e-commerce context, how is the end-to-end response time after a SUBMIT? Are there parts of a site that are so slow the user discontinues working?

  • Performance: Is the Browser --> Web --> WebSite --> Web --> Browser connection quick enough? How does the performance vary by time of day, by load and usage? Is performance adequate for e-commerce applications? Taking 10 minutes -- or maybe even only 1 minute -- to respond to an e-commerce purchase may be unacceptable! (A minimal timing sketch follows this list.)
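
For a rough sense of what such a response-time check can look like, here is a minimal sketch in Python, assuming the third-party requests library; the URL and the 10-second threshold are illustrative placeholders, not part of the original paper:

    import time
    import requests

    URL = "http://www.example.com/"  # hypothetical page under test

    start = time.perf_counter()
    response = requests.get(URL, timeout=60)
    elapsed = time.perf_counter() - start

    print(f"HTTP {response.status_code} in {elapsed:.3f} s "
          f"({len(response.content)} bytes)")
    if elapsed > 10.0:  # flag responses slower than the agreed threshold
        print("WARNING: response time exceeds threshold")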

Impact of Quality. Quality remains in the mind of the WebSite user. A poor quality WebSite, one with many broken pages and faulty images, with Cgi-Bin error messages, etc., may cost a lot in poor customer relations, lost corporate image, and even in lost sales revenue. Very complex, disorganized WebSites can sometimes overload the user.

The combination of WebSite complexity and low quality is potentially lethal to Company goals. Unhappy users will quickly depart for a different site; and, they probably won't leave with a good impression.

WEBSITE ARCHITECTURAL FACTORS

A WebSite can be quite complex, and that complexity -- which is what provides the power, of course -- can be a real impediment in assuring WebSite Quality. Add in the possibilities of multiple WebSite page authors, very-rapid updates and changes, and the problem compounds.

Here are the major pieces of WebSites as seen from a Quality perspective.

Browser. The browser is the viewer of a WebSite and there are so many different browsers and browser options that a well-done WebSite is probably designed to look good on as many browsers as possible. This imposes a kind of de facto standard: the WebSite must use only those constructs that work with the majority of browsers. But this still leaves room for a lot of creativity, and a range of technical difficulties. And, multiple browsers' renderings and responses to a WebSite have to be checked.

Display Technologies. What you see in your browser is actually composed from many sources:

  • HTML. There are various versions of HTML supported, and the WebSite ought to be built in a version of HTML that is compatible. This should be checkable.

  • Java, JavaScript, ActiveX. Obviously JavaScript and Java applets will be part of any serious WebSite, so the quality process must be able to support these. On the Windows side, ActiveX controls have to be handled well.

  • Cgi-Bin Scripts. This is a link from a user action of some kind (typically, from a FORM passage or otherwise directly from the HTML, and possibly also from within a Java applet). All of the different types of Cgi-Bin scripts (perl, awk, shell-scripts, etc.) need to be handled, and tests need to check "end to end" operation. This kind of a "loop" check is crucial for e-commerce situations; a sketch of such a check appears after this list.

  • Database Access. In e-commerce applications you are either building data up or retrieving data from a database. How does that interaction perform in real-world use? If you give it "correct" or "specified" input, does it produce the result you expect?

    Some access to information from the database may be appropriate, depending on the application, but this is typically found by other means.
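
As a sketch of the kind of "end to end" loop check described above, the fragment below submits a form and verifies the CGI response, assuming Python's requests library; the URL, field names, and expected confirmation text are hypothetical:

    import requests

    FORM_URL = "http://www.example.com/cgi-bin/order"   # hypothetical endpoint
    payload = {"name": "Test User", "number": "9999"}   # hypothetical form fields

    resp = requests.post(FORM_URL, data=payload, timeout=60)
    assert resp.status_code == 200, f"server returned {resp.status_code}"
    # The loop closes only if the server echoes the expected confirmation:
    assert "9999" in resp.text, "confirmation code missing from response"
    print("end-to-end loop check passed")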

Navigation. Users move to and from pages, click on links, click on images (thumbnails), etc. Navigation in a WebSite is often complex and has to be quick and error free.

Object Mode. The display you see changes dynamically; the only constants are the "objects" that make up the display. These aren't real objects in the OO sense, but they have to be treated that way. So the quality test tools have to be able to handle URL links, forms, tables, anchors, and buttons of all types in an "object like" manner, so that validations are independent of representation.

Server Response. How fast the WebSite host responds influences whether a user (i.e. someone on the browser) moves on or gives up. Obviously, InterNet loading affects this too, but this factor is often outside the Webmaster's control, at least in terms of how the WebSite is written; it is more an issue of server hardware capacity and throughput. Yet, if a WebSite becomes very popular -- this can happen overnight! -- loading and tuning are real issues that often are imposed -- perhaps not fairly -- on the WebMaster.

Interaction & Feedback. For passive, content-only sites the only real quality issue is availability. For a WebSite that interacts with the user, the big factor is how fast and how reliable that interaction is.

Concurrent Users. Do multiple users interact on a WebSite? Can they get in each others' way? While WebSites often resemble client/server structures, with multiple users at multiple locations a WebSite can be much different, and much more complex, than a conventional client/server application.

WEBSITE TEST AUTOMATION REQUIREMENTS

Assuring WebSite quality requires conducting sets of tests, automatically and repeatably, that demonstrate required properties and behaviors. Here are some required elements of tools that aim to do this.

Test Sessions. Typical elements of tests involve these characteristics:

    • Browser Independent. Tests should be realistic, but not be dependent on a particular browser, whose biases and characteristics might mask a WebSite's problems.

    • No Buffering, Caching. Local caching and buffering -- often a way to improve apparent performance -- should be disabled so that timed experiments are a true measure of the Browser-Web-WebSite-Web-Browser response time.

    • Fonts and Preferences. Most browsers support a wide range of fonts and presentation preferences, and these should not affect how quality on a WebSite is assessed or assured.

    • Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All should be treatable in object mode, i.e. independent of the fonts and preferences.

      Object mode operation is essential to protect an investment in test suites and to assure that test suites continue operating when WebSite pages experience change. In other words, when buttons and form entries change location on the screen -- as they often do -- the tests should still work.

      However, when a button or other object is deleted, that error should be sensed! Adding objects to a page clearly implies re-making the test.

    • Tables and Forms. Even when the layout of a table or form varies in the browser's view, tests of it should continue independent of these factors.

    • Frames. Windows with multiple frames ought to be processed simply, i.e. as if they were multiple single-page frames.

Test Context. Tests need to operate from the browser level for two reasons: (1) this is where users see a WebSite, so tests based on browser operation are the most realistic; and (2) tests based in browsers can be run locally or across the Web equally well. Local execution is fine for quality control, but performance measurement requires running across the Web, so that response times include the Web-variable delays reflective of real-world usage.
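
To illustrate the "no caching" and timing requirements together, here is a minimal sketch of a timed fetch with client-side caching suppressed via standard HTTP headers, again assuming Python's requests library; the URL is a placeholder:

    import time
    import requests

    URL = "http://www.example.com/"
    # Standard headers that ask intermediaries not to serve cached copies:
    no_cache = {"Cache-Control": "no-cache", "Pragma": "no-cache"}

    with requests.Session() as session:
        start = time.perf_counter()
        resp = session.get(URL, headers=no_cache, timeout=60)
        elapsed = time.perf_counter() - start

    # The measurement now reflects the Browser-Web-WebSite-Web-Browser trip.
    print(f"fetched {len(resp.content)} bytes in {elapsed * 1000:.0f} ms")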

WEBSITE DYNAMIC VALIDATION

Confirming the validity of what is tested is the key to assuring WebSite quality -- and the most difficult challenge of all. Here are four key areas where test automation will have a significant impact.

Operational Testing. Individual test steps may involve a variety of checks on individual pages in the WebSite:

    • Page Consistency. Is the entire page identical with a prior version? Are key parts of the text the same or different?

    • Table, Form Consistency. Are all of the parts of a table or form present? Correctly laid out? Can you confirm that selected texts are in the "right place"?

    • Page Relationships. Are all of the links on a page the same as they were before? Are there new or missing links? Are there any broken links? (A sketch of this check follows the list.)

    • Performance Consistency, Response Times. Is the response time for a user action the same as it was (within a range)?
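
A sketch of the page-relationships check referenced above, using only Python's standard library; the URL and the baseline file links_baseline.json are hypothetical:

    import json
    import urllib.request
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collects the href targets of all anchor tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = set()
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.add(value)

    html = urllib.request.urlopen("http://www.example.com/").read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)

    with open("links_baseline.json") as f:        # links saved on a prior run
        baseline = set(json.load(f))

    print("new links:    ", collector.links - baseline)
    print("missing links:", baseline - collector.links)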

Test Suites. Typically you may have dozens or hundreds (or thousands?) of tests, and you may wish to run tests in a variety of modes:

    • Unattended Testing. Individual and/or groups of tests should be executable singly or in parallel from one or many workstations.

    • Background Testing. Tests should be executable from multiple browsers running "in the background" on an appropriately equipped workstation.

    • Distributed Testing. Independent parts of a test suite should be executable from separate workstations without conflict.

    • Performance Testing. Timing in performance tests should be resolved to the millisecond; this gives a strong basis for averaging data.

    • Random Testing. There should be a capability for randomizing certain parts of tests.

    • Error Recovery. While browser failure due to user inputs is rare, test suites should have the capability of resynchronizing after an error.

Content Validation. Apart from how a WebSite responds dynamically, the content should be checkable, either exactly or approximately. Here are some ways that content validation could be accomplished:

    • Structural. All of the links and anchors should match with prior "baseline" data. Images should be characterizable by byte-count and/or file type or other file properties.

    • Checkpoints, Exact Reproduction. One or more text elements -- or even all text elements -- in a page should be markable as "required to match".

    • Gross Statistics. Page statistics such as line, word, and byte counts, and checksums. (A sketch follows this list.)

    • Selected Images/Fragments. The tester should have the option to rubber band sections of an image and require that the selection image match later during a subsequent rendition of it. This ought to be possible for several images or image fragments.
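
The gross-statistics idea sketched above might be scripted as follows, in Python's standard library; the URL and the baseline figures are placeholders, not measured values:

    import hashlib
    import urllib.request

    body = urllib.request.urlopen("http://www.example.com/").read()
    text = body.decode("utf-8", "replace")

    stats = {
        "lines": text.count("\n"),
        "words": len(text.split()),
        "bytes": len(body),
        "checksum": hashlib.md5(body).hexdigest(),
    }
    # Placeholder baseline values recorded on a known-good run:
    baseline = {"lines": 120, "words": 950, "bytes": 14312,
                "checksum": "0123456789abcdef0123456789abcdef"}

    for key, value in stats.items():
        status = "OK" if value == baseline[key] else "CHANGED"
        print(f"{key:9} {value}  [{status}]")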

Load Simulation. Load analysis needs to proceed by having a special purpose browser act like a human user. This assures that the performance checking experiment indicates true performance -- not performance under simulated but unrealistic conditions. There are many "http torture machines" that generate large numbers of http requests, but that is not necessarily the way real-world users generate requests.

Sessions should be recorded live or edited from live recordings to assure faithful timing. There should be adjustable speed-up and slow-down ratios and intervals.

Load generation should proceed from:

    • Single Browser Sessions. One session played on a browser with one or multiple responses. Timing data should be put in a file for separate analysis.

    • Multiple Independent Browser Sessions. Multiple sessions played on multiple browsers with one or multiple responses. Timing data should be put in a file for separate analysis. Multivariate statistical methods may be needed for a complex but general performance model. (A threaded sketch of multiple independent sessions follows.)
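
Here is the threaded sketch referenced above: several independent simulated sessions, each replaying a recorded (url, think-time) script and logging its timings to its own file for separate analysis. It assumes Python's requests library; the URLs and delays are placeholders:

    import threading
    import time
    import requests

    # A recorded session: each entry is (url, think_time_in_seconds).
    SESSION = [("http://www.example.com/", 2.0),
               ("http://www.example.com/page2.html", 3.5)]

    def play_session(session_id: int) -> None:
        with requests.Session() as s, \
                open(f"timing_{session_id}.log", "w") as log:
            for url, think_time in SESSION:
                start = time.perf_counter()
                s.get(url, timeout=60)
                log.write(f"{url}\t{time.perf_counter() - start:.3f}\n")
                time.sleep(think_time)  # reproduce the user's recorded delay

    threads = [threading.Thread(target=play_session, args=(i,))
               for i in range(5)]       # five concurrent simulated users
    for t in threads:
        t.start()
    for t in threads:
        t.join()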

TESTING SYSTEM CHARACTERISTICS

Considering all of these disparate requirements, it seems evident that a single product that supports all of these goals will not be possible. However, there is one common theme: the majority of the work seems to be based on "...what does it [the WebSite] look like from the point of view of the user?" That is, from the point of view of someone using a browser to look at the WebSite.

This observation led our group to conclude that it would be worthwhile to build certain test features into a "test enabled web browser", which we called eValid.

Browser Based Solution. With this as a starting point, we determined that the browser-based solution had to meet these additional requirements:

    • Commonly Available Technology Base. The browser had to be based on a well known base (there appear to be only two or three choices).

    • Some Browser Features Must Be Deletable. At the same time, certain requirements imposed limitations on what was to be built. For example, if we were going to have accurate timing data we had to be able to disable caching because otherwise we are measuring response times within the client machine rather than "across the web."

    • Extensibility Assured. To permit meaningful experiments, the product had to be extensible enough to permit timings, static analysis, and other information to be extracted.

Taking these requirements into account, and after investigating W3C's Amaya Browser and the open-architecture Mozilla/Netscape browser, we chose the IE Browser as the initial base for our implementation of eValid.

User Interface. How the user interacts with the product is very important, in part because in some cases the user will be someone very familiar with WebSite browsing and not necessarily a testing expert. The design we implemented takes this reality into account.

    • Pull Down Menus. In keeping with the way browsers are built, we put all the main controls for eValid on a set of Pull Down menus, as shown in the accompanying screen shot.


      Figure 1. eValid Menu Functions.

    • "C" Scripting. We use interpreted "C" language as the control language because the syntax is well known, the language is fully expressive of most of the needed logic, and because it interfaces well with other products.

    • Files Interface. We implemented a set of dialogs to capture critical information and made each of them recordable in a text file. The dialogs are associated with files that are kept in parallel with each browser invocation:

      • Keysave File. This is the file that is being created -- the file is shown line by line during script recording as the user moves around the candidate WebSite.

      • Timing File. Results of timings are shown and saved in this file.

      • Messages File. Any error messages encountered are delivered to this file. For example, if a file can't be downloaded within the user-specified maximum time an error message is issued and the playback continues. (This helps preserve the utility of tests that are partially unsuccessful.)

      • Event File. This file contains a complete log of recording and playback activities that is useful primarily to debug a test recording session or to better understand what actually went on during playback.

Operational Features. Based on prior experience, the user interface for eValid had to provide several kinds of capabilities already known to be critical for a testing system. Many of these are critically important for automated testing because they assure an optimal combination of test script reliability and robustness.

    • Capture/Replay. We had to be able both to capture a user's actual behavior online and to create scripts by hand.

    • Object Mode. The recording and playback had to support pure Object Mode operation. This was achieved by using internal information structures in a way that lets the scripts (either recorded or constructed) refer to objects that are meaningful in the browser context.

      A side benefit of this was that playbacks were reliable, independent of the rendering choices made by the user. A script plays back identically, independent of browser window size, type-font choices, color mappings, etc.

    • [Adjustable] True-Time Mode. We assured realistic behavior of the product by providing for recording of user-delays and for efficient handling of delays by incorporating a continuously variable "playback delay multiplier" that can be set by the user.

    • Playback Synchronization. For tests to be robust -- that is, to reliably indicate that a feature of a WebSite is working correctly -- there must be a built-in mode that assures synchronization so that Web-dependent delays don't interfere with proper WebSite checking. eValid does this using a proprietary playback synchronization method that waits for download completion (except if a specified maximum wait time is exceeded).

    • Timer Capability. To make accurate on-line performance checks we built in a 1 millisecond resolution timer that could be read and reset from the playback script.

    • Validate Selected Text Capability. A key need for WebSite content checking, as described above, is the ability to capture an element of text from an image so that it can be compared with a baseline value. This feature was implemented by digging into the browser data structures in a novel way (see below for an illustration). The user highlights a selected passage of the web page and clicks on the "Validate Selected Text" menu item.


      Figure 2. Illustration of eValid Validate Selected Text Feature.

      What results is a recorded line that includes the ASCII text of what was selected, plus some other information that locates the text fragment in the page. During playback if the same text is not found at the same location an error message is generated.

    • Multiple-playback. We confirmed that multiple playback was possible by running separate copies of the browser in parallel. This solved the problem of how to multiply a single test session into a number of test sessions to simulate multiple users each acting realistically.

Test Wizards. In most cases manual scripting is too laborious to use, and making a recording to achieve a certain result is equally unacceptable. We built in several test wizards that mechanize some of the most common script-writing chores.

    • Link Wizard. This wizard creates a script, based on the current Web page, that visits every link in the page. Scripts created this way are the basis for "link checking" test suites that confirm the presence (but not necessarily the content) of URLs. (A rough open-source analogue is sketched after this list.)

      Here is a sample of the output of this wizard, applied to our standard sample test page example1.html:

      # Static Simple Link Test Wizard starting...
      GotoLink 0 "http://www.soft.com/eValid/Products/example1/example1.html#bo" \
          "ttom" ""
      GotoLink 0 "http://www.soft.com/eValid/Products/example1/example1.html#ta" \
          "rget" ""
      GotoLink 0 "http://www.soft.com/eValid/Products/example1/example1.html#no" \
          "tdefined" ""
      GotoLink 0 "http://www.soft.com/eValid/Products/example1/example1.html" ""
      GotoLink 0 "http://www.soft.com/Products/Web/CAPBAK/example1/example1.no" \
          "toutside.html" ""
      GotoLink 0 "http://www.soft.com/eValid/Products/example1/example1.html#to" \
          "p" ""
      GotoLink 0 "http://www.soft.com/eValid/Products/example1/example1.html" ""
      # Static Simple Link Test Wizard ended.
      Figure 3. Sample of Output of Link Test Wizard.

    • FORM Wizard. For E-Commerce testing involving FORMS, we included a FORM Wizard that generates a script that:
      • Initializes the form.
      • Presses each pushbutton by name.
      • Presses each radio button by name.
      • Types a pre-set script fragment into each text field.
      • Presses SUBMIT.

      Here is a sample of the output of this wizard, applied to our standard test page: example1.html:

      # Form Test Wizard starting...
      InputValue 0 69 "SELECT-ONE" "list" "Top of Page?" "0" ""
      InputValue 0 90 "RADIO" "check" "buying-now" "TRUE" ""
      InputValue 0 92 "RADIO" "check" "next-month" "TRUE" ""
      InputValue 0 94 "RADIO" "check" "just-looking" "TRUE" ""
      InputValue 0 96 "RADIO" "check" "no-interest" "TRUE" ""
      InputValue 0 103 "CHECKBOX" "concerned" "Yes" "TRUE" ""
      InputValue 0 105 "CHECKBOX" "info" "Yes" "TRUE" ""
      InputValue 0 107 "CHECKBOX" "evaluate" "Yes" "TRUE" ""
      InputValue 0 109 "CHECKBOX" "send" "Yes" "TRUE" ""
      InputValue 0 121 "SELECT-MULT" "product" "All eValid Products||Site Anal" \
          "ysis||Regression Testing||Advanced Application Monitoring||Performance, " \
          "Load/Testing" "0,1,2,3,4" ""
      InputValue 0 131 "SELECT-ONE" "immediacy" "Never" "2" ""
      InputValue 0 141 "TEXT" "name" "eValid" "" ""
      InputValue 0 143 "TEXT" "phone" "eValid" "" ""
      InputValue 0 145 "TEXT" "email" "eValid" "" ""
      InputValue 0 147 "TEXT" "number" "eValid" "" ""
      InputValue 0 150 "TEXTAREA" "comment" "eValid\\\\eValid" "" ""
      SubmitClick 0 156 "submit" "SUBMIT DATA" ""
      GoBackTo 0 1 "http://www.soft.com/eValid/Products/example1/example1.html" ""
      # Form Test Wizard ended.
      Figure 4. Sample of Output of FORM Test Wizard.

      The idea is that this script can be processed automatically to produce the result of varying combinations of pushing buttons. As is clear, the wizard will have pushed all buttons, but only the last-applied one in a set of radio buttons will be left in the TRUE state.

    • Text Wizard. For detailed content validation, this wizard yields a script that confirms the entire text of the candidate page. This script is used to confirm that the content of a page has not changed (in effect, the entire text content of the subject page is recorded in the script).
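
As a rough open-source analogue of the Link Wizard (a sketch, not eValid's actual implementation), the following Python standard-library fragment reads a page and emits one GotoLink-style line per link found:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    PAGE = "http://www.soft.com/eValid/Products/example1/example1.html"

    class LinkLister(HTMLParser):
        """Prints a script line for every anchor on the page."""
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        print(f'GotoLink "{urljoin(PAGE, value)}"')

    html = urllib.request.urlopen(PAGE).read().decode("utf-8", "replace")
    print("# link test script starting...")
    LinkLister().feed(html)
    print("# link test script ended.")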

EXAMPLE USES

Early applications of the eValid system have been very effective in producing experiments and collecting data that are very useful for WebSite checking. While we expect eValid to be the main engine for a range of WebSite quality control and testing activities, we've chosen two of the most typical -- and most important -- applications to illustrate how eValid can be used.

Performance Testing Illustration. To illustrate how eValid measures timing, we have built a set of Public Portal Performance Profile TestSuites that have these features:

    • Top 20 Web Portals. We selected 20 commonly available WebSites on which to measure response times. These are called the "P4" suites.

    • User Recording. We recorded one user's excursion through these suites and saved that keysave file (playback script).

    • Modem Playback. We played back the scripts over a 56 kbps modem so that we had a realistic measure of how long it would take to make this very full visit to our selected 20 portals.

    • P4 Timings. We measured the elapsed time it took for this script to execute at various times during the day. The results from one typical day's executions showed a playback time range of 457 secs to 758 secs, i.e. from -19% to +36% of the average playback time of roughly 560 secs.

    • Second Layer Added. We added to the base script a set of links to each page referenced on the same set of 20 WebSites. This yielded the P4+ suite, which visits some 1573 separate pages, or around 78 per WebSite. The test suite takes around 20,764 secs (~5 hrs 46 mins) to execute, or an average of 1038 secs per WebSite.

    • Lessons Learned. It is relatively easy to configure a sophisticated test script that visits many links in a realistic way, and provides realistic user-perceived timing data.

E-Commerce Illustration. This example shows a typical E-Commerce product ordering situation. The script automatically places an order and uses the Validate Selected Text sequence to confirm that the order was processed correctly. In a real-world example this is the equivalent of (i) selecting an item for the shopping basket, (ii) ordering it, and (iii) examining the confirmation page's order code to assure that the transaction was successful. (The final validation step of confirming that the ordered item was actually delivered to a specific address is also part of what eValid can do -- see below.)

    • Example Form. We base this script on a sample page shown below. This page is intended to have a form that shows an ordering process. On the page the "Serial Number" is intended as a model of a credit card number.


      Figure 5. Sample Input Form For E-Commerce Example.

    • Type-In with Code Number. Starting with the FORM Wizard generated script, we modify it to include only the parts we want, and include the code number 8889999.

    • Response File. Once the playback presses the SUBMIT button, the WebServer response page shows up, as shown below.


      Figure 6. Response Page for E-Commerce Example.

    • Error Message Generated. If the Cgi-Bin scripts make a mistake, this will be caught during playback because the expected text 8889999 will not be present.

    • Completed TestScript. Here is the complete testscript for eValid that illustrates this sequence of activities.

       ProjectID "Project" GroupID "Group" TestID "Test" LogID "AUTO"   ScreenSize 1280 960 FontSize 0 InitLink "http://www.soft.com/eValid/Products/example1/example1.html" Wait 3838 InputValue 0 141 "TEXT" "name" "Mr. Software" "" "" Wait 3847 InputValue 0 143 "TEXT" "phone" "415-861-2800" "" "" Wait 5168 InputValue 0 145 "TEXT" "email" "info@soft.com" "" "" Wait 2123 InputValue 0 147 "TEXT" "number" "9999" "" "" Wait 3265 InputValue 0 150 "TEXTAREA" "comment" " Testing" "" "" SubmitClick 0 156 "submit" "SUBMIT DATA" "" NAV Wait 5135 ValidateSelectedText 0 12 278 "88899999" "" # End of script.    
      Figure 7. Script for E-Commerce Test Loop.

    • Lessons Learned. This example illustrates how it is possible to automatically validate a WebSite with eValid by detecting when an artificial order is mis-processed.

Wednesday, April 22, 2009

Software Testing

Welcome To The World of Software Testing

What Is Software Testing?

The British Standards Institution, in its standard BS7925-1, defines testing as "the process of exercising software to verify that it satisfies specified requirements and to detect faults; the measurement of software quality." Where the actual behavior of the system differs from the expected behavior, a failure is considered to have occurred.

A failure is the result of a fault. A fault is an error in the program or its specification that may or may not result in a failure; a failure is the manifestation of a fault.

The principal aim of testing is to detect faults so that they can be removed before the product is made available to customers. Faults are introduced into software for a variety of reasons, from misinterpreted requirements through to simple typing mistakes. It is the role of software testing and quality assurance to reduce those faults by identifying the failures they cause.

 

Testing is a process of executing a program and comparing the results to an agreed-upon standard: the requirements. If the results match the requirements, then the software has passed testing.

There are several methods of testing: exploratory, scripted, ad-hoc, regression, and many more variations.

Testing involves operating a system or application under controlled conditions and evaluating the results.

Testing is a process of trying out a piece of software, in a controlled manner, with both valid and invalid data.

Focus on trying to find bugs

The goal of software testing should always be to find as many faults as possible (and to find them early). If you set out with the goal of proving that your software works, then you will prove only that it works; you will not prove that it doesn't break.

For example, if you try to show that it works, you'll use a valid postcode to ensure it returns a valid response; a fault-finding tester will also try invalid postcodes to see where it breaks.
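
Here is a minimal Python illustration of that mindset; validate_postcode is a hypothetical, deliberately simple function standing in for the code under test:

    import re

    def validate_postcode(code: str) -> bool:
        """Toy validator: accepts UK-style postcodes such as 'SW1A 1AA'."""
        return bool(re.fullmatch(r"[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}", code))

    # A "prove it works" tester stops here:
    assert validate_postcode("SW1A 1AA")

    # A fault-finding tester also probes the ways it might break:
    for bad in ["", "12345", "SW1A-1AA", "sw1a 1aa", "SW1A  1AA"]:
        assert not validate_postcode(bad), f"accepted invalid postcode: {bad!r}"
    print("all postcode checks passed")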

IEEE Standard Definitions of Software Testing

IEEE Standard 610 (1990) defines a test case as "a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement."

IEEE Std 829-1983 defines test case documentation as "documentation specifying inputs, predicted results, and a set of execution conditions for a test item."

 

Purpose of testing

The testing activity in information system development can be defined as follows:

Testing is a process of planning, preparing, executing and analyzing, aimed at establishing the characteristics of an information system, and demonstrating the difference between the actual status and the required status.

Test planning and preparation activities emphasize the fact that testing should not be regarded as a process that can be started only when the object to be tested is delivered. A test process requires accurate planning and preparation phases before any measurement actions can be implemented.

Testing reduces the level of uncertainty about the quality of a system. The level of testing effort depends on the risks involved in bringing the system into operation, and on the decision of how much time and money is to be spent on reducing the level of uncertainty.

Thursday, April 16, 2009

Software Installation/Uninstallation Testing


Have you performed software installation testing? How was the experience? Installation testing (implementation testing) is quite an interesting part of the software testing life cycle.

Installation testing is like introducing a guest into your home. The new guest should be properly introduced to all the family members so that he feels comfortable. Installing new software is quite like that example.

If your installation succeeds on the new system, the customer will definitely be happy, but what if things go completely the other way? If installation fails, our program will not work on that system; worse than that, it can leave the user's system badly damaged. The user might have to reinstall the entire operating system.

In that case, will you make any impression on the user? Definitely not! Your first chance to make a loyal customer is ruined by incomplete installation testing. What do you need to do to make a good first impression? Test the installer thoroughly, with a combination of manual and automated processes, on different machines with different configurations. The major concern in installation testing is time! It requires a lot of time to execute even a single test case. If you are going to test a big application installer, think about the time required to perform that many test cases on different configurations.

We will see different methods for performing manual installer testing and some basic guidelines for automating the installation process.

To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format this HDD with the most common or default file system, install the most common operating system (Windows) on it, and install the basic required components. Each time, create an image of this base HDD, and build the other configurations on top of the base drive. Make one set for each configuration, i.e. each combination of operating system and file format, to be used for further testing.

How can we use automation in this process? Dedicate some systems to creating base images (use software like Norton Ghost to create exact images of an operating system quickly). This will save you tremendous time on each test case. For example, if installing one OS with the basic configuration takes, say, 1 hour, then each test case on a fresh OS will require 1+ hours. But restoring an image of the OS will hardly take 5 to 10 minutes, so you will save approximately 40 to 50 minutes per test case!

You can use one operating system for multiple attempts at installing the installer, each time uninstalling the application and restoring the base state for the next test case. Be careful here: your uninstallation program must itself have been tested beforehand and be working correctly.

Installation testing tips with some broad test cases:

1) Use flow diagrams to perform installation testing. Flow diagrams simplify our task. See the example flow diagram for a basic installation test case.

Add some more test cases to this basic flow chart, such as: if our application is not the first release, try adding different logical installation paths.

2) If you previously installed a compact, basic version of the application, then in the next test case install the full application version on the same path used for the compact version.

3) If you are using a flow diagram to test the different files written to disk during installation, then use the same flow diagram in reverse order to test the uninstallation of all those installed files.

4) Use flow diagrams to automate the testing effort. It is very easy to convert diagrams into automated scripts.

5) Test the installer scripts used for checking the required disk space. If the installer reports 1 MB of required disk space, make sure exactly 1 MB is used, or determine whether more disk space is utilized during installation. If so, flag this as an error.

6) Test the disk space requirement on different file system formats. FAT16, for example, will require more space than the more efficient NTFS or FAT32 file systems.

7) If possible, set up a dedicated system used only for creating disk images. As said above, this will save you testing time.

8) Use a distributed testing environment to carry out installation testing. A distributed environment simply saves time, and you can effectively manage all the different test cases from a single machine. A good approach is to create a master machine that drives different slave machines on the network; you can start installation simultaneously on different machines from the master system.

9) Try to automate the routine that checks the number of files written to disk. You can maintain the list of files to be written to disk in an Excel sheet and give this list as input to an automated script that checks each and every path to verify correct installation. A sketch of such a check appears below.
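
A minimal sketch of that check in Python; the listing file expected_files.txt (one path per line, exported from the spreadsheet) is a hypothetical name:

    import os

    missing = []
    with open("expected_files.txt") as listing:
        for line in listing:
            path = line.strip()
            if path and not os.path.exists(path):
                missing.append(path)

    if missing:
        print("installation incomplete, missing files:")
        for path in missing:
            print("  ", path)
    else:
        print("all expected files are present")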

10) Use freely available tools to verify registry changes after a successful installation. Verify the registry changes against your expected change list.

11) Forcibly break the installation process partway through. Observe the behavior of the system and check whether it recovers to its original state without any issues. You can test this "break of installation" at every installation step.

12) Disk space checking: This is a crucial check in the installation-testing scenario. You can choose different manual and automated methods to do this checking. In the manual approach, check the free disk space available on the drive before installation and the disk space reported by the installer script, to verify whether the installer is calculating and reporting disk space accurately. Check the disk space again after installation to verify accurate usage of installation disk space. Run various combinations of disk space availability, using tools that automatically fill the disk during installation, and check the system's behavior under low-disk-space conditions. A sketch of the before/after comparison follows.
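
Here is the before/after disk-space comparison sketched in Python; the drive letter and the installer's reported figure are placeholders:

    import shutil

    DRIVE = "C:\\"                    # drive under test (use "/" on Linux)
    REPORTED_BYTES = 1 * 1024 * 1024  # installer claimed 1 MB

    free_before = shutil.disk_usage(DRIVE).free
    input("run the installer, then press Enter...")
    free_after = shutil.disk_usage(DRIVE).free

    used = free_before - free_after
    print(f"installer reported {REPORTED_BYTES} bytes, "
          f"actually used {used} bytes")
    if used > REPORTED_BYTES:
        print("FLAG: installation used more space than reported")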

13) As you check installation, test uninstallation as well. Before each new installation iteration, make sure that all the files written to disk are removed after uninstallation. Sometimes the uninstallation routine removes files only from the most recent upgraded installation, keeping the old version's files untouched. Also check the reboot option after uninstallation, both manually and by forcing the system not to reboot.

I have addressed many areas of the manual as well as automated installation testing procedure. Still, there are many areas you need to focus on depending on the complexity of your software under installation. These unaddressed but important tasks include installation over the network, online installation, patch installation, database checking on installation, shared DLL installation and uninstallation, etc.

I hope this article will be a basic guideline for those having trouble getting started with software installation testing, whether manually or with automation.

Website Cookie Testing: Test cases for testing web application cookies

We will first focus on what exactly cookies are and how they work. It will be easy to understand the test cases for testing cookies once you have a clear understanding of how cookies work, how cookies are stored on the hard drive, and how we can edit cookie settings.

What is Cookie?
Cookie is small information stored in text file on user's hard drive by web server. This information is later used by web browser to retrieve information from that machine. Generally cookie contains personalized user data or information that is used to communicate between different web pages.

Why Cookies are used?
Cookies are nothing but the user's identity and used to track where the user navigated throughout the web site pages. The communication between web browser and web server is stateless.

For example if you are accessing domain http://www.example.com/1.html then web browser will simply query to example.com web server for the page 1.html. Next time if you type page as http://www.example.com/2.html then new request is send to example.com web server for sending 2.html page and web server don't know anything about to whom the previous page 1.html served.

What if you want the previous history of this user communication with the web server? You need to maintain the user state and interaction between web browser and web server somewhere. This is where cookie comes into picture. Cookies serve the purpose of maintaining the user interactions with web server.
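
The sketch below shows the difference cookies make, assuming Python's requests library: a Session object stores the cookies from each response and sends them back on subsequent requests, so the server can tie 1.html and 2.html to the same user, while two bare requests share nothing:

    import requests

    with requests.Session() as browser:                 # acts like one browser
        browser.get("http://www.example.com/1.html")    # server may Set-Cookie
        print("cookies after page 1:", browser.cookies.get_dict())
        browser.get("http://www.example.com/2.html")    # cookies sent back here

    # Two bare requests, by contrast, carry no shared state at all:
    requests.get("http://www.example.com/1.html")
    requests.get("http://www.example.com/2.html")       # server sees a stranger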

How do cookies work?
The HTTP protocol, used to exchange information files on the web, is what cookies ride on. HTTP itself is stateless: it keeps no record of previously accessed web pages. Cookies are the mechanism, layered on top of HTTP, that keeps a history of the interactions between web browser and web server, so that user state can be maintained across requests.

Whenever a user visits a site or page that uses a cookie, a small piece of code inside that HTML page (generally a call to a script in a language such as JavaScript, PHP, or Perl) writes a text file, called a cookie, on the user's machine.
Here is the general form of the directive used to write a cookie; it can be emitted by the server or by page code:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the second visit of the same user on that domain. The expiration time is set while writing the cookie; this time is decided by the application that is going to use the cookie.

Generally two types of cookies are written on the user's machine; a sketch of how each type is emitted follows this list.

1) Session cookies: This cookie is active as long as the browser that invoked the cookie is open. When we close the browser, the session cookie gets deleted. Sometimes a session timeout of, say, 20 minutes is set to expire the cookie.
2) Persistent cookies: Cookies that are written permanently on the user's machine and last for months or years.
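
As a sketch of how a server-side script might emit each type, using Python's standard http.cookies module (the cookie names and values are placeholders):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()

    # Session cookie: no "expires", so it dies when the browser closes.
    cookie["session_id"] = "abc123"
    cookie["session_id"]["path"] = "/"

    # Persistent cookie: an explicit expiry keeps it on disk long-term.
    cookie["user_pref"] = "compact-view"
    cookie["user_pref"]["path"] = "/"
    cookie["user_pref"]["expires"] = "Thu, 31-Dec-2020 23:59:59 GMT"

    print(cookie.output())   # emits one Set-Cookie header per cookie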

Where are cookies stored?
When any web page application writes a cookie, it gets saved in a text file on the user's hard disk drive. The path where cookies get stored depends on the browser; different browsers store cookies in different paths. E.g. Internet Explorer stores cookies under the path "C:\Documents and Settings\Default User\Cookies".
Here "Default User" is replaced by the current user you are logged in as, such as "Administrator", or a user name like "Vijay".
The cookie path can easily be found by navigating through the browser options. In the Mozilla Firefox browser you can even see the cookies in the browser options themselves: open the browser, click Tools->Options->Privacy, and then the "Show cookies" button.

How are cookies stored?
Let's take the example of a cookie written by rediff.com in the Mozilla Firefox browser:
When you open the rediff.com page or log in to your rediffmail account in Firefox, a cookie is written to your hard disk. To view this cookie, simply click the "Show cookies" button mentioned above, then click on the rediff.com site in the cookie list. You can see the different cookies written by the rediff domain, each with a different name.

Site: Rediff.com Cookie name: RMID
Name: RMID (Name of the cookie)
Content: 1d11c8ec44bf49e0… (Encrypted content)
Domain: .rediff.com
Path: / (Any path after the domain name)
Send For: Any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM

Applications where cookies can be used:

1) To implement a shopping cart:
Cookies are used for maintaining an online ordering system. Cookies remember what the user wants to buy. What if the user adds some products to the shopping cart and then, for some reason, decides not to buy those products this time and closes the browser window? The next time the same user visits the purchase page, he can see all the products he added to the shopping cart on his last visit.

2) Personalized sites:
When users visit certain pages, they are asked which pages they don't want to visit or have displayed. The user's options get stored in a cookie, and while the user is online, those pages are not shown to him.

3) User tracking:
To track the number of unique visitors online at a particular time.

4) Marketing:
Some companies use cookies to display advertisements on user machines. Cookies control these advertisements. When and which advertisement should be shown? What are the interests of the user? Which keywords does he search for on the site? All these things can be maintained using cookies.

5) User sessions:
Cookies can track user sessions for a particular domain using a user ID and password.

Drawbacks of cookies:

1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to warn before writing any cookie, or has disabled cookies completely, then a site that depends on cookies will be completely disabled and cannot perform any operation, resulting in a loss of site traffic.

2) Too many cookies:
If you are writing too many cookies on every page navigation, and the user has turned on the option to warn before writing cookies, this could turn the user away from your site.

3) Security issues:
Sometimes a user's personal information is stored in cookies, and if someone hacks the cookie, the hacker can get access to that personal information. Even corrupted cookies can be read by different domains, leading to security issues.

4) Sensitive information:
Some sites may write and store your sensitive information in cookies; this should not be allowed, due to privacy concerns.

This should be enough to understand what cookies are. If you want more cookie info, see the Cookie Central page.

Some Major Test cases for web application cookie testing:

The first obvious test case is to check whether your application writes cookies properly on disk. You can also use a cookie-testing application if you don't have a web application to test but want to understand the cookie concept for testing.

Test cases: 

1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.

2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is kept in an encrypted format.

3) Make sure there is no overuse of cookies on the site under test. Overuse of cookies will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.

4) Disable the cookies from your browser settings: If your site uses cookies, the site's major functionality will not work once cookies are disabled. Then try to access the web site under test and navigate through the site. See whether appropriate messages are displayed to the user, like "For smooth functioning of this site, make sure that cookies are enabled in your browser." There should not be any page crash due to disabling the cookies. (Make sure that you close all browsers and delete all previously written cookies before performing this test.)

5) Accept/Reject some cookies: The best way to check web site functionality is not to accept all cookies. If your web application writes 10 cookies, then randomly accept some of them, say accept 5 and reject 5. To execute this test case, set your browser options to prompt whenever a cookie is about to be written to disk; on this prompt you can either accept or reject the cookie. Then try to access the major functionality of the web site and see whether pages crash or data gets corrupted.

6) Delete cookies: Allow the site to write the cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.

7) Corrupt the cookies: Corrupting a cookie is easy. You know where the cookies are stored: manually edit a cookie in Notepad and change its parameters to some vague values, e.g. alter the cookie content, the name of the cookie, or the expiry date, and observe the site's functionality. In some cases a corrupted cookie allows the data inside it to be read by another domain; this should not happen with your web site's cookies. Note that cookies written by one domain, say rediff.com, can't be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data. A scripted version of this tampering check is sketched below.
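
Here is a scripted version of the tampering check, a sketch assuming Python's requests library; the URL, the replacement cookie value, and the "account details" marker text are hypothetical:

    import requests

    URL = "http://www.example.com/account"          # hypothetical page
    session = requests.Session()
    session.get(URL)                                # site writes its cookies

    # Overwrite every captured cookie with a vague/garbage value:
    for name in list(session.cookies.keys()):
        session.cookies.set(name, "corrupted-value")

    resp = session.get(URL)
    # The site should reject or reset the session, never crash or leak data.
    assert resp.status_code in (200, 401, 403), f"unexpected {resp.status_code}"
    assert "account details" not in resp.text.lower(), "corrupt cookie accepted!"
    print("corrupted-cookie test finished")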

8) Checking the deletion of cookies from your web application page: Sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case when you are testing an "action tracking" web portal: an action tracking or purchase tracking pixel is placed on the action web page, and when any action or purchase occurs, the cookie written to disk gets deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases get logged from the same user.

9) Cookie testing on multiple browsers: This is an important case. Check that your web application pages write cookies properly on different browsers, as intended, and that the site works properly using these cookies. You can test your web application on the major browsers such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.

10) If your web application uses cookies to maintain the logged-in state of a user, log in to your web application using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, say, if the previous user ID is 100, make it 101 and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.

These are some major test cases to consider while testing website cookies. You can derive multiple test cases from them by trying various combinations. If you have a different application scenario, you can mention your test cases in the comments below.

Best software testing articles of 2008

The year 2008 was very productive for Software Testing Help in terms of new subscribers and site traffic. We covered many interesting and (I hope) helpful articles this year.

Here is a recap of some popular posts from 2008. I know it's very difficult to select just a few posts to show here. Still, here are some of the most popular posts, in random order, for you to enjoy. Don't forget to bookmark this page :-)


How to get all your bugs resolved without any 'Invalid bug' label?
I hate the "Invalid bug" label from developers for bugs I have reported, don't you? I think every tester should try to get all of his/her bugs resolved. This requires bug-reporting skill. Check out this article to learn what troubleshooting you need to do before reporting any bug.

Software testing questions and answers
This was a very successful article answering readers' queries on software testing. Read the answers and ping me if you have any questions.

Learning basics of QTP automation tool and preparation of QTP interview questions
This post is a continuation of the QTP interview questions series. These questions will help in preparing for interviews as well as in learning QTP basics.

Developers are not good testers. What you say?
Developers test their own code. Then why testers needed? What are the drawbacks of developer testing his own code? Why can't it be a success? If developer testing is always not sufficient testing then what things developers should test and what the test team should? To know answers to these questions read on.

Top 20 practical software testing tips you should read before testing any application.
This is a collection of the top 20 practical testing tips for testing any product or web-based application, learned over time. I wish all testers would read these software testing good practices and try to implement them in their day-to-day software testing activities.

Global Software Testing business to reach $13 Billion - Good news for Indian software Testers
The questions most frequently asked of me to date are: What is the future of the software testing business? Should I consider software testing as my career option? Now you don't need to ask me these questions any more. See the good news in this post.

Tips to design test data before executing your test cases
I have mentioned the importance of proper test data in many of my previous articles. A tester should check and update the test data before executing any test case. In this article I provide tips on how to prepare the test environment so that no important test case is missed.

Check your eligibility for CSTE certification. Take this sample CSTE examination
The CSTE certification is the basic certification for checking a tester's skill and understanding of software testing theory and practice. If you are applying for CSTE certification, check whether you can answer at least 75% of the sample questions mentioned in this post.

What you need to know about BVT (Build Verification Testing)
A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. Read on to learn how to perform BVT effectively.

Manual and Automation testing Challenges
Software testing is full of challenges, in manual as well as automation testing. A tester who manages to address these challenges effectively can become a successful tester. In this article I have included most of the testing challenges you will need to overcome.

Smoke testing and sanity testing - Quick and simple differences
Despite hundreds of web articles on smoke and sanity testing, many people are still confused by these terms and keep asking me about them. Here is a simple, understandable difference that can clear up the confusion between smoke testing and sanity testing.

An approach for Security Testing of Web Applications
How do you make sure your web application is secure before release? Web site security testing is as important a part of the software testing life cycle as functionality and performance testing. This article will guide you through different types of attacks on web applications and explain how to test a web application for security.

Some Interesting Articles on Software Testing Career:

How to prepare for software testing interview
This article will help you prepare for a software testing interview, whether you are a fresher or a working testing professional who wants to switch from a current job. Know the key areas you need to prepare and how to keep yourself updated on testing methodologies.

Career options for Software Test Professionals
See the career options available to software testing professionals. A great article showing all the possible career paths for a software tester.

Software Testing Advice for Novice Testers
Novice testers have many questions about software testing and the actual work they are going to perform. As a novice tester, you should be aware of certain facts of the software testing profession. The tips mentioned here will certainly help advance your software-testing career.

Money making, software testing career and secrets of a richest tester
These days a lot of people coming out of engineering and science colleges are interested in software testing as a career. Also, today there isn't a huge difference between what testers and developers get paid. How can testers make more money than they have been making?

How to keep motivation alive in software testers?
The title says it all. Learn the different ways to keep motivation alive in software testers.

How to build a successful QA team?
There are plenty of things to consider while building a successful software testing team. After reading this article, look at your team and ask yourself: "Am I working in a great test team?" and "Will I make every effort to build a great test team?"

Apart from software testing articles we also covered some topics on soft skills for testers:

How to ask for promotion and salary raise in this appraisal?
The yearly performance appraisal review is the key process for measuring an employee's performance and rewarding him/her with a promotion or salary raise based on that performance. If you think you are eligible for this reward, then read this article to learn on what basis your performance is measured and how to put in your best effort to get a salary raise and promotion.

How to keep good testers in testing positions?
Here I have answered one interesting reader's question: "How do you keep good testers in testing positions?" Nowadays, due to high compensation packages elsewhere, it's really hard to keep good testers in testing. Many of the really skilled testers are always looking for a switch. Here are some ideas on how to keep good testers in testing positions.

Top Three Tips to Survive in this Recession - Economic Downtime
Everyone is talking about the recession. Many of your close friends might have experienced it. Every day we hear news about pink slips, reductions in IT recruitment, bleak prospects, etc. How can software testers survive this recession? Here are three simple and effective tips for surviving the recession.

How to crack the GD (Group Discussion). 10 simple ways with ppt on GD
A software tester needs to communicate with project members such as team members, developers, managers, and customers. To be an effective team player you should have command of communication and interpersonal skills. Read the top 10 simple ways to crack the GD.

Soft Skill for testers: How to improve communication skill
Good communication skills are a must for software testers. You might have seen this line in every job posting, especially for openings in the QA and testing field. As a tester you need to communicate with different project team members, including clients, so communication skills play an important role. Read this post if you want to improve your communication skills.

Basics of software testing [download]

I am in the process of compiling a list of good books on software testing, and I will share that list with you soon. But lately I have been getting too many requests to share a book for preparing for software testing interviews. So here is a quick post to share an online testing book I found: "A Software Testing Primer" by Nick Jenkins.

Basically, this book is an introduction to software testing, so those who are new to the software testing field can start their preparation by reading it. You will get a basic idea of both manual and automation testing.

Here is a summary of what this book covers:

  • Why is software testing needed?
  • Different software development models
  • Testing in the software development life cycle
  • How to develop a testing mindset
  • Regression vs. retesting
  • White-box vs. black-box testing
  • Verification and validation
  • Alpha and beta testing
  • Unit, integration and system testing
  • Acceptance testing
  • Automation testing basics
  • Testing the design
  • Usability testing
  • Performance testing
  • Test planning
  • Test estimation
  • Test cases and elements of test cases
  • Test tracking, test planning and test plan review
  • How to manage defects and defect reports
  • Test metrics for testers
  • Product release control

All in all, this book is a nice introduction to software testing. The author explains some key concepts that often confuse testers, such as the difference between regression testing and retesting, and alpha versus beta testing.

Download "Testing Primer" book:
To download this book, click here.

Learning Basics of Rational Robot - IBM Test Automation Tool

Learning the Basics of Rational Robot (7.0)

1. Features of Rational Robot
Rational Robot is an automated functional and regression testing tool for automating Windows, Java, IE and ERP applications on the Windows platform. Rational Robot provides test cases for common objects such as menus, lists and bitmaps, and specialized test cases for objects specific to the development environment. It integrates with tools like Rational TestManager, Rational ClearQuest and RequisitePro in the Rational Unified Process for defect tracking, change management and requirement traceability. It also supports UI technologies such as Java, the Web, all VS.NET controls, Oracle Forms, Borland Delphi and Sybase PowerBuilder applications.

2. Rational Administrator
Rational Administrator is a tool for managing associations between Rational artifacts such as test datastores, RequisitePro projects and Rose models.

  • Rational Projects are created using Rational Administrator
  • Users and Groups can be maintained
  • Project assets can be upgraded

3. Recording Options
Using object-oriented technology, Robot identifies an object by its internal name property, not by its screen coordinates. There are two different recording options:

  • GUI - Functional testing
  • VU - Performance testing

4. SQABasic language
SQABasic is similar to Microsoft Visual Basic. All scripts are stored in scriptname.rec format. When you play back a script, Robot automatically compiles and runs it, repeating your actions and executing the verification points.

5. Shell Scripts
A shell script is a master script that calls other automated scripts and plays them back in sequence. "CallScript test1" is the command to call a script named test1. Combined into a single shell script, scripts can run in unattended mode and provide comprehensive test coverage. A shell script also centralizes test results into one test log. A minimal sketch of such a master script is shown below.
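For illustration, a shell script built around the CallScript command mentioned above might look roughly like this (the called script names are hypothetical):

    ' Hypothetical SQABasic shell script: plays back three recorded scripts in sequence
    Sub Main
        CallScript "Login"        ' assumed script name
        CallScript "PlaceOrder"   ' assumed script name
        CallScript "Logout"       ' assumed script name
    End Sub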

6. Low-Level Recording
Turn "Low Level Recording On"  in Robot during recording, mouse and keyboard actions are automatically stored in an external file.

7. Verification Points
Verification points verify that a certain action has taken place, or verify the state of an object. There are 11 verification points in Robot:

  • Alpha-numeric: Verifies alphanumeric data. Used for edit boxes, pushbuttons, labels, text fields, etc.
  • Object Properties: Tests object attributes such as color, font and position.
  • Menu: Verifies the menu values of a window and, optionally, their state (enabled or disabled).
  • Clipboard: Verifies the contents of the Windows clipboard.
  • Window Existence: Tests whether a particular window does or does not exist on the screen.
  • Region Image: Graphically compares an area of the screen that you specify.
  • Window Image: Graphically compares an entire window.
  • Object Data: Tests the data contents of objects (e.g. a dropdown).
  • File Comparison: Compares two files (size and contents).
  • File Existence: Checks for the existence of a specified file.
  • Module Existence: Verifies whether a specified module is loaded into a specified context, or loaded anywhere in memory.

When you are creating verification points, there are two options: Wait State and Expected Results.
Wait states are useful when the AUT (application under test) requires an unknown amount of time to complete a task. Using a wait state keeps the verification point from failing if the task is not completed immediately or if the data is not accessible immediately.
Expected Results: click Pass or Fail in the Verification Point Name dialog box.

8. Variable Window
During debugging, if you want to examine variable and constant values, you can use the Variables window (View -> Variables).

9. Object Mapping
If the AUT contains a custom object, or any object that Robot does not recognize, you can create a custom object mapping before you start recording, by adding the object's class to the list of classes that Robot recognizes and then associating the class with a standard object type. Robot saves this custom class/object-type mapping in the project and uses it to identify the custom object during playback.

10. Debug Tools

Animate (F11) - Animation mode lets you see each line of the script as it executes.
Step Over (F10) - Use to execute a single command line within a script.
Step Into (F8) - Use to begin single-step execution.
Step Out (F7) - Use to step out of the called script and return to the calling script.
Go Until Cursor (F6) - Use to play back the active GUI script, stopping at the text cursor location.

11. Library Files and Header Files
Header files have the .sbh extension and contain the procedure declarations and global variables referred to in your script files. There are two types of library files: those with the .sbl extension cannot have verification points, while those with the .rec extension are stored in the project and can have verification points. Both header and library files live in \SQABAS32 in the project directory.
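As a sketch, a header file might declare a routine that is implemented in a library file like this (the file and procedure names are hypothetical, and the Declare...BasicLib form is SQABasic's declaration syntax as I recall it, so verify the exact usage against the Robot documentation):

    ' Hypothetical mylib.sbh header: declares a routine that lives in mylib.sbl
    Declare Sub LogResult BasicLib "mylib" (msg As String)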

12. Image Masks for Dynamic Objects
Image masks are used to hide an area of the screen. When you play back a script that contains an image verification point and a mask, Robot ignores the masked area when comparing actual results to the recorded baseline.

13. Datapools
A datapool is a test dataset that supplies data to variables in a test script during playback. Using datapools allows you to run multiple iterations of a script with different data each time. Datapools can be created and managed using TestManager for data-driven tests.
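A rough sketch of reading a datapool from an SQABasic script follows (the datapool name and column index are made up, and the SQADatapool* calls are quoted from memory, so treat the signatures as assumptions and check them in the Robot help):

    ' Hypothetical use of a datapool named "Orders"
    Dim dp As Long
    Dim orderId As String
    dp = SQADatapoolOpen("Orders")           ' open the datapool
    Call SQADatapoolFetch(dp)                ' advance to the next row
    Call SQADatapoolValue(dp, 1, orderId)    ' read column 1 into orderId
    Call SQADatapoolClose(dp)                ' release the datapool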

14. Important website for the Rational Robot trial version download and Rational Robot tutorials:
http://www.ibm.com/developerworks/rational/downloads/

Types of Risks in Software Projects

Are you developing a test plan or test strategy for your project? Have you addressed all risks properly in it?

As testing is the last part of the project, it's always under pressure and time constraints. To save time and money, you should be able to prioritize your testing work. How will you prioritize it? You need to judge which testing work is more important and which is less important, but how do you decide? This is where risk-based testing comes in.

What is Risk?
"A risk is a future, uncertain event with a probability of occurrence and a potential for loss."

Risk identification and management are main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.

In this article I will cover the types of risks. In later articles I will try to focus on risk identification, risk management and mitigation.

Risks are identified, classified and managed before the actual execution of the program. They are classified into different categories.

Categories of risks:

Schedule Risk:
The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project, and ultimately the company's economy, and may lead to project failure.
Schedules often slip for the following reasons:

  • Wrong time estimation
  • Resources are not tracked properly (staff, systems, skills of individuals, etc.)
  • Failure to identify complex functionalities and the time required to develop them
  • Unexpected expansion of project scope

Budget Risk:

  • Wrong budget estimation
  • Cost overruns
  • Project scope expansion

Operational Risks:
Risks of loss due to improper process implementation, a failed system or external events.
Causes of operational risks:

  • Failure to address priority conflicts
  • Failure to resolve responsibilities
  • Insufficient resources
  • No proper subject training
  • No resource planning
  • No communication within the team

Technical Risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:

  • Continuously changing requirements
  • No advanced technology available, or the existing technology is in its initial stages
  • The product is complex to implement
  • Difficult integration of project modules

Programmatic Risks:
These are external risks beyond the operational limits: uncertain risks that are outside the control of the program.
These external events can be:

  • Running out of funds
  • Market developments
  • Changes in customer product strategy and priorities
  • Government rule changes

Learning the basics of the QTP automation tool and preparing for QTP interview questions

QuickTest Professional: Interview Questions and Answers

1. What are the features and benefits of QuickTest Pro (QTP)?

1. Keyword-driven testing
2. Suitable for both client/server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features

2. How do you handle exceptions using the Recovery Scenario Manager in QTP?

You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered events
2. Recovery steps
3. Post-recovery test run

3. What is the use of Text output value in QTP?

Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus, by creating output values, we can capture the values that the application takes during each run and output them to the data table.

4. How do you use the Object Spy in QTP 8.0?

There are two ways to spy on objects in QTP:
1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: click the "Object Spy..." button, then in the Object Spy dialog click the button showing a hand symbol. The pointer changes into a hand symbol, and you point at the object whose state you want to spy on. If the object is not visible, or its window is minimized, hold down the Ctrl key, activate the required window, and then release the Ctrl key.

5. What is the file extension of the code file and object repository file in QTP?
The file extensions are:
Per-test object repository: filename.mtr
Shared object repository: filename.tsr
Code file: script.mts

6. Explain the concept of object repository and how QTP recognizes objects?

Object Repository: displays a tree of all objects in the current component, the current action, or the entire test (depending on the object repository mode you selected).
We can view or modify the test object description of any test object in the repository, or add new objects to the repository.
QuickTest learns the default property values and determines which test object class the object fits. If that is not enough to create a unique description, it adds assistive properties one by one until the description is unique. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.

7. What are the properties you would use for identifying a browser and page when using descriptive programming?

"name" would be another property apart from "title" that we can use. OR
We can also use the property "micClass".
ex: Browser("micClass:=browser").page("micClass:=page")

8. What are the different scripting languages you could use when working with QTP?

You can write scripts using the following languages:
Visual Basic (VB), XML, JavaScript, Java and HTML

9. Tell some commonly used Excel VBA functions.

Common functions include: coloring a cell, auto-fitting a cell, setting navigation from a link in one cell to another, and saving.

10. Explain the keyword createobject with an example.

Creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
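For instance, a commonly used call creates an Excel Automation object (assuming Excel is installed on the machine):

    ' VBScript: create an Excel Automation object, show it, then close it
    Dim xlApp
    Set xlApp = CreateObject("Excel.Application")
    xlApp.Visible = True      ' make the Excel window visible
    xlApp.Quit                ' close Excel when done
    Set xlApp = Nothing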

11. Explain in brief about the QTP Automation Object Model.

Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-to-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements, to design your program.
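As a small illustration, a stand-alone VBScript file could drive QuickTest through the automation object model roughly like this (the test path is hypothetical; QuickTest.Application and the Launch, Open and Run methods are part of the documented AOM):

    ' VBScript sketch: launch QTP, open a test and run it via the AOM
    Dim qtApp
    Set qtApp = CreateObject("QuickTest.Application")
    qtApp.Launch                    ' start QuickTest
    qtApp.Visible = True
    qtApp.Open "C:\Tests\MyTest"    ' hypothetical test path
    qtApp.Test.Run                  ' run the open test
    qtApp.Quit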

12. How to handle dynamic objects in QTP?

QTP has a unique feature called Smart Object Identification/recognition. QTP generally identifies an object by matching its test object and run-time object properties, so it may fail to recognize dynamic objects whose properties change during run time. Hence it has an option to enable Smart Identification, wherein it can identify objects even if their properties change during run time.
Check out this:
If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description and uses the Smart Identification mechanism to try to identify the object.
While the Smart Identification mechanism is more complex, it is more flexible; thus, if configured logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present, even when the recorded description fails.

The Smart Identification mechanism uses two types of properties:
Base filter properties - the most fundamental properties of a particular test object class; those whose values cannot be changed without changing the essence of the original object. For example, if a Web link's tag was changed from <A> to any other value, you could no longer call it the same object.
Optional filter properties - other properties that can help identify objects of a particular class; they are unlikely to change on a regular basis, but can be ignored if they are no longer applicable.

13. What is a Run-Time Data Table? Where can I find and view this table?

In QTP there is a data table that is used at run time.
- In QTP, select View -> Data Table.
- This is basically an Excel file stored in the folder of the test created; its name is Default.xls by default.
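In a test script, the run-time data table is read and written through the DataTable object, roughly like this (the column names here are hypothetical):

    ' VBScript: read from and write to the run-time data table
    Dim userName
    userName = DataTable("UserName", dtGlobalSheet)    ' read hypothetical column from the global sheet
    DataTable("Result", dtGlobalSheet) = "Logged in"   ' write an output value back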

14. How does Parameterization and Data-Driving relate to each other in QTP?

To data-drive a test we have to parameterize: we make a constant value a parameter so that in each iteration (cycle) it takes a value supplied from the run-time data table (as in the sketch above). Only through parameterization can we drive a transaction (action) with different sets of data. Running the script with the same set of data several times is not recommended, and is of no use.

15. What is the difference between Call to Action and Copy Action.?

Call to Action: changes made in a called action are reflected in the original action (from where the script is called). Whereas with Copy Action, changes made in the copied script will not affect the original script (action).

16. Explain the concept of how QTP identifies object.

During recording, QTP looks at an object and stores it as a test object. For each test object, QTP learns a set of default properties called mandatory properties and looks at the rest of the objects in the application to check whether these properties are enough to uniquely identify the object. During a test run, QTP searches for the run-time object that matches the test object it learned while recording.

17. Differentiate the two Object Repository Types of QTP.

An object repository is used to store all the objects of the application being tested.
Types of object repository: per-action and shared.
A shared repository is one centralized repository for all tests, whereas with per-action repositories a separate repository is created for each action.

18. What are the differences between the object repository types, and what is the best practical application of each?

Per action: for each action, one object repository is created.
Shared: one object repository is used by the entire application.

19. Explain the difference between a Shared Repository and a Per-Action Repository.

Shared repository: the entire application uses one object repository, similar to the Global GUI Map file in WinRunner.
Per action: for each action, one object repository is created, like the per-test GUI map file in WinRunner.

20. Have you ever written a compiled module? If yes tell me about some of the functions that you wrote.

Sample answer (describe modules you actually worked on; if your answer is yes, expect follow-up questions and be ready to explain those modules): I used functions for capturing dynamic data during runtime, and functions for capturing the desktop, browser and pages.

21. Can you do more than just capture and playback?

Sample answer (say yes only if you have actually done this): I have dynamically captured objects during runtime, with no recording, no playback and no use of a repository at all.
- It was done through Windows scripting using the DOM (Document Object Model).

22. How do you do the scripting? Are there any built-in functions in QTP?

Yes, there is built-in functionality called the "Step Generator" (Insert -> Step -> Step Generator, or F7), which generates script steps as you enter the appropriate operations.

23. What is the difference between check point and output value?

An output value is a value captured during the test run and written to a specified location, e.g. a location in the Data Table (global sheet or local sheet). A checkpoint, by contrast, compares a captured value against an expected value and reports pass or fail.

24. How many types of Actions are there in QTP?

There are three kinds of actions:
Non-reusable action - An action that can be called only in the test with which it is stored, and can be called only once.
Reusable action - An action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
External action - A reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.

25. I want to open a Notepad window without recording a test and I do not want to use System utility Run command as well. How do I do this?

You can still open Notepad without recording or using the System Utility Run command, just by mentioning the path of Notepad (i.e., where notepad.exe is stored on the system) in the "Windows Applications" tab of the "Record and Run Settings" window.