Monday, June 4, 2012

Testing Interview Questions

Software Testing Interview Questions – basic

Here is a list of commonly asked basic-level Software Testing interview questions. You should prepare on the different types of testing and on the commonly used terms in Software Testing before attending the interview.
  1. What is the importance of Software Testing?
  2. What are the main tools you have used for Software Testing?
  3. What are the different types of Software Testing?
  4. What is the difference between Black Box and White Box testing?
  5. What is the difference between Manual Testing and Automated Testing?
  6. What is Unit Testing?
  7. What is Integration Testing?
  8. What is Acceptance Testing?
  9. What is Static testing?
  10. What is System testing?
  11. What is Load Testing?
  12. What is Smoke Testing?
  13. What is Soak Testing?
  14. What is Scalability Testing?
  15. What is Sanity Testing?
  16. What is Ramp Testing?
  17. What is Monkey Testing?
  18. What is Gray Box Testing?
  19. What is Functional Testing?
  20. What is Glass Box Testing?
  21. What is Dynamic Testing?
  22. What is Compatibility Testing?
  23. What is Concurrency Testing?
  24. What is Component Testing?
  25. What is Ad Hoc Testing?
  26. What is Agile Testing?
  27. What are the different phases in Software Testing?
  28. How do you define defects and bugs?
  29. What are the roles of a QA specialist?
  30. What is a test case and what is a test plan?
  31. Tell me about the Top Down and Bottom Up approaches in testing.
  32. Tell me about the Software Testing Life Cycle.

REGRESSION TESTING

What is REGRESSION TESTING?

Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.
Regression testing attempts to mitigate two risks:
  • A change that was intended to fix a bug failed to fix it.
  • A change had a side effect, reintroducing an old bug or introducing a new one.
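For example, here is a minimal, self-contained sketch of a bug-regression test in Python; the function and the defect scenario are invented purely for illustration. The test simply re-runs the exact case the fix was meant to address, so it fails again only if the bug comes back.

```python
# Minimal bug-regression sketch (Python). The function and the defect
# scenario are hypothetical; they only illustrate the idea of re-running
# a previously failing case after the fix.

def normalize_username(raw):
    """Fixed implementation: leading/trailing whitespace is stripped and the
    name is lower-cased. (The hypothetical old defect skipped the strip step,
    so "  Alice " and "alice" were treated as different users.)"""
    return raw.strip().lower()

def test_username_whitespace_is_stripped():
    # Re-run the scenario that originally failed; if a later change
    # reintroduces the defect, this assertion fails immediately.
    assert normalize_username("  Alice ") == "alice"

if __name__ == "__main__":
    test_username_whitespace_is_stripped()
    print("bug-regression check passed")
```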
Regression testing approaches differ in their focus. Common examples include:
Bug regression: We retest a specific bug that has been allegedly fixed.
Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)
General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)
Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than the modified old code.)
Configuration testing: The program is run with a new device, on a new version of the operating system, or in conjunction with a new application. This is like port testing, except that the underlying code hasn't been changed; only the external components that the software under test must interact with are new.
Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.
Smoke testing (also known as build verification testing): A relatively small suite of tests is used to qualify a new build. The tester is asking whether any components are so obviously or badly broken that the build is not worth testing, whether components are broken in ways that suggest a corrupt build, or whether the critical fixes that are the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports.
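To make that pass/fail decision concrete, here is a minimal build-verification sketch in Python, assuming a hypothetical web application with a health endpoint and a login page (the URLs and checks are invented for illustration); one failed check rejects the whole build rather than producing individual bug reports.

```python
# Minimal smoke-test / build-verification sketch (Python).
# The endpoints below are hypothetical; the point is a fast yes/no answer
# on whether the build is healthy enough to be worth deeper testing.
import sys
import urllib.request

SMOKE_CHECKS = [
    ("application responds", "http://localhost:8080/health"),
    ("login page loads",     "http://localhost:8080/login"),
]

def run_smoke_checks():
    """Return None if every check passes, otherwise a description of the first failure."""
    for name, url in SMOKE_CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status != 200:
                    return f"{name}: unexpected status {resp.status}"
        except OSError as exc:  # connection refused, timeout, DNS failure, ...
            return f"{name}: {exc}"
    return None

if __name__ == "__main__":
    failure = run_smoke_checks()
    if failure:
        # A failed smoke test rejects the build; testing of this build stops here.
        print(f"SMOKE TEST FAILED - build rejected: {failure}")
        sys.exit(1)
    print("Smoke test passed - build accepted for further testing")
```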

LoadRunner Interview Questions

Here is a list of commonly asked LoadRunner interview questions. Don't forget to prepare answers to these questions before attending your interview.
  1. What is the difference between "Load Testing" and "Performance Testing"?
  2. How long have you been using LoadRunner, and which versions have you worked with?
  3. Describe the step-by-step process of Load Testing.
  4. What is the use of the LoadRunner software, and what are its main components?
  5. What is the purpose of recording a script, and how do you record one with LoadRunner?
  6. What is the role of the Controller component in LoadRunner?
  7. What is the role of the Virtual User Generator (VuGen) component?
  8. What is a rendezvous point?
  9. What is a scenario?
  10. What are parameters?
  11. What is the difference between automatic correlation and manual correlation?
  12. Have you set automatic correlation options? How?
  13. What is the role of the web_reg_save_param function?
  14. Tell me about the main run-time setting changes you have made in LoadRunner.
  15. How do you perform functional testing under load?
  16. How can you increase the number of Vusers / the load on the server?
  17. What is the purpose of running a Vuser as a thread?
  18. What is the role of the lr_abort function?
  19. What is the use of the Throughput graph?
  20. How can you detect performance bottlenecks?
  21. Tell me about the Overlay graph and the Correlate graph.
  22. What is the difference between the standard log and the extended log?
  23. How many types of goals are there in a goal-oriented scenario, and what are they?

WinRunner Interview Questions

WinRunner is one of the popular tools used in software testing. Here is a list of commonly asked interview questions about WinRunner.
  1. What is the use of WinRunner in your project?
  2. What are the main stages of the testing process using WinRunner?
  3. What is a GUI map, and what are the contents of a GUI map?
  4. How does WinRunner recognize objects in an application?
  5. What is a test script, and what are the contents of a test script?
  6. How can we evaluate test results using WinRunner?
  7. How will you perform script debugging using WinRunner?
  8. How are you going to report the defects found in a test run?
  9. What is the purpose of using the TestDirector software, and how is it used with WinRunner?
  10. What is the use of recording, and what types of recording are available in WinRunner?
  11. What is the use of WinRunner Add-Ins?
  12. Is there any difference between a GUI map and a GUI map file?
  13. Which tool is used for viewing the contents of GUI maps?