NEOCODE

Software Testing MCQs - Part 2

1. Levels of Testing

1.1 Which testing level is performed by developers?

Correct Answer: C) Unit Testing

Explanation:
Unit Testing is the lowest level of testing where individual components or units of code are tested in isolation. Developers typically perform unit testing as they write the code, often using frameworks like JUnit (Java), NUnit (.NET), or pytest (Python). Other testing levels are usually performed by dedicated testers or QA teams.
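As a minimal sketch of what this looks like in practice with pytest (the `add` function and test names are invented for illustration, not taken from the source):

```python
# A minimal pytest-style unit test. add() is a hypothetical unit under
# test, chosen only to keep the example self-contained.

def add(a, b):
    """The unit under test: a small function exercised in isolation."""
    return a + b

def test_add_positive_numbers():
    # One focused assertion per behavior keeps failures easy to diagnose.
    assert add(2, 3) == 5

def test_add_mixed_signs():
    assert add(-1, 4) == 3
```

Running `pytest` in the same directory discovers functions prefixed with `test_` and executes them automatically, which is why developers can run these checks continuously while writing code.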

1.2 Big Bang Integration Testing refers to:

Correct Answer: A) Integrating all modules at once and testing

Explanation:
Big Bang Integration is an approach where all or most of the modules are combined at once and tested as a whole. While simple to implement, it makes defect isolation difficult since failures could be anywhere in the system. The alternative approaches are incremental integration methods like Top-Down or Bottom-Up integration.

1.3 System Testing is performed by:

Correct Answer: B) Testers

Explanation:
System Testing is typically performed by dedicated testers or QA teams, not developers or end users. It evaluates the complete and integrated software system to verify that it meets specified requirements. This level of testing comes after unit and integration testing but before acceptance testing.

1.4 Which testing level validates business requirements?

Correct Answer: D) Acceptance Testing

Explanation:
Acceptance Testing is the final testing level that validates whether the software meets business requirements and is ready for deployment. It's typically performed by end users or business stakeholders and focuses on business workflows rather than technical implementation details.

1.5 Smoke Testing is also known as:

Correct Answer: B) Build Verification Testing

Explanation:
Smoke Testing, also called Build Verification Testing or Confidence Testing, is a preliminary test to check whether the most crucial functions of a software build work as expected. It's a shallow but wide test that helps determine if the build is stable enough for more thorough testing.
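A shallow-but-wide smoke suite can be sketched as a short list of crucial checks that gate deeper testing. The check names below are hypothetical placeholders; in a real suite each would launch the application, hit a health endpoint, and so on:

```python
# Hypothetical smoke suite: a shallow pass over a build's most crucial
# functions. Each check is broad and fast, not exhaustive.

def check_application_starts():
    # Placeholder: a real check might launch the app or ping /health.
    return True

def check_login_page_loads():
    return True

def check_database_reachable():
    return True

def run_smoke_suite():
    """Return True only if every crucial check passes, i.e. the build
    is stable enough to hand off for more thorough testing."""
    checks = [
        check_application_starts,
        check_login_page_loads,
        check_database_reachable,
    ]
    return all(check() for check in checks)
```

If any check fails, the build is rejected outright rather than passed on to the full regression cycle.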

2. Test Cases

2.1 A good test case should be:

Correct Answer: B) Traceable to requirements

Explanation:
A good test case should be clear, concise, and traceable back to specific requirements or user stories. This ensures test coverage of all requirements and helps in impact analysis when requirements change. Test cases should be repeatable (not executed only once) and should avoid unnecessary complexity.

2.2 Which of the following is NOT part of a test case?

Correct Answer: C) Developer's Name

Explanation:
A standard test case typically includes:
- Test Case ID
- Description/Objective
- Preconditions
- Test Steps
- Test Data
- Expected Result
- Actual Result (after execution)

The developer's name is not a standard component of a test case, though the tester's name might be included in some formats.
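The standard components can be sketched as a simple record type. The field names mirror the list above; the structure itself is an illustration, not a standardized schema:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the standard test case components listed above.
    test_case_id: str
    description: str
    preconditions: list
    test_steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""  # filled in only after execution

# Hypothetical example record for a login test case.
tc = TestCase(
    test_case_id="TC-001",
    description="Verify login with valid credentials",
    preconditions=["User account exists"],
    test_steps=["Open login page", "Enter credentials", "Click Login"],
    test_data={"username": "alice", "password": "secret"},
    expected_result="User is redirected to the dashboard",
)
```

Note that nothing in the record identifies a developer; ownership of the code under test is tracked elsewhere (e.g. in version control), not in the test case itself.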

2.3 Test Case Prioritization helps in:

Correct Answer: B) Executing the most critical test cases first

Explanation:
Test Case Prioritization is a technique to order test cases so that the most important or critical tests are executed early in the testing cycle. This helps in early detection of critical defects when time or resources are limited. It doesn't reduce the total number of test cases but optimizes their execution order.
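As a sketch of the idea (the test cases and the 1-is-most-critical priority scale are assumptions for illustration), prioritization is just a reordering step before execution:

```python
# Illustrative prioritization: reorder test cases so the most critical
# ones (priority 1) run first. No test case is dropped.

test_cases = [
    {"id": "TC-101", "title": "Password reset email", "priority": 3},
    {"id": "TC-102", "title": "User login",           "priority": 1},
    {"id": "TC-103", "title": "Checkout payment",     "priority": 1},
    {"id": "TC-104", "title": "Profile avatar upload", "priority": 2},
]

# Sorting changes only the execution order, not the suite's contents.
execution_order = sorted(test_cases, key=lambda tc: tc["priority"])

for tc in execution_order:
    print(tc["id"], tc["title"])
```

If the cycle is cut short, the tests skipped are the low-priority ones at the end of the order, which is exactly the trade-off prioritization is designed to make.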

2.4 Which technique is used to derive test cases from requirements?

Correct Answer: C) Use Case Testing

Explanation:
Use Case Testing is a systematic technique to derive test cases from requirements by analyzing use cases (descriptions of system behavior from an actor's perspective). Each use case scenario (main flow, alternate flows, exception flows) becomes the basis for test cases. The other options are less structured approaches.
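As a sketch, consider a hypothetical "login" use case (invented for this example): the main flow, an alternate flow, and an exception flow each yield at least one test case:

```python
# Hypothetical "login" use case. Each flow from the use case description
# (main, alternate, exception) becomes the basis for a test case.

def login(username, password):
    """Toy system under test, invented for illustration."""
    if not username or not password:
        raise ValueError("missing credentials")  # exception flow
    if password == "correct":
        return "dashboard"        # main flow: successful login
    return "retry_with_hint"      # alternate flow: wrong password

def test_main_flow_valid_login():
    assert login("alice", "correct") == "dashboard"

def test_alternate_flow_wrong_password():
    assert login("alice", "wrong") == "retry_with_hint"

def test_exception_flow_missing_credentials():
    try:
        login("", "")
    except ValueError:
        pass  # the documented exception flow fired as expected
    else:
        raise AssertionError("expected ValueError for missing credentials")
```

Because every flow in the use case maps to a test, coverage of the requirement can be audited flow by flow.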

2.5 A test case that checks for invalid inputs is an example of:

Correct Answer: B) Negative Testing

Explanation:
Negative Testing involves validating how the system handles invalid, unexpected, or abnormal inputs or conditions. It ensures the system fails gracefully and maintains security/stability when faced with improper usage. Positive Testing, in contrast, verifies expected behavior with valid inputs.
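A minimal negative-testing sketch, assuming a hypothetical `parse_age` function as the system under test, checks that invalid input is rejected cleanly rather than silently accepted:

```python
# Negative test sketch: feed invalid input and assert the system fails
# gracefully. parse_age() is a hypothetical function for illustration.

def parse_age(value):
    """Convert user input to an age, rejecting invalid values."""
    age = int(value)             # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

def test_rejects_non_numeric_input():
    try:
        parse_age("abc")
    except ValueError:
        pass  # graceful, expected rejection
    else:
        raise AssertionError("invalid input was accepted")

def test_rejects_out_of_range_age():
    try:
        parse_age("200")
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range input was accepted")
```

The corresponding positive tests would simply assert that `parse_age("30")` returns `30`; a thorough suite needs both.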