D3. Test Plan

1. Introduction

1.1. Purpose

To ensure the accurate identification of company traits (sector, customers, strategies, etc.) from the provided company links.

1.2. Scope

The testing will cover the end-to-end functionality of the company traits identification software, including the ML model, sublink analysis, and the overall output.

1.3. Testing Objectives

  • Validate the accuracy of the ML model in identifying relevant keywords
  • Ensure the sublink analysis correctly locates pages containing additional company information
  • Verify the aggregation and presentation of the final company traits in a formatted database-like system

1.4. Assumptions and Constraints

  • The ML model has been pre-trained on a representative dataset of company websites
  • Sufficient test data (company links) is available to cover a variety of sectors and company types

2. Test Strategy

2.1. Testing Approach

  • Unit testing of the ML model, sublink analysis, and data aggregation components (a unit-test sketch follows this list)
  • Integration testing to validate the end-to-end workflow
  • System testing to evaluate the overall functionality and accuracy of the company traits identification software
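
As an illustration of the unit-testing approach, the sketch below exercises two components in isolation. The module name traits_identifier and the functions extract_keywords and find_sublinks are hypothetical placeholders, to be replaced with the actual implementation's names.

    # Minimal pytest sketch for component-level unit tests.
    # traits_identifier, extract_keywords, and find_sublinks are hypothetical
    # names; substitute the real module and functions once implemented.
    from traits_identifier import extract_keywords, find_sublinks


    def test_extract_keywords_surfaces_sector_terms():
        html = "<html><body>We provide cloud security software for banks.</body></html>"
        keywords = extract_keywords(html)
        # The ML model should surface sector-relevant terms from the page text.
        assert "security" in keywords


    def test_find_sublinks_resolves_relative_urls():
        html = '<a href="/about">About</a> <a href="https://example.com/careers">Careers</a>'
        sublinks = find_sublinks(html, base_url="https://example.com")
        # Relative links should be resolved so the crawler can fetch them later.
        assert "https://example.com/about" in sublinks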

2.2. Test Methodology

  • Black-box testing to validate the software’s external behavior
  • White-box testing to ensure the internal logic and implementation are correct 

2.3. Test Techniques

  • Compare human results to software-generated results (a comparison sketch follows this list)
  • Edge case analysis for unexpected inputs
  • Random testing to cover unexpected scenarios 
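
The human-versus-software comparison can be scored as a per-trait agreement rate. The sketch below assumes both result sets are exported as CSV files keyed by company_url with sector, customers, and strategy columns; the file names and column layout are assumptions, not a fixed format.

    # Sketch: per-trait agreement between human-labelled and software-generated traits.
    # File names and columns (company_url, sector, customers, strategy) are assumed.
    import pandas as pd

    human = pd.read_csv("human_labels.csv")      # traits recorded by human reviewers
    software = pd.read_csv("model_output.csv")   # traits produced by the software

    merged = human.merge(software, on="company_url", suffixes=("_human", "_model"))

    for trait in ("sector", "customers", "strategy"):
        agreement = (merged[f"{trait}_human"] == merged[f"{trait}_model"]).mean()
        print(f"{trait}: {agreement:.1%} agreement with human results")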

2.4. Test Deliverables

  • Test cases and test scripts
  • Test execution reports

3. Test Environment

3.1. Hardware Requirements

Sufficient computing power to run the ML model and sublink analysis 

3.2. Software Requirements

Operating system: Windows 10/11, macOS, or any modern Linux distribution

Programming languages and frameworks: Python, TensorFlow/PyTorch 

Third-Party Tools and Libraries:

  • ML framework (e.g., TensorFlow, PyTorch)
  • Web scraping library (ZenRows)
  • Data processing and analysis tools (e.g., Pandas, NumPy)

3.3. Test Data Requirements

  • A diverse set of company websites covering various sectors and company types
  • Ground truth data for the expected company traits to validate the model’s accuracy (an example record format follows this list)
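
For concreteness, a single ground-truth record might look like the sketch below. The field names mirror the traits listed in section 1.1, but the exact schema is an assumption until the output format is finalized.

    # Example ground-truth record; field names are placeholders, not a fixed schema.
    ground_truth_example = {
        "company_url": "https://example.com",
        "sector": "Financial technology",
        "customers": ["Retail banks", "Credit unions"],
        "strategies": ["B2B SaaS", "Partnership-led growth"],
    }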

4. Test Cases

4.1. Test Case Design

  • Identify test cases based on the software’s functional requirements
  • Define test cases for both valid and invalid inputs (a parameterized example follows this list)
  • Prioritize test cases based on their importance and risk
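
A parameterized test is one convenient way to cover valid and invalid inputs side by side. In the sketch below, identify_traits is a hypothetical entry point for the end-to-end pipeline, and rejecting bad links by raising ValueError is an assumption about the software's behavior.

    # Sketch: valid and invalid inputs covered with pytest parameterization.
    # identify_traits is a hypothetical entry point; ValueError on bad input is assumed.
    import pytest

    from traits_identifier import identify_traits


    @pytest.mark.parametrize("url", [
        "https://example.com",        # typical company homepage
        "https://example.com/about",  # deep link rather than the root page
    ])
    def test_valid_links_produce_traits(url):
        traits = identify_traits(url)
        assert traits["sector"]  # every valid link should yield at least a sector


    @pytest.mark.parametrize("bad_input", ["", "not-a-url", "ftp://example.com"])
    def test_invalid_links_are_rejected(bad_input):
        with pytest.raises(ValueError):
            identify_traits(bad_input)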

4.2. Test Execution Approach

  • Automate test cases where possible to ensure repeatability and efficiency (an execution sketch follows this list)
  • Perform manual testing for complex scenarios and edge cases 
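
One way to automate execution while also producing the logs called for in section 4.4 is to drive pytest from a small script. The tests/ path, the reports/ directory, and the "manual" marker used to exclude manual-only cases are assumptions for illustration.

    # Sketch: run the automated suite and keep a timestamped, machine-readable log.
    # Paths, report naming, and the "manual" marker convention are assumptions.
    import datetime
    import pathlib
    import subprocess

    pathlib.Path("reports").mkdir(exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

    subprocess.run(
        [
            "pytest", "tests/",
            f"--junitxml=reports/test_run_{stamp}.xml",  # execution log for this run
            "-m", "not manual",                          # leave manual-only cases to testers
        ],
        check=False,  # a failing suite should still leave a report behind
    )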

4.3. Test Monitoring and Control

  • Track test progress and identify any roadblocks or issues
  • Maintain test logs and reports for each test execution

4.4. Test Logs and Reports

  • Test case execution logs
  • Defect reports and their resolution status
  • Final test summary report

5. Roles and Responsibilities

5.1. Test Team Organization

  • Test Manager (oversees the entire testing process): Akshay
  • Test Analyst (designs test cases and test scenarios): Victor
  • Test Engineer (implements test automation and executes tests): Sheel
  • Quality Assurance (validates the overall quality of the software): Kyle