D3 Test Plan

Test Plan Structure

Part I: What we would do if we had all the time that we really needed

Part II: What we are actually going to test

Unit Testing

  1. Ideal Testing
    1. With a more ideal time frame, we would use unit tests to verify our backend code and database validation, building a comprehensive suite that covers edge cases.
    2. If we had more time, we would also add an option to delete cards through our app’s Edit Form page and write tests for the DELETE functionality of our backend.
  2. Actual Testing
    1. Given the time frame that we are actually working with, we will be testing the following:
      1. Testing the RESTful API, including tests for successful GET and POST calls, covering both creating an entirely new activity card and updating an existing one (a sketch of these tests appears after this list).
        1. Testing the GET calls ensures that we successfully retrieve information from our database of activity cards on both the Edit Form and Home pages.
        2. Testing the POST calls ensures that our Edit Form can both create new activity cards and update existing ones.
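
To make this concrete, below is a minimal sketch of what these API tests could look like using Jest with supertest. The /activities routes, the app.js export, and the card fields are assumptions about our code layout rather than final names.

```js
// activities.test.js — a sketch of the RESTful API unit tests.
// The /activities routes, the app.js export, and the card fields
// are assumed names, not our final implementation.
const request = require('supertest');
const app = require('../app'); // our Express app, exported without .listen()

describe('GET /activities', () => {
  it('retrieves the activity cards used by the Home and Edit Form pages', async () => {
    const res = await request(app).get('/activities');
    expect(res.statusCode).toBe(200);
    expect(Array.isArray(res.body)).toBe(true);
  });
});

describe('POST /activities', () => {
  it('creates an entirely new activity card', async () => {
    const res = await request(app)
      .post('/activities')
      .send({ name: 'Stretch Break', type: 'movement', duration: 3 });
    expect(res.statusCode).toBe(201);
    expect(res.body.name).toBe('Stretch Break');
  });

  it('updates an existing activity card', async () => {
    // '123' stands in for a real card id from the database.
    const res = await request(app)
      .post('/activities/123')
      .send({ duration: 5 });
    expect(res.statusCode).toBe(200);
  });
});
```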

Integration and System Testing

  1. Ideal Testing
    1. Our integration testing will primarily focus on testing the deployed app as a whole, as if we’re future users of the app.
    2. This approach will exercise both our frontend and backend code, as we’ll interact directly with the frontend, which in turn will perform all of its backend API calls.
    3. Specifically, we have a number of key features we need to examine on the frontend side:
      1. Filtering activities by characteristics (activity type, duration)
      2. Searching for activities by name
      3. Expanding an activity for detailed view
    4. Additionally, we’ll test all user interfaces on various screen resolutions:
      1. iPhone (multiple versions)
      2. iPad (portrait and landscape)
      3. Android phones
      4. PC (for the edit form)
    5. We can also potentially write smaller integration tests for events throughout the frontend (e.g. what happens when you click submit on the micromoment activity creation form).
      1. This would require mocking out much of the functionality of our event handlers, and the tests would focus on verifying that the correct handlers are called with the correct parameters.
      2. However, this would add considerable complexity wherever we need to verify that a new element is rendered into or removed from the DOM as the result of some action.
  2. Actual Testing
    1. Most of the testing listed in Part I is both feasible and necessary.
    2. We plan to check that all main features work both on local hosting and after we’ve deployed our web app.
      1. This includes all the features listed previously on all the screen resolutions listed previously.
    3. The only portion we’ll skip is the event testing. This type of automation would require complex event-handler mocking and additional tooling for verifying the elements rendered into the DOM, which would expand the scope of our project beyond a realistic time commitment. (For reference, a sketch of what such a test would involve appears below.)
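
The sketch below shows roughly what one of these skipped event tests would look like with Jest and React Testing Library. The ActivityForm component and its onSubmit prop are hypothetical names used only for illustration.

```js
// ActivityForm.test.jsx — a rough sketch of the event testing we are skipping.
// ActivityForm and its onSubmit prop are hypothetical stand-ins for the
// micromoment activity creation form.
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import ActivityForm from './ActivityForm';

test('clicking submit calls the handler with the entered values', () => {
  // Mock out the submit handler instead of exercising the real backend call.
  const handleSubmit = jest.fn();
  render(<ActivityForm onSubmit={handleSubmit} />);

  fireEvent.change(screen.getByLabelText(/name/i), {
    target: { value: 'Stretch Break' },
  });
  fireEvent.click(screen.getByRole('button', { name: /submit/i }));

  // Verify the correct handler was called with the correct parameters.
  expect(handleSubmit).toHaveBeenCalledWith(
    expect.objectContaining({ name: 'Stretch Break' })
  );
});
```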

Description of Tools Used

  1. Ideal Testing
    1. Visual Studio Code
    2. Express and Node.js backend
    3. MongoDB Atlas Database
    4. React.js frontend
    5. Ideally, we would research and try out both Jest and Mocha to see which unit testing framework is a better fit for our app.
  2. Actual Testing
    1. Visual Studio Code
    2. Express and Node.js backend
    3. MongoDB Atlas Database
    4. React.js frontend
    5. Jest as our unit testing framework

Description of Types of End Users

  1. Teacher
    1. This is the primary user for our app. Teachers are expected to use our app to pick out Micromoments in class to use with students. They need a user interface they can search quickly during class, along with simple but clear activity descriptions that are easy to execute.
  2. Project Faculty Coordinator
    1. This is the other target user of our app. Project faculty coordinators will primarily be using the micromoment editing form to submit new activities and address feedback on existing ones to improve the user experience for teachers.

Usability Testing

  1. Ideal Testing
    1. Given more time for testing, we would ideally have several volunteers test the app while acting as different types of end users. These usability tests would be performed under the supervision of a team member, who would record information about the results of the usability tests and interview the volunteers about their general thoughts after using the web app.
    2. For each of the types of end users, we would have a different set of instructions for them to follow as they’re using the app to test its usability. Examples of the ideal usability testing scenarios are below:
      1. Teacher
        1. Open the homepage of the app on a mobile device
        2. Search for a specific activity by name “_____”
        3. Open the activity card to see more information
        4. Start the activity and view the hint
        5. Close the activity card once the activity timer has completed
        6. On the home page, filter the activities by duration of activity
        7. Filter the activities by type “_____”
      2. Project Faculty Coordinator
        1. Open the edit form app on a desktop device
        2. Update an existing activity card (with more specific instructions)
        3. Create a new activity card (with more specific instructions)
        4. Open the homepage of the app on a mobile device to view the updated and new activities
  2. Actual Testing
    1. With our current time frame, we will create usability testing plans to be carried out by team members acting as different types of end users. A secondary team member will be present to supervise and record information about the results of the usability tests.
    2. Each type of end user will have a different set of instructions to follow when using the app. Usability test instructions will be similar to the examples written for the ideal testing scenario, with some limitations based on how much functionality we are able to implement by the end of the semester.

Performance, Reliability, etc. Testing

  1. Ideal Testing
    1. With enough time, we would set up automated performance and reliability testing for our website.
    2. There are many existing software tools for this purpose, such as k6 and JMeter (a k6 sketch appears at the end of this section).
    3. Having programmatic performance and reliability tests would be the most comprehensive option, but it would also require more time to set up.
  2. Actual Testing
    1. Realistically, most of our performance and reliability testing will be done by hand.
    2. Most of the app’s features aren’t computationally intensive, so setting up automated performance testing would require a large time commitment for relatively little payoff.
    3. For performance testing, we’ll run through most of our main features on various devices and verify that they take less than one second to run. These features include:
      1. Filtering activities by characteristics (activity type, duration)
      2. Searching for activities by name
      3. Expanding an activity for detailed view
      4. Opening and submitting the activity edit form
    4. For reliability testing, we’ll repeatedly check that the website stays online and functional at intervals of several hours after deployment (an optional automation sketch also appears below).
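
As an example of what the ideal automated approach could look like, here is a minimal k6 load-test sketch; the URL, user count, and duration are placeholders rather than settled values.

```js
// load-test.js — a minimal k6 sketch for load-testing the homepage.
// Run with: k6 run load-test.js
// The URL below is a placeholder for our deployed app.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 simulated concurrent users
  duration: '30s',  // sustained for 30 seconds
};

export default function () {
  const res = http.get('https://our-app.example.com/');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'responds in under 1s': (r) => r.timings.duration < 1000,
  });
  sleep(1);
}
```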
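
And although the reliability checks will mostly be done by hand, a small Node script along the lines of the sketch below could automate the interval checks; the URL and interval are placeholders.

```js
// uptime-check.js — an optional sketch for automating the reliability checks.
// The URL and interval are placeholders; most checks will still be by hand.
const https = require('https');

const URL = 'https://our-app.example.com/';
const INTERVAL_MS = 4 * 60 * 60 * 1000; // check every four hours

function checkOnce() {
  https
    .get(URL, (res) => {
      const ok = res.statusCode === 200;
      console.log(`${new Date().toISOString()} status=${res.statusCode} ${ok ? 'OK' : 'DOWN'}`);
      res.resume(); // drain the response so the socket is released
    })
    .on('error', (err) => {
      console.log(`${new Date().toISOString()} DOWN: ${err.message}`);
    });
}

checkOnce();
setInterval(checkOnce, INTERVAL_MS);
```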

Acceptance Testing

  1. Ideal Testing
    1. In an ideal world with a longer time frame, we would test the app in a classroom setting with our client acting as the “Teacher” end user selecting an activity and using the app to facilitate it among his students.
  2. Actual Testing
    1. We will have a Zoom meeting with our client to test and review the functionality of our app and give him access to all parts of the code and project. To ensure a smooth hand-off of the project, we will make a recording of the meeting for him to review in the future.