D3: Test Plan

If We Had All the Time in the World

Unit Testing

For our database, we would use Pytest to unit test the service files that interface with the database. Each function would have its own test case to ensure it does what is needed and returns the proper values.

Manual Testing

For our API, we would test each endpoint using FastAPI’s docs page. This helps us ensure each endpoint calls the correct service functions, accepts the correct data types, and returns the correct response.

For our machine learning algorithm, we would simply test whether it generates images from the input data, as our client has expressed that accuracy is not a priority.

Integration and System Testing

We would want to test that our app does everything we want it to: registering new users, retrieving user data, generating images, and receiving data from the headset. We would likely do this by using our app on all the platforms we want to support: iPhone, Android phone, and a PC web app.

Performance/Reliability Testing

Given time, we would test our app with a small number of users for a few weeks, see how it holds up, and make the necessary adjustments. Once the app performs well with that number of users, we would scale the user count up until we are confident the app performs with the number of users the client wants it to support.

Acceptance Testing

Given time, we would give our app to the client and let them use it for a few weeks. After that usage period, we would collect feedback and implement any changes the client wants, with anything urgent being addressed before the usage period ends. We would repeat this cycle until the client is satisfied with the product.

What We’re Actually Doing

Unit Testing

We’re going to do unit testing the same way as described above:

For our database, we’ll use Pytest to test the service files that interface with the database. Each function will have its own test case to ensure it does what is needed and returns the proper values.
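
As a rough sketch of what one of these tests might look like (the create_user function below is a stand-in for one of our service functions, not actual project code):

    import pytest

    # Stand-in for a service function; in the real project this would live
    # in a service file and talk to the actual database.
    def create_user(db: dict, email: str) -> int:
        if any(user["email"] == email for user in db.values()):
            raise ValueError("duplicate email")
        user_id = len(db) + 1
        db[user_id] = {"email": email}
        return user_id

    @pytest.fixture
    def db():
        # Fresh in-memory "database" for each test
        return {}

    def test_create_user_returns_an_id(db):
        assert create_user(db, "a@example.com") == 1

    def test_create_user_rejects_duplicate_email(db):
        create_user(db, "a@example.com")
        with pytest.raises(ValueError):
            create_user(db, "a@example.com")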

Manual Testing

For our API, we’ll test each endpoint using FastAPI’s docs page. This helps us ensure each endpoint calls the correct service functions, accepts the correct data types, and returns the correct response.
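
To illustrate, here is a minimal sketch of the kind of endpoint we’d exercise this way (the route and payload are illustrative, not our final API):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class UserIn(BaseModel):
        email: str

    @app.post("/users")
    def register_user(user: UserIn):
        # In the real app this would call the matching service function.
        return {"id": 1, "email": user.email}

Running the app under uvicorn and opening the /docs page gives us an interactive form for calling this endpoint and inspecting its response.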

For our machine learning algorithm, we’ll simply test whether it generates images from the input data, as our client has expressed that accuracy is not a priority. Due to the team’s limited knowledge of AI, we will probably test the major functions of the DreamDiffusion framework (e.g., the function responsible for generating images) rather than testing helper functions and other modular pieces of code.
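
A sketch of the kind of smoke test we have in mind (generate_image below is a placeholder for DreamDiffusion’s top-level generation call; we are not assuming the framework’s real API here):

    import numpy as np

    def generate_image(eeg: np.ndarray) -> np.ndarray:
        # Placeholder for the DreamDiffusion generation call.
        return np.zeros((512, 512, 3), dtype=np.uint8)

    def test_generation_returns_an_image():
        eeg = np.random.randn(4, 256)  # fake 4-channel EEG window
        image = generate_image(eeg)
        # We only check that an image-shaped array comes back, since the
        # client has said accuracy is not critical.
        assert image.ndim == 3 and image.shape[-1] == 3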

Integration and System Testing

We’ll want to test that our app does everything we want it to: registering new users, retrieving user data, generating images, and receiving data from the headset. We’ll do this by using our app on all the platforms we want to support: iPhone, Android phone, and a PC web app. For this, we’ll use Expo Go to test on our own devices that everything works properly when integrated together.
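
One integration check might look like the following sketch: register a user through the running API, then read the same user back (endpoint paths and payloads are illustrative):

    import requests

    BASE = "http://localhost:8000"  # local dev server

    def test_register_then_fetch_user():
        created = requests.post(f"{BASE}/users", json={"email": "it@example.com"})
        assert created.status_code == 200
        user_id = created.json()["id"]

        fetched = requests.get(f"{BASE}/users/{user_id}")
        assert fetched.status_code == 200
        assert fetched.json()["email"] == "it@example.com"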

As for the AI portion of our project, our main objective will be to verify the seamless integration of the DreamDiffusion framework into our application. We will ensure that the framework can communicate with the rest of our app via exposed APIs and that the data recorded through the Muse headset’s SDK is properly propagated to the AI model.
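
A sketch of the headset-to-model path we want to verify (the /generate endpoint and payload shape are assumptions for illustration):

    import requests

    BASE = "http://localhost:8000"

    def test_headset_data_reaches_model():
        # Stand-in for samples recorded through the Muse SDK.
        fake_eeg = [[0.1, 0.2, 0.3, 0.4]] * 4
        resp = requests.post(f"{BASE}/generate", json={"eeg": fake_eeg})
        assert resp.status_code == 200
        assert "image_url" in resp.json()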

Performance/Reliability Testing

Since we have a limited amount of time, we can only really test whether our app works properly on the small number of devices available to us. We will simulate server errors and handle them accordingly. We will also simulate concurrent user sessions to stress test the application, though given the time constraints we can only simulate two to three concurrent sessions.
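
A small sketch of how we might simulate those two to three concurrent sessions (the endpoint is illustrative):

    from concurrent.futures import ThreadPoolExecutor
    import requests

    BASE = "http://localhost:8000"

    def one_session(user_id: int) -> int:
        # One simulated session: fetch a user's data, report the status code.
        return requests.get(f"{BASE}/users/{user_id}").status_code

    def test_three_concurrent_sessions():
        with ThreadPoolExecutor(max_workers=3) as pool:
            statuses = list(pool.map(one_session, [1, 2, 3]))
        assert all(status == 200 for status in statuses)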

As of right now, the team plans to host the AI model on Google Cloud Platform. Therefore, we plan to test how well the app behaves under varying traffic volumes, when given inconsistent or undesirable data (brainwaves), and under other server-related issues.
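
For the inconsistent-data case, one sketch (again assuming a hypothetical /generate endpoint) is to send a payload that is not a valid signal and check that the server rejects it cleanly instead of crashing:

    import requests

    BASE = "http://localhost:8000"

    def test_malformed_eeg_is_rejected_gracefully():
        resp = requests.post(f"{BASE}/generate", json={"eeg": "not-a-signal"})
        # FastAPI's request validation should return 422, not a server crash.
        assert resp.status_code == 422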

Acceptance Testing

We want this testing to confirm that the application meets the specified requirements. We’ll confirm that the application aligns with the acceptance criteria, collaborating with stakeholders to refine those criteria as needed. We will validate end-to-end functionality, including the accuracy of brainwave-to-image conversion as defined by our stakeholders. Overall, we plan to simulate a production-like environment to determine whether the application is production ready.

Tools Used

Pytest: an automated testing framework for Python (in the vein of JUnit). It allows prewritten tests to be run automatically.

FastAPI docs page: an interactive page, served when the app is run with uvicorn, that provides a UI for testing API endpoints.

Types of End Users

Caregivers will be the main users of our app. They are college students working with CareYaya to take care of an elderly person (hereafter referred to as the “patient”). They’ll use our app in conjunction with the Muse headset to generate images based on the dreams of their patient and to start conversations with them.

Doctors and researchers will be the administrative users of our app. They are CareYaya employees who want to use the data collected by Muse headsets to monitor the health of the patients. They’ll use our app to view that data.