The report below tests the prediction algorithm, which generates weights for the heatmap at a given location. Specifically, it tests the code in predict.js, which contains three main functions: getWeights, readTagWeights, and calculateWeights.
In short, getWeights accepts the coordinates of the current map area and makes an API call to fetch the tag matrix for that area. It then calls readTagWeights to get a map of location tags to their corresponding weights and radii, calls calculateWeights to turn the tag matrix into a weight matrix, and returns that weight matrix for the heatmap to display.
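The pipeline described above can be sketched roughly as follows. This is a simplified, synchronous illustration, not the actual predict.js code: the API call is replaced by an injected fetchTagMatrix function, and the tag values and weight formula are made up for the example.

```javascript
// Hypothetical sketch of the getWeights pipeline. fetchTagMatrix stands
// in for the real API call; tag names and weights are illustrative.
function readTagWeights() {
  // Hard-coded map of location tag -> { weight, radius } (example values).
  return new Map([
    ["cafe", { weight: 0.8, radius: 2 }],
    ["park", { weight: 0.5, radius: 3 }],
  ]);
}

function calculateWeights(tagMatrix, tagWeights) {
  // Turn a matrix of tag names into a matrix of numeric weights;
  // unrecognized tags contribute a weight of 0.
  return tagMatrix.map(row =>
    row.map(tag => (tagWeights.has(tag) ? tagWeights.get(tag).weight : 0))
  );
}

function getWeights(bounds, fetchTagMatrix) {
  const tagMatrix = fetchTagMatrix(bounds); // the real code calls the API here
  const tagWeights = readTagWeights();
  return calculateWeights(tagMatrix, tagWeights);
}

// Example: a 2x2 area with two recognized tags and one empty cell.
const fakeFetch = () => [["cafe", "park"], ["", "cafe"]];
console.log(getWeights({ north: 0, south: 0, east: 0, west: 0 }, fakeFetch));
// -> [[0.8, 0.5], [0, 0.8]]
```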
Note: The current test suite covers only the predict.js file. We attempted to test other files with Jest, but were unsuccessful due to compatibility issues between the frameworks we used and those used by the other main developer. We attempted to resolve this, but found that documentation and resources were sparse, and the other developer could not help much. As such, we could only test the parts of the project that we have full control over (predict.js and the heatmap). We will continue to work on this issue in future updates.
The results represent a moderately healthy test suite. We have full coverage of the predict.js file, the tests cover multiple cases (such as empty, zero, sparse, and dense inputs), and they run relatively quickly. The only issues are that it can be difficult to write new test cases (determining expected outputs often requires tedious manual calculations) and that the test code is not terribly concise (especially for the integration test, which uses realistic data and expected outputs).
Out of all the code in predict.js, readTagWeights has the least comprehensive tests, mostly because it is very simple and barely worth testing. The function simply returns a hard-coded map of location tags and their weights and radii. Even if we were to modify this to be updated by a machine learning algorithm (which we plan to do), it would still be very difficult to write a unit test for it, because we wouldn’t know whether the values are correct until we display them on the heatmap and visually confirm the results.
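One middle ground, even with machine-generated values, is to test structural invariants rather than exact values. A hedged sketch, assuming readTagWeights returns a Map of { weight, radius } objects (the function name matches predict.js, but the invariants and value ranges here are assumptions):

```javascript
// Hypothetical invariant checks for the readTagWeights output. Exact
// values can't be verified without visual confirmation on the heatmap,
// but the shape and ranges of the data can be.
function checkTagWeightInvariants(tagWeights) {
  if (tagWeights.size === 0) return false;          // map must not be empty
  for (const [tag, entry] of tagWeights) {
    if (typeof tag !== "string" || tag.length === 0) return false;
    if (!(entry.weight >= 0 && entry.weight <= 1)) return false; // assumes normalized weights
    if (!(entry.radius > 0)) return false;          // radius must be positive
  }
  return true;
}

// Illustrative stand-in for the hard-coded map in predict.js.
const sample = new Map([
  ["cafe", { weight: 0.8, radius: 2 }],
  ["park", { weight: 0.5, radius: 3 }],
]);
console.log(checkTagWeightInvariants(sample)); // -> true
```

A test like this would keep passing after the ML algorithm starts updating the values, as long as the output stays well-formed.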
Our top testing priority is that the data is displayed correctly on the heatmap. Part of this is making sure that the map can still function even if there are errors or the API gives an empty response. This is crucial because the heatmap is the most visible component to users, and a failure could negatively impact the usability and popularity of the site, neither of which is desirable for the client. Components such as the menu and buttons are not as important because the user can still use the main functionalities without them.
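The "keep functioning on errors or empty responses" behavior could look something like the following defensive wrapper. This is a sketch, not the actual predict.js code; safeGetWeights is a hypothetical name, and the fallback of rendering an empty heatmap is one possible design choice.

```javascript
// Hypothetical defensive wrapper: if the API call throws, or returns an
// empty or malformed tag matrix, fall back to an empty weight matrix so
// the heatmap simply renders nothing instead of crashing.
function safeGetWeights(fetchTagMatrix, calculateWeights, tagWeights) {
  let tagMatrix;
  try {
    tagMatrix = fetchTagMatrix();
  } catch (err) {
    return []; // API failure: show an empty heatmap, keep the map usable
  }
  if (!Array.isArray(tagMatrix) || tagMatrix.length === 0) {
    return []; // empty or malformed response
  }
  return calculateWeights(tagMatrix, tagWeights);
}

// Example with a failing fetch:
console.log(safeGetWeights(() => { throw new Error("network"); }, null, null));
// -> []
```

Tests for this priority would then assert that every failure path yields a valid (if empty) matrix rather than an exception.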
Our next priority is that the data is handled correctly once it has been pulled from the API. The prediction algorithm is one of the things that makes the site unique, and it is also one of the main functionalities of the site. Users would not stay for long if the predictions were incorrect too often.
Although we’ve managed to achieve full code coverage, that doesn’t mean we don’t have questions about testing. The biggest issue is that these functions work with large inputs and outputs and very precise numbers, which makes creating tests exceptionally tedious. This might not be an issue in and of itself, but due to the rather mathematical nature of the algorithm, we need to write a lot of tests to check that the formulas are correct. Is there a way to automate test-writing or create bulk tests? We’ll have to look into this for future updates.
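One possible answer to the bulk-testing question is property-based testing: instead of hand-computing an expected matrix for each case, generate many random inputs and check properties that must hold for any input. A sketch under assumptions, using a simplified stand-in for calculateWeights (libraries like fast-check offer this style for Jest, but the idea works with a plain loop):

```javascript
// Simplified stand-in for the real calculateWeights in predict.js:
// maps each tag to its configured weight, or 0 if unrecognized.
function calculateWeights(tagMatrix, tagWeights) {
  return tagMatrix.map(row =>
    row.map(tag => (tagWeights.has(tag) ? tagWeights.get(tag).weight : 0))
  );
}

// Generate a rows x cols matrix of randomly chosen tags.
function randomTagMatrix(rows, cols, tags) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () =>
      tags[Math.floor(Math.random() * tags.length)])
  );
}

const tagWeights = new Map([["cafe", { weight: 0.8, radius: 2 }]]);
const tags = ["cafe", "unknown", ""];
let failures = 0;
for (let i = 0; i < 100; i++) {
  const input = randomTagMatrix(5, 5, tags);
  const output = calculateWeights(input, tagWeights);
  // Property 1: output has the same dimensions as the input.
  if (output.length !== input.length) failures++;
  // Property 2: each cell is either the configured weight or 0.
  for (let r = 0; r < input.length; r++) {
    for (let c = 0; c < input[r].length; c++) {
      const expected = input[r][c] === "cafe" ? 0.8 : 0;
      if (output[r][c] !== expected) failures++;
    }
  }
}
console.log(failures); // -> 0
```

This checks 2,500 cells without a single hand-computed expected matrix, though it only verifies properties we can state in advance, not the full formula.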