User testing data analysis

For a certificate program at the University of North Dakota, I planned, created, and tested a course on how to do a yoga handstand at home.

Sample users drawn from my friends, family, and community tried out the course over a period of two and a half weeks. During that time they received 5 direct reminder emails from me and 11 automated emails from the LMS.

I analyzed the results, though they have limited applicability due to the small sample size. Most of the assessments were valid, with difficulty and discrimination indexes in the desired ranges. Two exercises did have unacceptable difficulty and discrimination indexes; they are highlighted in the image below.

One you see here, labelled “Upside Down L at the Wall,” shows all zeroes because no students completed it. Several of the exercises were activities where the learners scored themselves, so they are not part of the data analysis.

Overall, students who attempted an exercise tended to score very well on it, with the exception of the handstand itself, so there were many perfect scores. For an extracurricular skill-building course, that is a desirable outcome.

In the hop-up instructions assessment, my highest-performing student missed a step in the instructions while everyone else got it perfectly. Because the discrimination index penalizes items that top scorers miss but lower scorers answer correctly, the hop-up instructions assessment shows as invalid.

For the hop-up itself, everyone in both the high and low groups got it perfectly, which perhaps means the exercise is less challenging than I anticipated.
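For context on what those indexes measure, here is a minimal sketch of how an item’s difficulty and discrimination indexes can be computed from scored responses. The function name, group sizes, and scores are illustrative assumptions, not my actual data or the method my LMS uses.

```python
def item_indexes(upper_group, lower_group):
    """Compute difficulty and discrimination indexes for one assessment item.

    upper_group / lower_group: lists of 0/1 scores (incorrect/correct) for the
    highest- and lowest-scoring students on the assessment overall.
    """
    all_scores = upper_group + lower_group
    # Difficulty index: proportion of all students who answered the item correctly.
    difficulty = sum(all_scores) / len(all_scores)
    # Discrimination index: how much better the upper group did than the lower group.
    discrimination = (sum(upper_group) / len(upper_group)
                      - sum(lower_group) / len(lower_group))
    return difficulty, discrimination

# Hypothetical example: the top performer misses an item that everyone else
# answers correctly, so the discrimination index comes out negative and the
# item is flagged as invalid.
print(item_indexes(upper_group=[0, 1, 1], lower_group=[1, 1, 1]))
# -> (0.83, -0.33)
```

A difficulty index near 1.0 suggests an item is very easy, and a discrimination index at or below zero flags an item that fails to separate high performers from low performers, which is what happened with the two highlighted exercises.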

The data analysis demonstrates the testing stage of design thinking. After I created my prototype course, actual users tried it out. I analyzed their results on the assessments, and they had the opportunity to provide feedback, like the following response to “Would you recommend this course to someone else?”