To develop a framework and process for evaluation that is sustainable and applicable across the institution’s training programs, we recommend the Framework for Program Evaluation described by the Centers for Disease Control and Prevention.

For T32 programs interested in improving their program evaluations, we suggest the following steps:

1) Engage key stakeholders, including those involved in program operations (e.g., Division/Department chairs, other university/partner institutions, support staff), those served by the program (e.g., trainees, mentors), and those who will use the evaluation data (e.g., program directors, funding agencies). These individuals will be critical throughout the development and evaluation process for providing data about the importance, goals, needs, and impact of the programs.

2) Describe the program. This should involve creating a logic model with program theory that details the process by which you believe the program will achieve its long-term outcomes (see the Figure 1 snippet below), including clearly defining the program's inputs, activities, outputs, and short-term and long-term outcomes.

Figure 1. Snippet of a T32 logic model.
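
If it is useful to work with the logic model programmatically (for example, to keep it alongside evaluation data), its components can be captured in a simple data structure. The Python sketch below is a minimal illustration; the entries are hypothetical placeholders, not the contents of Figure 1.

    # A minimal sketch of a logic model captured as a plain data structure.
    # The entries below are hypothetical placeholders, not the content of Figure 1.
    logic_model = {
        "inputs": ["NIH T32 funding", "faculty mentors", "program staff"],
        "activities": ["coursework", "mentored research", "grant-writing workshops"],
        "outputs": ["trainees enrolled", "workshops delivered", "manuscripts submitted"],
        "short_term_outcomes": ["improved grant-writing skills", "expanded mentoring networks"],
        "long_term_outcomes": ["research career placement", "independent funding awarded"],
    }

    # Print each component so the model can be reviewed with stakeholders.
    for component, items in logic_model.items():
        print(f"{component}: {', '.join(items)}")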

 

3) Focus the evaluation design, including clearly defining the purpose of the evaluation, its intended users, and the potential uses of the results (e.g., documenting success, identifying program improvements, determining resource allocation, soliciting additional funds).

4) Gather credible evidence by establishing indicators for the activities and outcomes identified during the evaluation design, determining which existing indicators to use and which to develop, selecting and developing related data collection methods, pilot testing new tools, developing a protocol for data collection, and collecting the necessary data.

  • Outcomes evaluated could include research grant applications, grant awards, job placement, career satisfaction, time-to-degree, fellowships, and publications and presentations (https://www.nigms.nih.gov/about/dima/Pages/reports.aspx). Throughout this process, work with key stakeholders as needed and utilize strategies to promote quality through rigor, reliability, validity, and trustworthiness.
  • Diversity and inclusion outcomes are important to evaluate, from applications to trainee selection to retention and program completion. Programs may also want to assess climate through a diversity and inclusion lens; however, it is important to conduct these analyses at a high level to protect identities and ensure accurate reporting, as small programs may not have a large enough sample to provide anonymity. Nonetheless, programs may choose to assess related program characteristics such as support and sense of belonging. (A brief sketch of indicator aggregation and small-cell suppression follows this list.)
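
As a rough illustration of how indicators might be aggregated and reported, the Python sketch below summarizes a few of the outcomes mentioned above from hypothetical trainee-level records and suppresses any cohort too small to report safely. The column names, values, and minimum cell size are assumptions for illustration, not prescribed indicators or a required threshold.

    import pandas as pd

    # Hypothetical trainee-level records; the column names and values are
    # placeholders for whatever the program's data collection protocol captures.
    records = pd.DataFrame({
        "trainee_id": [1, 2, 3, 4, 5, 6],
        "cohort_year": [2019, 2019, 2020, 2020, 2021, 2021],
        "grant_applications": [2, 1, 0, 3, 1, 2],
        "grant_awards": [1, 0, 0, 1, 0, 1],
        "publications": [3, 2, 1, 4, 0, 2],
        "time_to_degree_years": [5.5, 6.0, None, None, None, None],  # None = still enrolled
    })

    # Aggregate a few outcome indicators by cohort.
    indicators = records.groupby("cohort_year").agg(
        trainees=("trainee_id", "count"),
        grant_applications=("grant_applications", "sum"),
        grant_awards=("grant_awards", "sum"),
        publications=("publications", "sum"),
        median_time_to_degree=("time_to_degree_years", "median"),
    )
    print(indicators)

    # Suppress any cohort too small to report without risking re-identification.
    # The threshold of 5 is an assumption for illustration, not a policy requirement.
    MIN_CELL_SIZE = 5
    reportable = indicators[indicators["trainees"] >= MIN_CELL_SIZE]
    print(reportable)

Reporting only cells that meet a minimum size is one common way to reduce re-identification risk when subgroups are small.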

5) Justify conclusions once data are analyzed, using error checking, contextualization, triangulation with the literature and peer programs, divergent thinking, comparative analysis (e.g., against existing standards, previous years, or intended outcomes), documentation of bias, and disclosure of limitations.
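
To make the comparative-analysis piece of this step concrete, the short Python sketch below checks a single hypothetical indicator against the average of previous years and an intended target; the numbers are invented for illustration and are not drawn from any program.

    # A small sketch of comparative analysis for a single indicator: comparing the
    # current year against an average of previous years and an intended target.
    # All numbers are invented for illustration.
    prior_years = {2019: 4, 2020: 6, 2021: 5}   # e.g., grant applications submitted per year
    current_year_value = 7
    intended_target = 6

    baseline = sum(prior_years.values()) / len(prior_years)
    difference = current_year_value - baseline
    target_met = current_year_value >= intended_target

    print(f"Prior-year average: {baseline:.1f}")
    print(f"Current year: {current_year_value} ({difference:+.1f} vs. baseline; "
          f"target {'met' if target_met else 'not met'})")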

6) Ensure use of evaluation data and share lessons learned to improve your program, demonstrate your success, identify new partners for collaboration, and promote your approach to other related pre- and postdoctoral training programs.

The systematic, iterative, evidence-based approach to program evaluation described here will help ensure that programs are designed to align with the needs of trainees, the unit, the university, and the job market, and to remain contemporary over time. Involving stakeholders across multiple programs improves the likelihood that the framework and the subsequently designed evaluation tools will be applicable across a wide range of pre- and postdoctoral training programs. This process will also help identify areas in which tailored evaluations are necessary due to discipline- or program-specific goals.