by Anne Bergen
In 2017-2018, I helped the RIC Evaluation Committee develop a logic model and evaluation framework. This brief case study, originally posted at Research Impact Canada, describes that project.
Summary
This case study describes a project working with Research Impact Canada (RIC) to create an evaluation framework, beginning with co-creation of a logic model.
Research Impact Canada (RIC) is a network of 16 universities dedicated to using knowledge mobilization to create research impact for the public good. In this project, a small group (the RIC Evaluation Committee) collaborated through four online workshops to define an evaluation framework that represented the shared knowledge mobilization activities of the larger collaboration and aligned with goals from RIC’s strategic plan.
This work expanded on RIC’s past evaluation approach, which focused on monitoring by counting outputs (the products of RIC activities). The current framework explains how activities link to short-term outcomes of improved knowledge, skills, and attitudes, which in turn contribute to longer-term changes: improved individual practice, organizational capacity, and systems support.
Key Messages of this case study
- Work together in a small group that represents the diverse perspectives of a partnership initiative to create a collaborative, iterative logic model and evaluation framework.
- Create a logic model that maps out target audiences, common activities, what success looks like (outputs, outcomes, impacts), and enabling conditions.
- Then use that logic model to guide decisions about what and when to measure. Instead of trying to measure everything, focus on what’s important and what’s feasible.
Read the full case study: http://researchimpact.ca/developing-the-research-impact-canada-ric-logic-model-and-evaluation-framework/