Developing the Research Impact Canada (RIC) Logic Model and Evaluation Framework: A Case Study Example

by Anne Bergen 

In 2017-2018, I helped the RIC Evaluation Committee develop a logic model and evaluation framework. This brief case study, originally posted at Research Impact Canada, describes that project.

Summary

This brief case study describes a project with Research Impact Canada (RIC) to create an evaluation framework, beginning with the co-creation of a logic model.

Research Impact Canada (RIC) is a network of 16 universities dedicated to using knowledge mobilization to create research impact for the public good. In this project, a small group (the RIC Evaluation Committee) collaborated through four online workshops to define an evaluation framework that represented the shared knowledge mobilization activities of the larger collaboration and aligned with goals from RIC’s strategic plan.

This work expanded on RIC’s past evaluation approach, which focused on monitoring by counting outputs (the products of RIC activities). The current framework explains how activities link to short-term outcomes of improved knowledge, skills, and attitudes, which in turn contribute to longer-term changes: improved individual practice, organizational capacity, and systems support.

Key Messages of this case study

  • Work in a small group that represents the diverse perspectives of a partnership initiative to collaboratively and iteratively create a logic model and evaluation framework.
  • Create a logic model that maps out target audiences, common activities, what success looks like (outputs, outcomes, impacts), and enabling conditions.
  • Then use that logic model to guide decisions around what and when to measure. Instead of trying to measure everything, focus on what’s important and what’s feasible.

Read the full case study: http://researchimpact.ca/developing-the-research-impact-canada-ric-logic-model-and-evaluation-framework/

Secondary Dissemination

This past year, I taught a number of training workshops about knowledge mobilization, applied research, and evaluation. I have some plans to combine and collate these presentations into something exciting, but in the meantime I’m posting some of the workshop materials here as open access resources.


The first set of workshop slides comes from two full-day workshops hosted by Gambling Research Exchange Ontario (formerly the Ontario Problem Gambling Research Centre). These workshops were part of a series of capacity-building sessions for graduate students and postdoctoral researchers.

The fall 2014 workshop “Skills and Ideas for #ProblemGamblingKTE” was designed to give participants tools to carry out knowledge translation and exchange (KTE) activities. Students in this workshop:

  • practiced identifying target audiences and stakeholder groups and analyzing contextual factors relevant to KTE activities;
  • created and presented plans for engaging and working with stakeholders and decision makers;
  • translated key research messages using plain language writing and selected communication channels and social media tools to maximize research access and impact.

The winter 2015 workshop “Evaluating Problem Gambling KTE” provided participants with tools for monitoring and evaluating the research impact of KTE activities. During this workshop, students:

  • created an evaluation theory of change or logic model for problem gambling KTE;
  • learned to consider goals, values, and context to decide on an appropriate evaluation approach;
  • identified indicators and measures for KTE evaluation.