CALP Measurement and Evaluation – Learnings from Year One

Posted: 17 May 2017

Author: Odette Lloyd, Community Learning Network



Posted on behalf of the CALP Team at Advanced Education.

Hello everyone.

The CALP Team had the pleasure of attending and participating in all 7 regional meetings this spring, and we have truly appreciated the opportunities to connect with all of you, to hear about the great things happening in the adult learning field, and to see the CALP communities of practice in action. At this round of meetings, our team delivered a cracker barrel session outlining some of the key learnings on outcomes measurement from the 2015/16 Final Reports.

As you know, collecting and reporting on CALP outcomes measures was new for everyone this year, including our team. The change involved developing new tools and processes for all, and we were really impressed with the quality of the work in the first year. We know it was a steep learning curve and we want to thank you!

So, what is evaluation, and why is it so important?

In 2015, the Community Adult Learning Program implemented the CALP Logic Model/Outcomes-based Measurement and Evaluation Framework (see below). The Logic Model articulates a vision and outcomes for the program as a way of measuring the impact of public dollars.

CALP Logic Model (diagram)

This approach allows us to collect the hard data needed to demonstrate the impact that the Community Adult Learning Program is having on our foundational learners.

Equally important, your organizations can also use these measures to identify the effectiveness and impact of certain types of literacy and foundational learning opportunities (e.g. tutoring, learning activities, and courses). Are there some types of learning opportunities that lead to better outcomes for learners? Are some facilitators seeing better results with their learners? This kind of information can help you make future programming decisions.

Five Key Learnings from 2015/16 Final Reports

1. Correlation between Section D – Programming Areas and Section G – Evaluation:

One of our biggest lessons from this cycle was that we are collecting outcomes on individual learning opportunities rather than on individual learners. This is why the total # of Participants reported in each Outcomes category in Section G – Evaluation should correlate with the total # of adult learners reported in Section D – Programming Areas within each of the eight categories in Literacy and Foundational Learning. However, these numbers likely will not correlate with Section E – Adult Learner Demographics.

In Picture 1, Section D – Programming Areas, 10 learners participated in an Adult Literacy Book Club learning activity, and 7 learners participated in an Adult Basic Literacy Reading/Writing course. Therefore, the total number of adult learners in Adult Literacy is 17.

Picture 1: Section D – Programming Areas (image)

In Picture 2, Section G – Evaluation, the “# Participants” for Adult Literacy in each of the two measures shown is also 17. This shows how the totals reported in Section D and Section G should correlate.

Picture 2: Section G – Evaluation table (image)

By collecting data for each learning opportunity rather than for each unique learner (as in Section E – Adult Learner Demographics), organizations are able to see which learning opportunities result in better outcomes and make program planning decisions accordingly. The outcomes need to be tracked for each learning opportunity that a learner participates in, even if they participate in more than one. For example, if a learner participated in three of your family literacy programs, you would measure all of the outcomes for that learner three times – once for each of the family literacy programs she participated in. Maybe the programs are structured differently, or have different facilitators. Over time, you would be able to see if one program resulted in better learner success than another.
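
To make this concrete, here is a minimal, hypothetical Python sketch (the record layout, field names, and numbers are illustrative only, not part of any official CALP tool). It stores one outcome record per learner per learning opportunity, then rolls the records up two ways: by category, giving the shared Section D/Section G total, and by opportunity, to compare programs.

```python
from collections import Counter, defaultdict

# One record per (learner, learning opportunity) pair -- a learner who joins
# three programs appears three times. All field names are illustrative only.
records = (
    [{"category": "Adult Literacy", "opportunity": "Book Club",
      "learner": f"L{i}", "completed": i % 2 == 0} for i in range(10)]
  + [{"category": "Adult Literacy", "opportunity": "Reading/Writing",
      "learner": f"L{i}", "completed": True} for i in range(10, 17)]
)

# Category total: the 17 that Section D and Section G should share.
by_category = Counter(r["category"] for r in records)
print(by_category["Adult Literacy"])  # -> 17

# Per-opportunity rollup: which program produced better completion rates?
rates = defaultdict(lambda: [0, 0])  # opportunity -> [completed, total]
for r in records:
    rates[r["opportunity"]][1] += 1
    rates[r["opportunity"]][0] += r["completed"]

for opp, (done, total) in rates.items():
    print(f"{opp}: {done}/{total} completed ({done / total:.0%})")
```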

2. “Participant” vs. “Registrant” vs. “Completer” vs. “Respondent”:

Participants are the total number of individuals who attended a meaningful number of the sessions within a learning opportunity, which must be enough for facilitators or instructors to observe and measure progress. Individuals do not have to participate in all sessions or complete the program to count as participants. Organizations should create a business rule for what qualifies an individual to be counted as a “participant” and empower facilitators/instructors to use their judgement.

If 10 individuals register for a program but only 7 show up for the first session and stay for most of the learning opportunity, you would report 7 total participants, not 10. The number of registrants in this case is irrelevant: it does not matter how many registered for a program, only how many attended enough sessions for facilitators to observe progress.

Completing a learning opportunity is only one of the measures that we report. Completing does not necessarily require 100% attendance, as completing may also mean a learner achieved his/her personal goal as identified in a learning plan. Organizations should create a business rule for what qualifies as “completing” and empower facilitators/instructors to use their judgement.

Respondents are those individuals who reply to your survey or feedback form at the end of the learning opportunity. As a general rule, surveys should be only one mechanism for receiving input from learners. Many of the measures are observable, and your facilitators/instructors can watch for changes in behaviour throughout the learning opportunity and keep track. The risk of relying on surveys or feedback forms as the primary data collection mechanism is that you will end up with incomplete data if some individuals do not complete a survey. For example, if there were 10 participants in your learning opportunity but only 5 responded to your survey, you can really only report on outcomes for those 5 participants.
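
As a rough illustration of how the four counts relate, here is a hypothetical Python sketch. The 50% attendance threshold stands in for whatever business rule your organization sets, and all names and numbers are made up.

```python
sessions_total = 10
learners = [
    # (name, sessions_attended, met_personal_goal, returned_survey)
    ("A", 9, True, True),
    ("B", 6, False, True),
    ("C", 1, False, False),  # dropped out after one session
    ("D", 0, False, False),  # registered but never attended
]

registrants = len(learners)
# Example business rule: attended at least half the sessions.
participants = [l for l in learners if l[1] >= sessions_total * 0.5]
# Example business rule: achieved the goal in their learning plan.
completers = [l for l in participants if l[2]]
respondents = [l for l in participants if l[3]]

print(f"Registrants: {registrants}, Participants: {len(participants)}, "
      f"Completers: {len(completers)}, Respondents: {len(respondents)}")
# -> Registrants: 4, Participants: 2, Completers: 1, Respondents: 2
```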

3. When # of Participants Varies Within a Category:

The # of Participants should not change from measure to measure within a category (e.g. Adult Literacy, ELL, Basic Computer Skills, etc.). The # of Participants is the denominator for calculating percentages for many of the measures. Only the numerators should change from measure to measure.
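
In spreadsheet or code terms, the rule is simply a fixed denominator. Assuming the 17 Adult Literacy participants from the example above (the measure names below are illustrative):

```python
participants = 17  # fixed denominator for every measure in the category

# Only the numerators vary from measure to measure (names illustrative).
numerators = {
    "completed the learning opportunity": 14,
    "reported increased confidence": 12,
}

for measure, n in numerators.items():
    print(f"{measure}: {n}/{participants} = {n / participants:.0%}")
```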

4. Not Tracked vs. Zeros:

Organizations that are not able to track a particular outcome should indicate “Not Tracked” or “N/A”. Entering a zero instead implies the measure was tracked and that no one achieved the outcome, which reads as ineffectiveness rather than missing data. 2015/16 was a learning year, and there were cases where organizations were not able to track some measures. Going forward, organizations need to track all measures to ensure a level of consistency province-wide.
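
In data terms, “Not Tracked” and zero are different values: a zero is a measured result, while an untracked measure has no result at all. A minimal sketch of the distinction (field names illustrative):

```python
# None marks a measure that was not tracked; 0 is a real, measured result.
measures = {
    "achieved personal goal": 0,      # tracked: no one achieved it this term
    "heard back from learner": None,  # not tracked -> report "Not Tracked"/"N/A"
}

for name, value in measures.items():
    print(f"{name}: {'Not Tracked' if value is None else value}")
```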

5. Heard Back:

We do not expect ‘formal’ follow-up with learners, such as tracking learners down for phone/email/in-person follow-ups after a learning opportunity has ended. We would only expect this information to come from an ongoing relationship with a learner, perhaps through informal conversation. We created this measure recognizing that you will likely not hear back from all participants.


We are committed to seeking input from your organizations about the Logic Model, to ensure we are on track and asking the right questions. We also want to know about challenges you faced in collecting some of the new data. In the coming months, we will be reaching out through surveys and/or focus groups to gather more information.

In addition to your Regional Support Staff and grant manager, the following resources are available to support you:

  • Review the CALP Measurement and Evaluation Guide, available on the CALP Portal and emailed with the Final Report template. This document provides a description of each measure, as well as some real-life examples.
  • We would also encourage your organizations to review the CLN’s e-Learning on the CALP Portal called Outcomes-based Measurement and Evaluation. It was designed specifically around the Community Adult Learning Program; it walks through the Logic Model and the outcomes measures, and provides tools collected from the system to help you track information.
  • The CALP Database is also a great tool for tracking and reporting CALP data and information.

Thanks for reading. If you have any comments, please share with us below!

CALP Team
Advanced Education
