Evaluation planning: A
resource for addiction and
mental health researchers
February 2019
Last revised: December 07, 2021
Table of contents
Project team
Logic models
Evaluation plan and framework
    Elements of an evaluation plan
    Developing an evaluation plan
        Evaluation objectives
        Evaluation questions
        Evaluation outputs, outcomes, and indicators
        Developing evaluation methods
        Defining roles and responsibilities and developing an evaluation timeline
    Evaluation framework
Evaluability assessment
    What is evaluability assessment?
    How do you conduct an evaluability assessment?
    Steps to conducting an evaluability assessment
        Step 1: Determine the scope and purpose of the EA
        Step 2: Identify stakeholders and intended users of the program
        Step 3: Identify and review relevant documents
        Step 4: Conduct stakeholder interviews and observations
        Step 5: Assess evaluability
        Step 6: Draw conclusions and recommendations
References
Project team
Project sponsor
Neha Batra-Garga, Manager
Knowledge Exchange, Provincial Addiction and Mental Health
Prepared by
Michelle Chan, Senior Research Officer
Knowledge Exchange, Provincial Addiction and Mental Health
Contact
For more information, contact:
Knowledge Exchange, Provincial Addiction and Mental Health
Alberta Health Services
© 2021, Alberta Health Services, Provincial Addiction and Mental Health Knowledge Exchange. This material is protected by
Canadian and other international copyright laws. All rights reserved. This material may not be copied, published, distributed or
reproduced in any way in whole or in part without the express written permission of Alberta Health Services (contact Kerry Bales via
Anita Lal at anita.lal@ahs.ca). This material is intended for general information only and is provided on an "as is", "where is" basis.
Although reasonable efforts were made to confirm the accuracy of the information, Alberta Health Services does not make any
representation or warranty, express, implied or statutory, as to the accuracy, reliability, completeness, applicability or fitness for a
particular purpose of such information. This material is not a substitute for the advice of a qualified health professional. Alberta
Health Services expressly disclaims all liability for the use of these materials, and for any claims, actions, demands or suits arising
from such use.
For citation purposes, use the following format:
Alberta Health Services. (2019). Evaluation planning: A resource for addiction and mental
health researchers. Edmonton, AB: Author.
Logic models
A logic model is a visual depiction of the “if-then” (causal) relationship between a
program’s inputs, activities, and what the program intends to accomplish. As a program
evolves and changes, so should the model.
If we do these activities, then certain outputs and outcomes can be expected:

Inputs → Activities → Outputs → Outcomes
“The purpose of a logic model is to provide stakeholders with a road map describing the
sequence of related events connecting the need for the planned program with the program’s
desired results.” (W.K. Kellogg Foundation, 2004, p. 3)
A logic model can be used for:
Planning a new program
Describing an existing program
Ensuring the activities of a program link to the expected outcomes
Identifying the outcomes that will be used to measure success
A logic model typically includes the following components (W.K. Kellogg Foundation, 2004):
Goal: what the program is trying to accomplish
Target population: who the program is being delivered to
Inputs: the resources that go into a program (e.g., funding, staff, infrastructure)
Activities: the processes, tools, events, and actions that take place as part of the program
Outputs: the tangible, direct products of program activities. They can usually be measured or counted; for example, number of workshops, number of attendees
Outcomes: the expected changes that result from program activities
o Short-term outcomes refer to changes in individuals’ knowledge or skills
o Intermediate outcomes refer to changes in individuals’ behaviours or attitudes
o Long-term outcomes (impact) refer to changes in an organization, community, or client system
Assumptions: what is needed to support continuation of the program
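For teams that like to keep planning materials in a structured, machine-readable form, the components above can also be captured in a simple data structure. The sketch below is a minimal illustration; the class and field names are our own, not part of any standard logic model template.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    # Illustrative structure mirroring the components listed above.
    goal: str                  # what the program is trying to accomplish
    target_population: str     # who the program is being delivered to
    inputs: List[str] = field(default_factory=list)        # resources (funding, staff, infrastructure)
    activities: List[str] = field(default_factory=list)    # processes, tools, events, actions
    outputs: List[str] = field(default_factory=list)       # tangible, countable products
    short_term_outcomes: List[str] = field(default_factory=list)    # changes in knowledge or skills
    intermediate_outcomes: List[str] = field(default_factory=list)  # changes in behaviours or attitudes
    long_term_outcomes: List[str] = field(default_factory=list)     # changes in organization or community
    assumptions: List[str] = field(default_factory=list)
    external_factors: List[str] = field(default_factory=list)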
A logic model may also have an accompanying narrative that describes the model in more
detail.
A logic model can be simple or complex depending on the program or service being described.
A model can be designed using a number of formats, such as a flow-chart (Figure 1) or table
(Figure 2). Regardless of the style chosen, the model should link logically from inputs to
activities to outputs to outcomes. In its simplest form, a logic model describes what you
do, how you do it, and the results you hope to achieve.
Figure 1. Basic logic model

Inputs (what you invest): money for groceries
Activities (what you do): make lunch
Outputs (what is produced): # of children fed
Outcomes (what are the results): decrease in hunger

Figure 2. More detailed logic model

Program goal
Inputs (examples): money, staff, capital assets, operational expenses
Activities (examples): teaching, presentations, counselling, mentoring, treatment
Outputs (examples, usually quantifiable): materials distributed, clients served, hours of service, sessions taught
Outcomes:
o Short-term (learning): attitudes, skills, motivation, knowledge, awareness, opinions
o Intermediate (action): behaviour, practice, decisions, policies
o Long-term (effect on participants): social, economic, civic
A logic model may also include assumptions and external factors that may affect the program.
Assumptions are underlying beliefs about how and why a program will work and the people
involved in the program. For example:
Beliefs about the target population
Beliefs about how clients will respond to treatment (e.g., confidence in research and best
practice literature)
Underlying need for the program
External factors are factors that are beyond a program’s control that may affect how a program
is implemented and operated, and its outcomes. For example:
Political environment
Social conditions
Economic situation
Geographic constraints
Outside initiatives or policies
Refer to the logic model in Figure 1 as an example of how inputs, activities, outputs, and
outcomes are logically linked. The model outlines how money for groceries (input) is needed to
make lunch (activity). If you accomplish the activity of making lunch, then a certain number of
children will be fed (output), which results in a decrease in hunger (outcome).
Some of the underlying assumptions upon which the activity of making lunch is expected to lead
to a decrease in hunger are:
You can find/get groceries.
You have the necessary equipment to make lunch.
Some external factors that may affect this relationship include:
The price of groceries
A food shortage/food rationing
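Continuing the illustrative LogicModel sketch from the logic models section, the lunch example could be recorded like this:

lunch_program = LogicModel(
    goal="Decrease hunger among children",
    target_population="Children served by the lunch program",
    inputs=["money for groceries"],
    activities=["make lunch"],
    outputs=["number of children fed"],
    short_term_outcomes=["decrease in hunger"],
    assumptions=[
        "you can find/get groceries",
        "you have the necessary equipment to make lunch",
    ],
    external_factors=[
        "the price of groceries",
        "a food shortage/food rationing",
    ],
)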
Evaluation plan and framework
An evaluation plan describes the overall approach or strategy that will guide the
evaluation. An evaluation can be undertaken on any group of activities intended to
achieve specific outcomes, such as a program, project, initiative, intervention, policy, or
strategy. This document focuses on program evaluation but also applies to the
evaluation of any of these activities.
Typically, a logic model is developed first, followed by an evaluation plan. An evaluation
plan answers the following questions (Lavinghouze & Snyder, 2013):
Why is the evaluation being conducted?
What will be done?
Who will do it?
When will it be done?
How will evaluation findings likely be used?
Elements of an evaluation plan
An evaluation plan often includes the following sections.
Program background: Describes the program that is being evaluated and its goals and objectives.
Goals and objectives: Describes the purpose of the evaluation. It specifies the goals, objectives, and the intended audience/stakeholders.
Scope: Describes what will be undertaken as part of the evaluation, and what is considered to be out of scope.
Evaluation questions, outcomes, and indicators: These are often included in an evaluation framework.
Methods: Describes how evaluation data will be collected.
Budget: Describes what financial resources are available for the evaluation.
Reporting: Describes the strategy for sharing results and developing recommendations.
Roles and responsibilities: Describes who will be involved in the evaluation and how.
Timeline: Outlines a schedule for developing the evaluation plan, data collection, data analysis, and reporting of results.
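While drafting, it can help to hold these sections in a simple outline so nothing is overlooked. The dictionary below is one hypothetical way to do that; the keys simply mirror the sections above and are not a required format.

evaluation_plan = {
    "program_background": "",                       # program, goals, and objectives
    "goals_and_objectives": "",                     # purpose of the evaluation and its audience
    "scope": {"in_scope": [], "out_of_scope": []},
    "questions_outcomes_indicators": [],            # often kept in the evaluation framework
    "methods": [],                                  # how evaluation data will be collected
    "budget": "",
    "reporting": "",                                # strategy for sharing results
    "roles_and_responsibilities": {},
    "timeline": [],
}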
Developing an evaluation plan
An overview of some important elements of an evaluation plan is provided below.
Evaluation objectives
Evaluation objectives are what the evaluation will achieve and are directly linked to the
goals of the program being evaluated.
Evaluation questions
Evaluation questions are linked to evaluation objectives, specific program outcomes,
and measures (indicators).
Evaluation questions identify important issues for program decisions. These questions
may focus on:
Planning and implementation issues. For example: How well was the program
planned? How well was that plan put into practice?
Reaching program objectives. For example: How well has the program met its
stated objectives?
The impact of the program on participants and/or the community. For example:
What difference has the program made to its intended targets or the community
as a whole? (Center for Community Health and Development, 2018)
Evaluation outputs, outcomes, and indicators
The evaluator then needs to determine how these questions will be answered. To do
this, they identify key outputs and outcomes for each question, and develop
indicators for them.
Outputs are the direct products that result from program activities. They can usually be
measured or counted. For example, number of workshops, number of attendees.
Outcomes are the expected changes in attitudes, behaviours, knowledge, skills, status,
or level of functioning as a result of program activities.
Indicators are specific measures that demonstrate whether goals or objectives have
been achieved. They can be proxies for goals and objectives that are not directly
observable or measurable (e.g., using unemployment rate as a proxy indicator of a
country’s economic state). Indicators answer the question “How will I know when the
outcome happens?”
Examples of indicators of program outputs:
Number of clients seen
Participation rate
Levels of client satisfaction
Amount of intervention exposure
Examples of indicators of program outcomes:
Changes in participant behaviour
Changes in community norms, policies or practices
Changes in participant health status and/or quality of life
Changes in settings or environment around the program
When developing evaluation indicators, the following criteria should be met:
Indicators must be relevant to outputs and outcomes that have been identified.
Indicators must abide by ethical standards for research and evaluation (for
example, the Tri-Council Guidelines).
Indicators must be valid. In other words, does the indicator measure what it is
supposed to?
Indicators must be reliable. In other words, does the indicator produce consistent
results?
Tips for developing indicators:
Multiple indicators are often needed to track the implementation and effects of a
program.
A program logic model can be a useful guide for developing evaluation
indicators.
Information about evaluation questions, outcomes, and indicators is often
captured in an evaluation framework (Centers for Disease Control and
Prevention, 1999).
Developing evaluation methods
An evaluation plan should describe how data will be collected. This section of the
evaluation plan may include:
A description of participants
Sampling techniques
Participant recruitment strategy
Consent processes and how ethical concerns such as confidentiality will be
addressed
Data collection methods (such as interviews, focus groups, surveys)
Whether collected data will be qualitative, quantitative, or both
A description of any available baseline measures
A description of how data will be stored and how long it will be retained
Depending on the type of data you plan to use, you may need to provide additional
information. For example:
If data will be extracted from an existing database, include a description of the
database.
If document reviews are used, include an overview of this process.
If the evaluation will include third party data, include a description of how data
will be obtained, as well as a description of any information sharing agreements.
Common evaluation data sources
Existing information
Program documents (such as logs, meeting minutes, annual reports, proposals,
and project and grant records)
Existing administrative databases (such as Statistics Canada)
Media records
Public service and business records (such as social and health agency data or
student performance records)
Other evaluations of the same or similar programs
People
Program participants
Program staff, administrators, and volunteers
Stakeholders (such as community members and community leaders)
Key informants
Collaborators
Funders
Observations
Observations of program events and activities (such as recording the
characteristics, interaction patterns, and skill development of program
participants)
Observations of practices (e.g., health care practices)
Observations of verbal and non-verbal behaviours (such as people working
together as a team) (Taylor-Powell, 2002)
Common data collection methods
Surveys
Interviews
Checklists
Tests
Summaries of records/documents
Focus groups
Extractions from administrative datasets (Posavac, 2016)
Defining roles and responsibilities and developing an evaluation timeline
An evaluation plan needs to describe who will do what, when, and
how. This includes who will:
Carry out literature reviews and/or collect background information
Develop tools, instruments, and consent procedures
Collect, enter, and analyze data
Write up and share results
Each of these tasks needs to be considered in the overall timeline of the evaluation.
When setting up the timeline, consider:
When the evaluation needs to begin and end
An evaluation framework (useful in providing an overview of tasks that need to be
completed)
Due dates for feedback and reports
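A simple task list is often enough to track these elements. The sketch below is purely illustrative; the tasks, owners, and dates are hypothetical.

from datetime import date

evaluation_timeline = [
    # (task, responsible person, due date) - all entries are illustrative
    ("Literature review and background information", "research officer", date(2022, 1, 14)),
    ("Develop tools, instruments, and consent procedures", "evaluation lead", date(2022, 2, 25)),
    ("Collect, enter, and analyze data", "evaluation team", date(2022, 5, 31)),
    ("Write up and share results", "evaluation lead", date(2022, 6, 30)),
]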
Evaluation framework
An evaluation framework is a tool used to organize and link evaluation questions,
outcomes or outputs, indicators, data sources, and data collection methods. The
evaluation framework below describes an evaluation of a collaborative care model
introduced to improve care for mental health patients.
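A framework is typically laid out as a table, with one row per evaluation question. The row below is a hypothetical illustration of the structure for such an evaluation; the question, outcome, indicator, sources, and methods shown are examples rather than prescribed content.

Evaluation question: Has the collaborative care model improved patients' access to mental health care?
Outcome: Reduced wait times for mental health services
Indicator(s): Median number of days from referral to first appointment
Data source(s): Administrative scheduling data; program records
Data collection method(s): Extraction from administrative datasets; document review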
Evaluability assessment
What is evaluability assessment?
Evaluability assessment (EA) examines the extent to which a program* can be evaluated in a
reliable and credible fashion. The results of an EA help identify whether the evaluation of a
program is justified, feasible, and likely to provide useful information (Kaufman-Levy & Poulin,
2003).
How do you conduct an evaluability assessment?
An EA is conducted through document reviews, interviews with stakeholders, and observations
(such as site visits).
Documents should be reviewed for:
The program’s history, goals/objectives, scope, design, and operation
Whether the program has a theory of change that highlights how specific activities will
lead to expected outcomes
Whether there are sufficient resources to do an evaluation
Timelines for the evaluation (Peersman et al., 2015; Trevisan & Huang, 2003; USAID,
2017; Davies, 2013; Leviton et al., 2010; Kaufman-Levy & Poulin, 2003)
Interviews with stakeholders allow the EA team to:
Get clarification about the program and any assumptions that were made in its design
(Trevisan & Huang, 2003; Public Health Ontario et al., 2018).
Confirm the purpose of the evaluation and how the results of the evaluation will be used
(Public Health Ontario et al., 2018).
Observations, such as site visits, allow the EA team to witness the program in action. This
allows the EA team to compare what was said about the program with how the program
operates (Kaufman-Levy & Poulin, 2003; Public Health Ontario et al., 2018).
The EA should be led by independent, third-party evaluators. To minimize bias and conflict of
interest, the EA team should not involve project managers or the team that will conduct the
subsequent evaluation (Davies, 2013). It is best if more than one evaluator conducts the EA to
provide different perspectives and findings (Peersman et al., 2015).
* The term "program" is used broadly to refer to any group of activities intended to achieve specific outcomes; this includes projects, initiatives, interventions, policies, or strategies.
Steps to conducting an evaluability assessment
Step 1: Determine the scope and purpose of the EA
Will the entire program be assessed, or only specific components? (Peersman et al.,
2015; Leviton et al., 2010; USAID, 2017).
What timelines and resources are necessary to carry out the evaluation? (Public Health
Ontario et al., 2018).
Step 2: Identify stakeholders and intended users of the program
The second step is to identify key stakeholders and understand their needs (Public Health
Ontario et al., 2018; USAID, 2017; Trevisan & Huang, 2003). Different stakeholders should be
included in the EA process, including program managers who may implement the EA findings,
and decision makers who usually approve funding and resources.
Once key stakeholders have been identified, you will need to determine their needs and
expectations related to the EA, as well as their roles and level of involvement in the EA process.
It is important to have a good understanding of stakeholder communication preferences (for
example, how would they like to be contacted and when). Involving stakeholders early in the EA
process will lead to better engagement and buy-in (Trevisan & Huang, 2003).
Step 3: Identify and review relevant documents
Relevant documents should be reviewed to understand:
The program’s history, goals, and objectives
How the program fits within the larger organization’s goals
The program’s theory of change or logic model
The program’s capacity to collect and manage data, indicators and outcomes (both
short-term and long-term)
The target population
Program resources (USAID, 2017; Kaufman-Levy & Poulin, 2003; Leviton et al., 2010).
You may find this information in a variety of documents, including:
Performance reports
Policy briefs
Work plans
Terms of reference
Proposals
Monitoring and evaluation plans
Previous evaluation reports (USAID, 2017).
Step 4: Conduct stakeholder interviews and observations
Interviews with stakeholders provide additional insight into how the program works (Trevisan &
Huang, 2003). Stakeholders can have different intentions, expectations, and assumptions about
the program depending on their role and level of involvement (Public Health Ontario et al.,
2018).
The EA team should also conduct site visits to see how the program operates, and compare this
to what they learned from the interviews (Kaufman-Levy & Poulin, 2003).
Step 5: Assess evaluability
Assessing evaluability involves answering important questions about:
A program's design
The overall feasibility of conducting an evaluation
Whether and how an evaluation would be useful to program managers and
other stakeholders (Peersman et al., 2015; Trevisan & Huang, 2003; USAID, 2017;
Davies, 2013; Leviton et al., 2010; Kaufman-Levy & Poulin, 2003).
The rigor and number of questions to consider when conducting an EA can vary. The
Department for International Development (DFID) developed an EA checklist based on a
synthesis of the literature. This checklist is outlined in Table 1 and can be adapted according to
your specific needs and contexts. The rationale for each question can be found in the DFID report
(Davies, 2013).
Table 1. Questions to Consider When Conducting an Evaluability Assessment*

Design
  Objectives
    - Are the objectives of the program clearly stated, realistic, and achievable?
  Theory of change
    - Does the program have a clear theory of change? (Is there a clear causal linkage from inputs to outcomes?)
    - Can the objectives be achieved within the expected timeframe given the planned activities?
    - Are assumptions about enablers and constrainers explicit? Is it feasible to assess these?
  Monitoring and evaluation
    - Are the indicators specific, measurable, achievable, relevant, and time-bound (SMART)?
    - Does the program’s in-house monitoring and evaluation system have the capacity to produce relevant and good quality data?

Feasibility
  Documents
    - Are all relevant documents available and accessible?
    - Are there any previous evaluation reports?
  Data
    - Does baseline data exist? If not, how feasible is it to collect the data?
    - Are there existing data sources to measure outcomes?
    - Are critical data available?
    - Has data been collected for all indicators with sufficient frequency?
    - Are there significant missing data?
    - Are the measures reliable?
    - Does the program have the capacity to collect data for an evaluation?
  Resources
    - Are there sufficient resources (time, human, funding) for an evaluation?

Utility
  Purpose
    - What is the main purpose of the evaluation?
    - Has it been discussed and agreed upon by stakeholders?
  Demand and stakeholder buy-in
    - Who is requesting the evaluation?
    - Are they willing to be part of the evaluation process?
    - Are they supportive of the evaluation?
  Timing
    - Will the evaluation inform decisions to improve the program in a timely manner?
    - Has the program been implemented long enough to show results or outcomes?
  Ethical issues
    - What ethical issues exist for participants and stakeholders?
    - What constraints do they impose for the evaluation?
    - Are ethical guidelines in place?

*Adapted from Davies, 2013.
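Teams that track EA findings electronically sometimes record the checklist as structured data so gaps can be tallied later. The sketch below encodes a few of the questions from Table 1; the grouping and recording format are our own illustration, not part of the DFID checklist itself.

# A subset of Table 1 questions, grouped by dimension (illustrative only).
ea_checklist = {
    "design": [
        "Are the objectives of the program clearly stated, realistic, and achievable?",
        "Does the program have a clear theory of change?",
    ],
    "feasibility": [
        "Does baseline data exist?",
        "Are there sufficient resources (time, human, funding) for an evaluation?",
    ],
    "utility": [
        "Has the purpose of the evaluation been agreed upon by stakeholders?",
        "Has the program been implemented long enough to show results or outcomes?",
    ],
}

# Findings can then be recorded per question, e.g.:
ea_answers = {
    "Are the objectives of the program clearly stated, realistic, and achievable?": True,
    "Does baseline data exist?": False,   # a gap
}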
Step 6: Draw conclusions and recommendations
After an EA is conducted, you will need to communicate the findings and recommendations to
relevant stakeholders. These will usually fall under three categories:
1) No major gaps were identified: The EA finds that the program meets the majority or
all of the items in Table 1. In these cases, it is usually recommended to proceed with the
evaluation (Peersman et al., 2015).
2) Some gaps were identified: It is usually recommended that the necessary changes
are made before proceeding with the evaluation. For example, if there is not enough time
and resources to answer all of the evaluation questions, stakeholders should reprioritize
the scope of the evaluation (Peersman et al., 2015).
3) Major gaps were identified: For example, the program does not have the capacity to
provide data for an evaluation, there is a lack of required resources and buy-in from
stakeholders, or there are major ethical issues and risk to participants. If these issues
cannot be addressed easily or in a timely manner, it is usually recommended that the
program postpone the evaluation (Peersman et al., 2015; Leviton et al., 2010; Kaufman-
Levy & Poulin, 2003).
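Continuing the illustrative sketch that follows Table 1, the three categories can be expressed as a simple triage rule. The threshold below is an assumption made purely for illustration; in practice the judgment is qualitative and made together with stakeholders.

from typing import Dict

def ea_recommendation(answers: Dict[str, bool]) -> str:
    # answers maps each checklist question to True (criterion met) or False (gap),
    # as in the recording sketch following Table 1.
    gaps = sum(1 for met in answers.values() if not met)
    if gaps == 0:
        return "No major gaps identified: proceed with the evaluation."
    if gaps <= len(answers) // 3:  # illustrative threshold for 'some gaps'
        return "Some gaps identified: address them before proceeding."
    return "Major gaps identified: consider postponing the evaluation."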
References

Center for Community Health and Development. (2018). Chapter 36, Section 5: Developing an evaluation plan. Lawrence, KS: University of Kansas. Retrieved from the Community Tool Box: http://ctb.ku.edu/en/tablecontents/sub_section_main_1352.aspx

Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. Retrieved from http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm

Davies, R. (2013). Planning evaluability assessments: A synthesis of the literature (Working Paper 40). Cambridge, UK: Department for International Development. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/248656/wp40-planning-eval-assessments.pdf

Kaufman-Levy, D., & Poulin, M. (2003). Evaluability assessment: Examining the readiness of a program for evaluation. Justice Research and Statistics Association, Juvenile Justice Evaluation Center. Retrieved from http://www.jrsa.org/pubs/juv-justice/evaluability-assessment.pdf

Lavinghouze, S. R., & Snyder, K. (2013). Developing your evaluation plans: A critical component of public health program infrastructure. American Journal of Health Education, 44(4), 237-243.

Leviton, L. C., Khan, L. K., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31(1), 213-233.

Ontario Agency for Health Protection and Promotion (Public Health Ontario), Meserve, A., & Mensah, G. (2018). Focus on: Evaluability assessment - a step model. Toronto, ON: Queen’s Printer for Ontario. Retrieved from https://www.publichealthontario.ca/-/media/documents/focus-on-evaluability-assessment.pdf?la=en

Organisation for Economic Co-operation and Development, Development Assistance Committee (OECD-DAC). (2009). Guidelines for project and programme evaluations. Retrieved from http://www.oecd.org/development/evaluation/dcdndep/47069197.pdf

Peersman, G., Guijt, I., & Pasanen, T. (2015). Evaluability assessment for impact evaluation: Guidance, checklists and decision supports (A Methods Lab publication). London, UK: Overseas Development Institute. Retrieved from https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/9802.pdf

Posavac, E. J. (2016). Program evaluation: Methods and case studies (8th ed.). New York, NY: Routledge.

Taylor-Powell, E. (2002). Sources of evaluation information (Quick Tips #11). Madison, WI: University of Wisconsin-Extension, Program Development and Evaluation. Retrieved from https://fyi.extension.wisc.edu/programdevelopment/files/2016/04/Tipsheet11.pdf

Trevisan, M. S., & Huang, Y. M. (2003). Evaluability assessment: A primer. Practical Assessment, Research & Evaluation, 8(20).

United States Agency for International Development (USAID). (2017). Evaluation resources: Conducting an evaluability assessment for USAID evaluations. Retrieved from https://usaidlearninglab.org/library/conducting-evaluability-assessment-usaid-evaluations

W.K. Kellogg Foundation. (2004). Logic model development guide. Retrieved from https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide