Program Evaluation
Key Terms and Concepts
GAO-21-404SP
March 2021
Both the execuve branch and congressional commiees need evaluave informaon to help them
make decisions about the programs they oversee–informaon that tells them whether and why a
program is working well or not. The Government Performance and Results Act of 1993 (GPRA) and
GPRA Modernizaon Act of 2010 (GPRAMA) established a framework for performance management
and accountability within the federal government. Building on that foundaon, Congress has since
passed, among other laws, the Foundaons for Evidence-Based Policymaking Act of 2018 (Evidence
Act) to strengthen the evidence-building eorts of execuve branch agencies. The Evidence Act,
for example, created a framework for a more comprehensive and integrated approach to federal
evidence-building eorts.
This product updates our previous glossary (GAO-11-646SP) to highlight different types of evaluations for answering questions about program performance, as well as relevant issues to ensure study quality. As agencies identify the key questions they will address in their Evidence-Building Plans (Learning Agendas) and Annual Evaluation Plans, they may consult guidance provided by the Office of Management and Budget (OMB). This glossary can help agency officials better understand fundamental concepts related to evaluation and enhance their evidence-building capacity.
To develop this glossary, we examined relevant information from executive and legislative branch agencies and consulted with knowledgeable stakeholders and GAO internal experts. We conducted a systematic review of terminology from relevant documents including GAO reports, relevant statutes, OMB guidance, and publications from the American Evaluation Association. We also reviewed terminology with authors of established evaluation literature.
Major contributors were Terell P. Lasane, David Blanding, Valerie J. Caracelli, Eleanor Thompson, Benjamin T. Licht, Jehan Chase, Pille Anvelt, and Dani Greene. Please address any questions to Terell P. Lasane, Assistant Director for the Center for Evaluation Methods and Issues (CEMI) in the Applied Research and Methods Team (ARM) at (202) 512-5456 or [email protected].
Lawrance L. Evans, Jr.
Managing Director, Applied Research and Methods
Program Evaluaon: Key Terms and Concepts
1
2
Relevant statutes and guidance issued since 2009 encourage federal agencies to use multiple sources of evidence in program management

October 2009 (OMB guidance): Encourages agencies to conduct rigorous program evaluations, build evidence of effective approaches, and assess the adequacy of evidence supporting budgetary priorities (Memorandum M-10-01)

January 2011 (statute): GPRA Modernization Act of 2010. Expands and enhances the federal government's framework for generating performance information established by the Government Performance and Results Act (GPRA) (Public Law 111-352)

August 2012 (OMB guidance): Directs agencies to conduct annual strategic reviews, assessing portfolios of evidence to support various decision-making processes (Circular No. A-11)

July 2013 (OMB guidance): Encourages agencies to strengthen programs using evidence and innovation strategies (Memorandum M-13-17)

March 2016 (statute): Evidence-Based Policymaking Commission Act of 2016. Established the Commission on Evidence-Based Policymaking to study and make recommendations for strengthening the federal government's evidence-building and policymaking efforts (Public Law 114-140)

July 2016 (statute): Foreign Aid Transparency and Accountability Act of 2016 (FATAA). Requires agencies administering foreign assistance to follow certain monitoring, evaluation, and reporting requirements (Public Law 114-191)

December 2016 (statute): Program Management Improvement Accountability Act (PMIAA). Aims to improve program and project management practices within the federal government, among other things (Public Law 114-264)

September 2017 (Commission on Evidence-Based Policymaking): Issued its final report and recommendations for improving the federal government's evidence-building activities and capabilities (The Promise of Evidence-Based Policy Making)[a]

January 2018 (OMB guidance): Establishes guidelines for monitoring and evaluating foreign assistance per FATAA (Memorandum M-18-04)

June 2018 (OMB guidance): Outlines three key strategies, as part of a 5-year strategic plan for implementing the PMIAA, which focus on clarifying roles and responsibilities, identifying principles-based standards, holding managers accountable for results, and building a capable program management workforce (Memorandum M-18-19)

January 2019 (statute): Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act). Requires agencies to enhance evidence-building capacities, make data more accessible, and strengthen privacy protections (Public Law 115-435)

June/July 2019 (OMB guidance): Establishes expectations for how and when agencies are to implement Evidence Act requirements (Circular No. A-11 and Memorandum M-19-23)

March 2020 (OMB guidance): Identifies federal program evaluation standards and practices as part of Evidence Act implementation (Memorandum M-20-12)

January 2021 (executive branch action): Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking. Reaffirms and builds on prior memoranda that require agencies to incorporate scientific integrity principles in data governance and evaluation approaches (Presidential Memorandum of Jan. 27, 2021)

Source: GAO analysis of select laws and executive branch materials.

[a] In accordance with the Evidence-Based Policymaking Commission Act of 2016, the Commission was comprised of academics and experts appointed by the President (including an OMB representative) and congressional leadership.
Some sources of evidence used to support decision-making, program improvement, and continuous learning: program evaluation, statistical analysis, administrative records, policy analysis, and performance measurement.
Different sources of evidence hold distinct value. For example, program
evaluation and performance measurement are key tools for federal program
management but differ in the following ways:
Program evaluation and performance measurement are distinct but complementary:

What drives it: Program evaluation is driven by a theory of program change; performance measurement is driven by agency goals.
What frequency: Program evaluation is discrete; performance measurement is ongoing.
What data it uses: Program evaluation uses quantitative or qualitative data; performance measurement typically uses quantitative data.
Agencies should consider different sources of evidence
Administrave records - A source of evidence consisng of qualitave or quantave data collected or produced as
part of a program’s operaon.
Policy analysis - A source of evidence consisng of a systemac process of idenfying and comparing potenal opons for
addressing a policy problem based on certain criteria, and choosing the opon that best meets the criteria.
Program evaluaon - An assessment using systemac data collecon and analysis of one or more programs, policies, and
organizaons intended to assess their eecveness and eciency.
Performance measurement - The ongoing monitoring and reporng of a program’s accomplishments and progress,
parcularly towards its pre-established goals.
Stascal analysis - A form of evidence that uses quantave measurements, calculaons, models, classicaons, and/
or probability sampling methods to describe, esmate, or predict one or more condions, outcomes, or variables, or the
relaonships between them.
What it can tell: Program evaluation can tell whether a program is working and why; performance measurement can tell how well a program is performing.
Program evaluation is key to program learning, program
improvement, and statutory compliance
Economy - The extent to which a program or intervention is operating at minimal cost, as determined by a program evaluation.

Effectiveness - The extent to which a program or intervention is achieving its intended goals, as determined by a program evaluation.

Efficiency - The ratio of monetary and/or nonmonetary program inputs (such as costs or hours worked by employees) to outputs (amount of products or services delivered) or outcomes (the desired results of a program).

Equity - The consistent, systematic, fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment.
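Efficiency, as defined above, is a ratio of inputs to outputs. A minimal sketch with hypothetical site names and figures:

```python
# Hypothetical inputs (cost) and outputs (clients served) for two
# program sites; names and figures are illustrative only.
sites = {
    "Site A": {"cost": 500_000, "clients_served": 2_000},
    "Site B": {"cost": 300_000, "clients_served": 1_000},
}

# Efficiency expressed here as cost (input) per client served (output);
# a lower ratio indicates greater efficiency on this single dimension.
for name, s in sites.items():
    ratio = s["cost"] / s["clients_served"]
    print(f"{name}: ${ratio:.2f} per client served")
```

A single ratio like this captures only one dimension of efficiency; an evaluation would typically weigh several input and output measures together.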
Some reasons to conduct or use program evaluation
Answer quesons about
the extent to which
a program, process,
or acvity is being
implemented as intended
Build a culture of
connuous learning
to foster program
improvement
Test a theory of
program change
Inform resource
allocaon
Idenfy a program’s
outcome(s) or impact(s)
Determine the economy,
eecveness, eciency,
and equity of program
operaons
Strengthen program
management
Ensure
accountability
How agencies can maximize the value of program evaluation

1. Create conditions for quality evaluations:
- Conduct data reviews
- Perform meta-evaluations
- Develop an evidence-building plan (learning agenda)
- Create an evaluation plan
- Review performance measures
- Engage stakeholders
- Establish a theory of program change (e.g., logic model)
- Conduct an evaluability assessment
- Conduct a capacity assessment
Data review - A systematic process for exploring whether data may be used for a program evaluation by assessing the data's quality (e.g., accuracy, reliability, validity) as well as related limitations, efforts to address limitations, and procedures for safeguarding the data against misuse and breaches of security.

Meta-evaluation - A systematic assessment of the quality of one or more program evaluations using criteria such as transparency, independence, objectivity, ethics, relevance, utility, and rigor.

Logic model - A diagram that documents a program's theory of change, including expected inputs, activities, outputs, and outcomes.

Stakeholder - Any person, group, or organization interested in or knowledgeable about a program that is being evaluated and may affect or be affected by the results of an evaluation.

Evaluability assessment - A pre-evaluation examination of the extent to which a program can be evaluated in a reliable and credible fashion or to which an evaluation is worthwhile based on the evaluation's likely benefits, costs, and outcomes.

Evidence-building plan - A systematic plan (also known as a learning agenda) for identifying and addressing policy questions relevant to the programs, policies, and regulations of the agency. The plan–a component of the agency's strategic plan and developed in consultation with relevant stakeholders–is to include, among other things, the data, methods, and analytic approaches that the agency may use to develop evidence and any challenges faced in obtaining evidence to support policymaking.

Evaluation plan - An annual agency-wide plan that is to describe, among other things, (1) the key questions for each significant evaluation the agency intends to begin in the next fiscal year and (2) the key information collections or acquisitions the agency plans to begin during the year covered by the plan.

Capacity assessment - An assessment agencies are to include in their strategic plans of the coverage, quality, methods, effectiveness, and independence of the statistics, evaluation, research, and analysis efforts of the agency.
2. Select the appropriate techniques, methods, and tools

Purposes of evaluation and associated techniques, methods, and tools:
Formative - Process evaluation
Summative - Outcome evaluation, impact evaluation, cost-benefit analysis, cost-effectiveness analysis
Formave - An evaluaon that is conducted when researchers want to examine the extent to which a program is being
implemented as intended, producing expected outputs, or could be improved.
Needs assessment - An evaluaon, oen used for formave purposes, designed to understand the resources
required for a program to achieve its goals.
Process evaluaon - Oen used for formave purposes, an evaluaon that assesses the extent to which essenal
program elements are in place and conform to statutory and regulatory requirements, program design, professional
standards, or customer expectaons.
Summave - An evaluaon that is conducted when researchers want to determine the extent to which a program has
achieved certain goals, outcomes, or impacts.
Cost-benet analysis - A method of idenfying and comparing relevant quantave and qualitave costs and
benets associated with a program or acvity, usually expressed in monetary terms.
Cost-eecveness analysis - A method of idenfying the cost of achieving a single goal, nonmonetary outcome, or
objecve, which can be used to idenfy the least costly alternaves for meeng that goal.
Impact evaluaon - Oen used for summave purposes, a type of evaluaon that focuses on assessing the impact of
a program or aspect of a program on outcomes by esmang what would have happened in the absence of the
program or aspect of the program.
Outcome evaluaon - Oen used for summave purposes, a type of evaluaon that assesses the extent to
which a program has achieved certain objecves, and how the program achieved these objecves.
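The cost-effectiveness analysis defined above can be illustrated with a small worked example. The alternatives, costs, and outcome counts below are hypothetical; the technique is simply a cost-per-unit-of-outcome comparison across options that pursue the same nonmonetary goal:

```python
# Hypothetical alternatives for achieving the same nonmonetary outcome
# (additional high-school graduates); all figures are illustrative.
alternatives = {
    "Tutoring program": {"cost": 900_000, "extra_graduates": 300},
    "Mentoring program": {"cost": 600_000, "extra_graduates": 150},
    "Summer program": {"cost": 450_000, "extra_graduates": 180},
}

# Cost-effectiveness ratio: cost per unit of outcome achieved.
ratios = {
    name: a["cost"] / a["extra_graduates"] for name, a in alternatives.items()
}

# The least costly alternative for meeting the goal has the lowest ratio.
least_costly = min(ratios, key=ratios.get)

for name, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ratio:,.0f} per additional graduate")
print(f"Least costly alternative: {least_costly}")
```

Unlike cost-benefit analysis, nothing here converts the outcome itself into dollars; the comparison only ranks options against a single shared goal.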
3. Build a culture of continuous learning

Evaluate the quality of evidence periodically through data reviews, capacity assessments, meta-evaluations, and reviews of performance measures.
Transparency - This is achieved when all phases of the evaluation are available for review and critique by interested parties.

Ethics - This is achieved when the evaluation safeguards such things as the dignity, rights, safety, and privacy of evaluation participants and stakeholders.

Independence - This is achieved when the conduct and use of an evaluation are free from the undue control, influence, and bias of stakeholders.

Relevance - This is achieved when the evaluation addresses the most critical questions identified by key stakeholders and can be leveraged for decision-making, program improvement, and program learning.

Rigor - This is achieved when the data collection, analytical methods, and interpretations employed are valid, reliable, and appropriate given the research question(s).
Commit to certain quality principles: transparency, ethics, rigor, independence, relevance, objectivity, and utility.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO's mission
The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability.
Obtaining copies of GAO reports and testimony
The fastest and easiest way to obtain copies of GAO documents at no cost is through our website. Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. You can also subscribe to GAO's email updates to receive notification of newly posted products.
Order by phone
The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO's website, https://www.gao.gov/ordering.htm. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information.
Connect with GAO
Connect with GAO on Facebook, Flickr, Twitter, and YouTube. Subscribe to our RSS Feeds or Email Updates. Listen to our Podcasts. Visit GAO on the web at www.gao.gov.
To report fraud, waste, and abuse in federal programs
Contact FraudNet website: https://www.gao.gov/about/what-gao-does/fraudnet. Automated answering system: (800) 424-5454 or (202) 512-7700.
Congressional Relations
Orice Williams Brown, Managing Director, WilliamsO@gao.gov, (202) 512-4400, U.S. Government Accountability Office, 441 G Street NW, Room 7125, Washington, DC 20548
Public Affairs
Chuck Young, Managing Director, youngc1@gao.gov, (202) 512-4800, U.S. Government Accountability Office, 441 G Street NW, Room 7149, Washington, DC 20548
Strategic Planning and External Liaison
Stephen J. Sanford, Acting Managing Director, spel@gao.gov, (202) 512-4707, U.S. Government Accountability Office, 441 G Street NW, Room 7814, Washington, DC 20548