Performance Practice Learning Module

Introduction: Programs and Strategies

How well-designed and well-implemented are your programs?

Clear definitions of your target audience/population and activities/services are important elements of well-designed programs and strategies. It’s not enough to design a great program; high-quality implementation is equally important.

This learning module is one of seven Performance Practice modules that help you and your team reflect on how your organization’s behaviors and practices align with the principles of high performance. Review the questions below—and share them with your colleagues—for an introduction to the principles and practices of well-designed and well-implemented programs and strategies.

What can you learn about your programs and strategies by discussing these topics?

Are your organization’s leaders and managers clear on the target population or audience you serve and passionate about serving them?

  • Has your organization defined your target population (clients at the core of your mission with whom you work to achieve measurable outcomes) and/or your target audience (groups you need to influence if you are to create your intended knowledge, attitude, behavior, or policy change)? Has this definition been communicated to and accepted by all staff?
  • Does your organization collect data aligned with the criteria for your target population or audience and use those data as the basis for determining whom you will serve or seek to influence?
  • Does your organization hire selectively for those who have a deep-rooted understanding of and connection with the people and causes you serve? Do you cultivate this understanding and connection through ongoing staff-development experiences?

Do your organization’s leaders and managers base the design of your programs and strategies on a sound analysis of the issues, insights from intended beneficiaries, and evidence-informed assumptions about how the organization’s activities can lead to the desired change (often referred to as a “theory of change”)?

  • Has your organization assembled and regularly reviewed the best available evidence as part of developing key programs and strategies? (For service organizations, the continuum of evidence usually consists of the following, from weakest to strongest: 1) knowledge that credible practitioners have accumulated over time; 2) knowledge that has been developed by social researchers studying a similar target population; 3) research borrowed from other, similar programs that have benefited from a rigorous impact evaluation; and 4) research on the organization’s own program(s) validated through rigorous impact evaluations.)
  • Does your organization actively seek feedback from members of your target population or target audience—those closest to the problems you’re addressing—and use this information to help you design and improve your programs and strategies?
  • Does your organization have a theory of change that includes a target population/audience, a detailed service/program model, and outcomes with indicators?
  • Is your organization’s theory of change:
    • plausible (makes sense to the informed reviewer)?
    • doable (can be executed with available resources)?
    • measurable (key elements can be monitored using qualitative and quantitative data)?
    • testable (program model or advocacy strategies are codified in ways that allow for internal monitoring and external evaluation)?
    • socially significant (success would have high value for your target population or cause)?
  • Has your organization integrated your theory of change into your operational DNA? (Everyone understands it, can articulate it, and knows how to contribute to its execution.)

Do your leaders and managers design programs with careful attention to the larger ecosystem in which your organization operates, including racial, cultural, geographic, historical, and political dynamics?

  • Does your organization invest time and other resources to study the complex local dynamics that affect your ability to achieve measurable outcomes for your target population or influence your target audiences? (Depending on the type of organization, this could include identifying key influencers/power centers in a community, studying the historical roots underlying present-day attitudes, or mapping relevant programs or efforts engaging the same population or audience.)
  • Does your organization intentionally and routinely work to build strong relationships with other organizations and influencers in your community whose actions and decisions affect your target population or audience?
  • Does your organization staff its programs with people who, based on their professional and life experiences, are skilled in navigating local dynamics and building relationships with relevant partners?

Do your leaders and managers implement programs in a consistently high-quality manner? Is collecting and using data part and parcel of implementing high-quality programs?

  • Do your organization’s program teams implement your services based on codified program models that address:
    • theoretical principles?
    • intended outputs and outcomes?
    • phasing, dosage, and duration of activities?
    • professional requirements for staff?
  • Does your organization hold an individual or team accountable for monitoring whether your programs are implemented with fidelity (in accordance with specified program design)?

Do your leaders and managers do a good job of recruiting, retaining, motivating, listening to, and learning from participants and intended beneficiaries?

  • Is your organization relentless about recruiting and enrolling members of your target population, helping them stay engaged until they achieve the intended outcomes, and learning why some drop out despite your best efforts to retain them?

In the case of direct-service organizations, do your leaders and managers invest in building strong relationships between staff and participants? These relationships may be the single biggest determinant of whether participants will stay engaged in programming and thereby achieve the desired results.

  • Does your organization systematically use data on staff-participant relationships to inform staff recruitment, training, coaching, and development—as well as drive program improvement?

Do your leaders and managers guard against the temptation to veer off course in search of numbers that look good in marketing materials or reports to funders?

  • Has your organization put checks and balances in place to ensure that the organization does not engage in corner-cutting measures (e.g., cherry-picking participants, biasing data) in pursuit of rosy results?
  • Has your organization put checks and balances in place to protect against “mission creep”—chasing funding opportunities by tacking on new programs that stretch beyond your core purpose?
