There’s Simply Too Much Bathwater

Samuel Gerstin
7 min read · Mar 6, 2020

While the title sinks in, I’ll be less cryptic and state that I am referring to the ubiquitous founding document of any programmatic MEL (monitoring, evaluation, and learning) system: the MEL Plan.

We all recognize the MEL Plan’s function in guiding the program team and partners to plan for and expend resources gathering knowledge and data that assess the influence and outcomes of their delivery model.

Generally the MEL Plan comprises several sections:

  • Purpose of the Plan, including timeline for update and revision
  • The program’s causal pathway (i.e., results/outcomes chain and theory of change, including underlying assumptions), and how this pathway aligns with organizational as well as donor objectives
  • Means to validate the causal pathway: 1) through indicators (quantitative and qualitative data, including processes, tools and reporting schedule); 2) through other forms of knowledge and data (e.g., evaluative activities)
  • Roles and responsibilities for executing all components of the Plan
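For readers who like to see structure made concrete, the component sections above could be modeled as a minimal data structure. This is purely an illustrative sketch — the class and field names are my own invention, not any standard MEL schema or template:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    definition: str          # the agreed-upon meaning of the metric
    data_type: str           # "quantitative" or "qualitative"
    reporting_schedule: str  # e.g. "quarterly"
    owner: str               # role responsible for collection ("" = unassigned)

@dataclass
class MELPlan:
    purpose: str
    revision_timeline: str               # when the Plan is to be revisited
    causal_pathway: list[str]            # results/outcomes chain steps
    assumptions: list[str]               # underlying assumptions
    indicators: list[Indicator] = field(default_factory=list)
    evaluative_activities: list[str] = field(default_factory=list)

    def unowned_indicators(self) -> list[str]:
        """Flag indicators with no responsible role assigned --
        a gap in the 'roles and responsibilities' section."""
        return [i.name for i in self.indicators if not i.owner]
```

Even a toy model like this makes one of this post’s later points tangible: a check such as `unowned_indicators()` is only meaningful if someone on the program team actually owns, runs, and acts on it.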

While MEL Plans are useful documents, particularly as reference material during times of staff transition, they can have unintended and ultimately undesirable consequences.

This post will reflect on two common and interwoven realities behind many MEL Plans:

  1. Lack of intentional validation by the program (implementing) team
  2. Failure to reflect what is meaningful to the program team

1. Lack of Validation

Most programs intend to “revise the MEL Plan at start-up,” acknowledging a likely reprioritization of activities and stakeholders relative to the draft Plan submitted at proposal. Yet this revision often never occurs.

Remember: the majority of program staff, and certainly MEL staff, were likely brought aboard during start-up, and thus have no familiarity with the development of the MEL Plan. And what awaits them upon arrival? A signed and sealed Plan, passed on by headquarters, full of requisites and deliverables. The results/outcomes chain, the proposed data, the MEL staffing structure and reporting schedule are laid out. The donor has preliminarily approved the Plan. The budget for hiring a MEL Manager and/or Officers has been obligated (and recruitment should have begun last week, as the baseline is waiting). More significantly, the MEL Plan carries weight, is perhaps even taken as canon. It is understood to be a highly technical document, grounded in existing theory and evidence, generated by the organization’s in-house team of MEL specialists, etc. (all of which is mere presumption; many MEL Plans are drafted under less ‘glamorous’ conditions).*

The point is, regardless of how the MEL Plan came about, once it is handed over, the program team is unlikely to feel much beyond the fact that it is lying in front of them, with expectations articulated. What team would challenge the wisdom of the organization that just hired them?**

Nor do the coming days, months and even years provide room to reflect. Moments which might otherwise afford the time and space for deep validation of the causal pathway are overwhelmed by the need to gather and report on data.

I have visited multiple program teams, across different sectors and with varying capacity and resources, and all express some degree of angst about reporting against the MEL Plan’s data stipulations. This feeling is most acute among MEL staff (or, where there are no MEL staff, whoever is responsible for the MEL Plan), although senior staff may convey the same, perhaps channeling the donor. There is a lot of data. The data frequently must be coordinated with a third party. Not much is provided in the way of tools and templates. There may be more than one reporting system.

Yet, as most everybody agrees, the need for data is paramount. Reporting takes precedence over validation.

But I like to dig deeper. “Does your donor ever ask about your data?” “Did your partners ever receive reporting templates?” “What is the actual definition of such-and-such indicator?”

Or more pragmatically: “Do you know all the indicators in your MEL Plan?” “Do you or your manager look at the data after it has been reported?”

Eventually we arrive at: “Does this MEL Plan help you with your daily work?”

Invariably, to all the above, the answer is: no.

2. Not Meaningful to Program Team

If a program team’s MEL Plan does not add value to their daily work, it is not meaningful for their purposes. If this same team considers itself accountable to the Plan’s stipulations yet lacks the sense of empowerment to revise these, it is a burden.

Claiming MEL Plans are burdensome is not novel. Groups and individuals have for some time acknowledged inefficiencies in foreign assistance reporting (via the MEL cycle):***

We are collecting 150 indicators for (our donor). How would we even begin to analyze 150 indicators? Are they analyzing this data in some way?****

Initial research and commentary focused on naming inefficiencies and their repercussions, while more recent initiatives, coupled with policy (after policy), have sought to drive solutions.

My opinion is that too much attention is paid to whether our sector can find the time and resources to comply with reporting, and not enough to the value of the reporting itself.

There could stand to be less reporting altogether. I sincerely doubt the majority of actors involved in the MEL cycle refer to their reported data (as is) in validating their program, nor do I believe much would be lost — or that many would care — if program teams casually eliminated, say, 90% of reporting metrics from their MEL Plans.*****

Throw out the Bathwater

Program teams should consider a true overhaul of their MEL Plan, one that reviews on an ongoing basis the designated means to validate the causal pathway, and either confirms or rejects their utility.

My feeling is that many elements of common MEL Plans would be laid aside: under- or unutilized reporting requirements that yield little insight; data quality procedures unaccompanied by a corresponding increase in line managers’ resources; end-line evaluative activities that call for significant expenditure yet conclude too long after program close to be of consequence. These elements may be seductive to business development teams (they read as comprehensive, and add little to the initial budget line), but program teams know firsthand the strain on their time and capacity.

Creating a culture of validation and sensemaking takes time, but the chance to reduce burdens and develop a meaningful system outweighs the cost.

Let’s be brave. Let’s build a line item for multi-day MEL Plan revision directly into the proposal budget (and, while we’re at it, why not articulate this expectation directly within the program solicitation?). Let’s recruit a program team eager to own this process.

Let’s challenge the inherent wisdom, much less pragmatism, of retaining a MEL Plan developed multiple years prior by a business development team which likely never thought twice about the program the moment after ‘SEND.’ Let’s acknowledge program staff and their immediate donor counterparts rarely if ever dust off the Plan beyond the reporting period — and only then to refresh their memory of the data points to which they are beholden.

Most importantly, let’s acknowledge that all too often MEL Plans are a net burden to program teams, and do not in a significant way bring value to day-to-day implementation.

So, please, do throw out the bathwater, assuming it has gone cold.

_________________________________________________________________

*It is quite possible the MEL Plan was developed the day before the proposal was due, by a severely strained in-house Business Development Manager, citing theory and evidence copy-pasted from the solicitation language. Or the MEL Plan was indeed developed by an in-house team of MEL specialists, who were roped into the proposal two days before submission and handed an outline, which they spruced up using a MEL Plan template from a previous winning proposal, before returning to the Business Development Manager for rough estimates of partners and stakeholders to factor in as targets. Or, perhaps, the MEL Plan was a collaboration of MEL staff and the business development team from the onset, incorporating a daylong Theory of Change workshop and primary research on paradigms of ‘outcomes’ and ‘results’ within the prevailing sector and environment. Or some combination of it all.

**I sense less resistance by program teams with regard to work planning. Unlike the MEL Plan, Work Plans are grounded in implementation and thus more reflective of a team’s intimate knowledge, not to mention daily logistics. Besides, Work Plans are the purview of the Chief of Party and Program Managers, who, unlike the team’s MEL staff, are more likely: 1) expats; 2) current or former members of the implementing organization; 3) engaged at the earlier stages of start-up, when plans are somewhat less concrete.

***Foreign assistance planning reached a milestone with the consolidation, in 2006, of the State Department and USAID’s individualized measurement and reporting functions under a single Office of Foreign Assistance Resources (F). There was and continues to be contention over whether this move has streamlined or burdened the overall process.

****Anonymous statement from an implementing partner (Lugar Center, MFAN; 2017).

*****Standardized data commissioned by Congress, filtered down as federal agency requisites, and called for across programming solicitations and drafted MEL Plans, are too proximate to daily implementation. These indicators count the number of activities held, partners and stakeholders involved, products created and delivered, and resources harnessed. Other indicators are too distant, asking for a one-time snapshot of the program’s lasting influence over stakeholders. Combined, the data points reflect what teams are doing, but do not shed light on just how they are (or are not) doing it: the main dynamics and drivers behind implementation, factors which, if revealed and assessed, bring true utility to a program team. These dynamics and drivers can be thought of as the ‘missing middle’ of a truly meaningful MEL Plan: ongoing assessment of how exactly the program team, in this setting, under this context, leveraging its relationships and experience, utilizing specific levers and overcoming specific barriers, will uniquely deliver the program.
