NonProfit Performance Measurement
by Pam Ashlund

I am fascinated by how quickly legislators determine our agenda. The latest example? Evaluations and Performance Objectives.

Anyone remember how the Performance Objective was heralded into the non-profit world? "This will make you more effective," they told us. "This will validate your work." Blah blah blah. I, for one, drank the Kool-Aid.

Then came the truth, somewhere between the ideal and the reality: the stats didn't change anything. Now a huge amount of staff time (not to mention trees sacrificed) is devoted to counting numbers that go nowhere. Consider the following three problems:
  • There was no funding for a decent evaluation;
  • Evaluations take good research design;
  • Nonprofit administrators learned to game the system by steadily lowering their goals so that they could be met (after all, you WILL get penalized if you don't meet those goals).

If that wasn't enough to drive a stake into the heart of performance objectives... how about asking some hard questions:

Example goal: Reduce poverty in Lincoln Heights

  • How much paperwork will it take to state and measure your goals?
  • How will the results you gather further your goal?
  • How will you pay for the evaluation?
  • How will you determine your research is valid (sample size, bias, etc.)?
  • Are your goals really goals, or are you just counting heads?



Comments

I heartily agree, but am concerned you have left out one key area:

Even if an organization had the ability to design and manage an appropriate and accurate evaluation consisting of a statistically meaningful sample, what are the realistic chances the report will be read by anyone who would actually understand the data?

What is the likelihood right now that any conclusion backed by "numbers" (no matter how meaningless accepted statistical norms would find them) will be accepted as valid simply because it is quantifiable? What is the chance that anyone reading this very same report will have the sophistication to question the validity of what is measured?

So if the bullshit measured in the Lincoln Heights example is defined (by someone with no idea what might actually eliminate, or even ameliorate, poverty in Lincoln Heights) as completion of a set of classes or training, and 7 of the 10 people enrolled complete the training, a legitimately expressed success rate of 70% will be used to support utterly bogus conclusions. Is a discrete set of classes provided to 10 people likely to have a measurable effect on the overall poverty of the region? Not logically. To support that claim (that the classes were transformative), it would take much more complex data, covering an extensive period of time and measuring a significantly larger number of people and variables than just the 10 enrollees and the 7 individuals who completed the class.

Do program monitors, auditors, or (heaven forbid) the politicians they are answerable to generally have high-level statistics skills? Not bloody likely. Even accountants auditing program files and related outcome documentation, though used to working with numbers, are unlikely to have ever taken a class in social science research methodology, which, after all, is what these evaluations rest on. I must stop here to snurf up my sleeve and roll my eyes.
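
To put the sample-size point in concrete terms: a quick back-of-envelope check shows how little a 7-of-10 result can support. The Python sketch below is purely illustrative (the wilson_ci helper and the n=100 comparison are illustrative assumptions, not from the comment); it computes a Wilson score 95% confidence interval for the Lincoln Heights completion rate:

    import math

    def wilson_ci(successes, n, z=1.96):
        """Wilson score interval for a binomial proportion (z=1.96 ~ 95%)."""
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
        return center - margin, center + margin

    # The Lincoln Heights example: 7 of 10 enrollees completed the training.
    low, high = wilson_ci(7, 10)
    print(f"n=10:  {low:.0%} to {high:.0%}")    # about 40% to 89%

    # The same 70% rate from a hypothetical 100 enrollees is far tighter.
    low, high = wilson_ci(70, 100)
    print(f"n=100: {low:.0%} to {high:.0%}")    # about 60% to 78%

With only 10 participants, the true completion rate could plausibly be anywhere from roughly 40% to 89%, so the headline 70% is nearly uninformative on its own; and even a precise completion rate would say nothing about poverty in the region.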
Lovekandinsky said…
There's a really excellent article on evaluation at:

http://tinyurl.com/yy4gc6

It gets at a lot of your points here. Worst of all, most of the time your results don't really matter to foundations anyway.
