Posts Tagged ‘effectiveness’


Results reporting: Momentum for Non-profits to Demonstrate Effectiveness

Over the last few years, watchdog agencies and donors have increasingly created demand and interest in results reporting for non-profits.  In January 2013, Charity Navigator released its final rating approach for results.  The focus is on a charity’s design, monitoring, and evaluation, with the idea that charities which design and evaluate programs transparently will be more accountable and more effective.  CN’s new rating approach is not an A vs. B rating, comparing one charity to another, but rather measures each charity against a yardstick, assessing how it stacks up to CN’s expectations.  This has the potential to push charities toward more effective methods, and also to draw out evidence which, until now, was not provided publicly.

Recently World Vision released its own Impact Reporting 1.0 – a first version of webpages which publish impact in terms of outputs, outcomes, and its unique approach to programming.  It’s a step toward giving the public visibility not only into what the organization recently did, but also into how it has evolved over 60+ years of ministry.  These webpages are not perfect, but they are a large step in a new direction, and one I hope empowers greater effectiveness in the future.


Be Creative

It is very easy as an accountant to lose sight of the big picture.  For those of us who work in the U.S. or in another country away from the primary ministry as part of a large organization, we generally work in cubicles, take few if any trips to “the field,” and have little interaction with the ministry of the organization other than anecdotal stories and expense invoices.  For those who do work in ministry countries, you may also be living in the capital city, working in a cube, holding a staring contest with a computer monitor.

In our world of Audit Risk Alerts and GAAP compliance, we can easily begin to think that following accounting guidance and producing sleek financial statements is the end goal.  We lose sight of the forest for the trees.  We push compliance to the detriment of the efficiency of the organization and the sanity of our coworkers.  Compliance is important.  GAAP must be followed.  I agree with both of those statements, and work hard at my organization to ensure that both stay true.  But as accountants, and back-office staff generally, we must work hard to remember why we are doing this, because our visibility is limited.

Our ministry staff are working hard, overwhelmed with the burdens that I’m sure they face on a daily basis.  Our organization works internationally, with the poorest of the poor.  Our staff work in areas like South Sudan and the DRC.  They are working with child soldiers, starving families, AIDS orphans, and victims of water-borne illnesses.  In the midst of that they must also file expense reports, donor reports, grant reports, etc.  The question I ask is, how do we make this easier for them?  What can we do to help them have more time to concentrate on the program work they are doing?

I think accountants have the opportunity to be the most creative people in the organization (and in a good way, not the go-to-jail-for-fraud way), and this is why: being creative in an open space is easy.  It’s easy to “think outside the box” when there is no box to begin with.  Therefore, people in the marketing, art, and design departments of an organization often get the credit for creativity.  In accounting and auditing, we operate inside a very constrained box.  To develop a truly creative, innovative idea that still fits within that box, we must be creative at a higher level.  That higher level of creativity is what gives accountants the opportunity to be the most creative employees.

By being creative in how we work, we can think about the end goal: the ministry and programs of the organization.  We can redesign processes, cut out unneeded burden, keep our financials in compliance so there are no unintended consequences, and keep our program workers focused on what they really need to be working on.

In conclusion, I challenge myself and the readers in two things:

(1)  Get to know your programs.  Invite members of your programs staff to speak to your accounting department.  Volunteer in local programs.  If possible take trips to field sites.  Ensure that you and your staff know what the organization does and what you are all working for.

(2) Be creative.  This isn’t something that is usually said to accountants, and it comes with a word of caution.  Don’t get so creative that you lose sight of the rules and regulations that we must follow.  Getting your organization in trouble with auditors or the IRS can greatly diminish the impact of your organization.  But find ways to innovate inside the box.


Overhead rate is a poor measure of efficiency

There is no financial metric more scrutinized in the not-for-profit world than overhead percentage. As a result I am always hesitant to write on the topic because any article on overhead tends to be perceived as one of two messages: “Overhead rates are too high, and not-for-profits are not to be trusted” or “Overheads don’t really mean anything, so stop trying to compare organizations and just give us money anyway”. The truth is that organizations should welcome comparisons to peers. Such comparisons allow donors to make wise decisions, which results in funds flowing to the best managed not-for-profits. But, is overhead the best measure of quality management?

I like to point out that effectiveness (not efficiency) is usually a donor’s primary concern when giving. At least this is true for me. Of course I would like the organizations to which I give money to be both effective and efficient. However, if you made me choose, I’d rather an organization make an inefficient but real change, than operate efficiently but fail to make a substantive change with its programs. The problem is that effectiveness is difficult to measure and much harder to compare across organizations. How do you compare teaching a child to read, saving a forest, and preventing a disease through immunization? In which case does a dollar achieve the most good? And even more challenging: how do you compare program quality across these categories? Absent comparable effectiveness measurements, donors and not-for-profits turn to overhead as a measurement of quality management. Overhead doesn’t measure effectiveness, but at least it measures efficiency . . . or does it?

I am participating in a project to measure the efficiency of the finance function across many of our organization’s ministry national offices. To do so, we’ve defined some efficiency metrics which allow us to compare our offices. These metrics include items such as cost per paycheck generated, cost per invoice paid, cost per employee expense report, etc. The common theme in these efficiency ratios is that the cost is divided by the outcome achieved. This allows for meaningful comparisons across offices, and useful evaluations of potential process improvements.
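The common shape of these metrics can be sketched in a few lines of Python. This is only an illustration of the idea (cost divided by the outcome achieved); the offices and figures below are invented, not data from our project:

```python
def cost_per_unit(total_cost, units):
    """Efficiency metric: cost divided by the outcome achieved."""
    return total_cost / units

# Hypothetical payroll-processing costs and paychecks generated per office
offices = {
    "Office A": (12_000, 2_400),
    "Office B": (9_000, 1_500),
}

for name, (cost, paychecks) in offices.items():
    print(f"{name}: ${cost_per_unit(cost, paychecks):.2f} per paycheck")
# Office A: $5.00 per paycheck
# Office B: $6.00 per paycheck
```

Because each number is anchored to an outcome (a paycheck, an invoice, an expense report), the offices can be compared directly even if their total budgets differ.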

In doing this work I was struck by how different an overhead rate is from the efficiency metrics we are using. Overhead is not the measurement of cost against an outcome (cost per life transformed, cost per tree saved, cost per beneficiary trained), rather it is a ratio between types of costs (percentage of costs which are general and administrative, as compared to costs which are directly related to programs). This ratio among costs fails to capture actual efficiency and can lead to some surprising results. Consider a food shelter that is able to replace hired food servers with volunteers. The food servers are directly related to program activities, thus when they were paid the costs were programmatic. Removing these program costs increases the shelter’s overhead ratio. This works in reverse as well. Imagine a charity finds three vendor bids for a product needed for distribution in its programs. The organization could reduce overhead by intentionally purchasing from the more expensive vendor.
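The food-shelter example above can be put in numbers. This is a hypothetical sketch with invented figures, but the arithmetic shows the paradox: replacing paid servers with volunteers lowers total cost (a real efficiency gain) while the overhead rate goes up, because overhead is a ratio between types of costs rather than cost per outcome:

```python
def overhead_rate(program_costs, admin_costs):
    """Overhead ratio: admin costs as a share of total costs."""
    return admin_costs / (program_costs + admin_costs)

# Before: paid food servers count as program costs
before = overhead_rate(program_costs=90_000, admin_costs=10_000)
# After: volunteers replace the servers; program costs fall,
# admin costs are unchanged, and the same meals are served
after = overhead_rate(program_costs=60_000, admin_costs=10_000)

print(f"before: {before:.1%}, after: {after:.1%}")
# before: 10.0%, after: 14.3%
```

Total spending fell from $100,000 to $70,000 for the same output, yet the overhead rate rose from 10% to roughly 14% — the metric penalizes the improvement.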

Now, these examples may be a bit of a stretch. However, I think you see my point. The overhead ratio rewards inefficiency in program costs, which are the majority of most not-for-profits’ costs. I certainly do not believe organizations are intentionally choosing inefficient program costs to manipulate overhead. But, I think it is possible that the focus on overhead rates can blind management to potential efficiency gains in program costs. Ironically, if donors and managers have been focused for years on managing to a low overhead rate, many organizations may have already realized the big efficiency wins in management and general expenses. For these organizations improvements in overall efficiency may yield higher overhead rates.

The strongest advantage of overhead as a metric is that it can be used to compare different types of nonprofits. Hospitals, schools, conservation groups, and homeless shelters can all be compared on overhead rate. Unfortunately this strength breaks down on closer inspection. One of the more interesting things I have learned since I started working for a nonprofit is that overhead closely corresponds with the type of nonprofit organization (or at least its funding source). Organizations with a large GIK component to their ministry often have very low overhead rates due to the value of the goods they distribute. Child sponsorship organizations tend to have higher overhead rates due to the additional administrative effort required to connect each sponsor and child. (There are arguably programmatic and stability advantages to this higher cost.) Grant-funded organizations tend to have overhead rates which fall in the middle. In other words, the type of donations received can have a bigger impact on overhead rate than the quality of an organization’s management.

So, what should we be measuring? There is clearly need for comparisons among not-for-profits. There are also benefits from these comparisons both for donors and management. But there are clear flaws in overhead as the comparison tool of choice. I think it would be wise to evaluate similar types of not-for-profits based on a grouping by mission (for example Relief & Development, Conservation, Medical Research, etc.). For each grouping an efficiency measure could be determined by dividing total costs by a common outcome metric. For example, animal shelters could report costs per animal served. Then efficiency could be better gauged for similar organizations.

I also think that breadth of analysis can be a solution as well. Part of the problem with overhead is that it is often viewed as the defining, authoritative metric. If other metrics are considered as well (unrestricted undesignated net assets, growth rate adjusted for organizational size, liquidity, etc.) a more complete and useful comparison emerges.


Outcome reporting and Failure Reports

Is the future of non-profit reporting changing?

Historically, non-profit reporting made available to the public has included, among other things: a mix of financial results; significant accounting assumptions and disclosures; governance and accountability information; and program outputs. Non-profit watchdogs and accountability organizations like the BBB and the IRS typically have their own reporting requirements. However, these requirements tend to focus on financial results rather than outcome reporting.

Outcomes differ from outputs in that outputs represent products or services produced. Outcomes are the achievements or effects and changes resulting from the outputs. For example, a child’s attendance at school is the output of an education program. Children learning reading skills are the outcome which results from school attendance.

More and more, I hear discussions about how non-profits should be assessing and reporting outcomes. Charity Navigator is implementing several phases to revise its charity rating system (CN 2.0). The last phase expects charities to disclose information about their results. Not their financial results, but their programmatic results. In this video Ken Berger talks about the continuum charities are on, the evolutionary process that makes for high impact organizations. The sense that I get is that it’s not about being perfect; rather it’s about moving past yesterday’s errors and learning from past shortcomings.

Taken a step further, I’ve recently seen several non-profit organizations preparing “failure” reports. These reports openly describe a few instances when an organization has not done well on a project, or even failed at it, as well as lessons learned from these experiences. These reports go as far as identifying several changes to be made in the future.

Both Engineers without Borders Canada and the Robert Wood Johnson Foundation publish “failure” reports. Even though it exposes some of their failures, I come away believing their next dollar raised will be used even better because they have learned from their mistakes. They are moving down a continuum to not only gather data and assess outcomes, but also incorporate their outcome assessments into their next strategy.

Are failure reports the latest trend in non-profit reporting or a useful tool for meaningful learning and innovation? If failure reports are not done with the right motivation, in the right way, they could have negative effects. Here are several opportunities and advantages, as well as pitfalls and disadvantages, to consider in deciding whether or not to prepare a failure report:

• The biggest opportunity is to learn, then invest, then develop new programs which are more successful long-term. Improvements and innovation can become part of routine assessments instead of incidental happenstance.
• Reflection, a process so often overlooked in the tyranny of the urgent, is instead prioritized. Resources are invested to intentionally assess and evaluate outcomes and failures.
• Organizations have an opportunity to learn from each other’s mistakes through increased visibility of past failures and lessons learned.
• Organizations have a better opportunity to avoid recurring failures.
• Donors have an opportunity to see which organizations are making improvements and moving down a positive continuum.
• Publishing failures and planned changes creates a public accountability partner to push an organization to prioritize the necessary follow-through.

• If the report is created out of the wrong motivation, it may be easy to fall into the trap of surface-level, cleansed, or “PR friendly” assessments. (kind of like in a job interview when you say your biggest weakness is working too much…)
• Donors who cannot maturely accept failure or who do not see enough improvement may not give to the charity again.
• Current donor perception is often driven solely by financial results. In order for failure reports to be successful, the public perception of charities and their values must be changed to value innovation and effectiveness.
• Many non-profit industries are typically expected to spend all of their donated funds immediately. This expectation must be bucked to allow time for innovation and investment in newer, more successful project models.

It’s always been important for donors to get assurance about financial results, but the interest in reports on organizational programmatic effectiveness, or its lack thereof, is increasing. If we can’t honestly measure, report and analyze program effectiveness, we won’t gain the important lessons that provide for the greatest success.




