On Metrics, Data, Information, Science, and How We Use Them – Part 1


 

16 January 2024

 

By David Allen, Development for Conservation

 

Several years ago, I put together a workshop for Rally on Metrics. Fundraising Metrics, that is. I was and continue to be interested in learning more about what we measure in fundraising and how we use that information to make decisions.

What I have learned is mostly disappointing.

We tend to collect a lot of data, but we don’t necessarily use it to help us make decisions. As much as we pride ourselves on using science in our mission work, we do not use science in our operating environment, and particularly not in Fundraising.

I am still interested in your thoughts on these questions. Please help by using the comments section below or emailing me directly (DavidAllenConsulting [at] Gmail [dot] com).

 

We collect a lot of data but not a lot of information. For example, we count how many new members join every year, but we don’t track where they came from. We know how many people “like” our Facebook page, and we know the open rate for our eNews, but we don’t know whether or how either is related to giving. We know what our average gift is, but we don’t know whether it’s trending up or down or why.

Many of us have goals related to increasing membership to a specific number or by a specific percentage, but the goals are stuck out there with no strategies attached. The number of members we have is determined by the balance between our attrition (renewal rate) and our recruitment. Are we gaining more than we are losing? We can increase our membership by improving either or both of those numbers. But beyond that, we don’t have a clue about what to do or what will actually move either of those needles. Or even whether what we’re doing now will scale. Or what scaling will cost.
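The arithmetic behind that relationship is simple enough to sketch. Here is a minimal illustration in Python – the 800 members, 65% renewal rate, and 150 new recruits are entirely hypothetical numbers, not anyone’s real program:

```python
# Sketch of the membership arithmetic above. All numbers are hypothetical.

def project_membership(members: int, renewal_rate: float,
                       new_recruits: int, years: int) -> int:
    """Project membership forward, holding renewal rate and recruitment steady."""
    for _ in range(years):
        # Next year's membership = members who renew + new members recruited.
        members = round(members * renewal_rate) + new_recruits
    return members

# Hypothetical program: 800 members, 65% renewal rate, 150 new members a year.
print(project_membership(800, 0.65, 150, years=1))
# 670 – we are losing more members than we are gaining
print(project_membership(800, 0.65, 150, years=10))
# membership keeps sliding toward its steady state of roughly 150 / 0.35 ≈ 429
```

Holding both numbers steady, membership always settles at recruits ÷ attrition – which is exactly the “will what we’re doing now scale?” question.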

Will we have enough money every year to fully operationalize our Strategic Plan? Who knows?

Most of us are far more likely to add some arbitrary riser (like plus 5%) to last year’s budget numbers and limit our ambition to that projection than we are to actually consider what might be needed for our Strategic Plan.

Worse in some ways is that we fudge data to make us look better. For example, we aren’t honest about how much fundraising events cost. We don’t include the value of in-kind contributions on the expense side. We under-count staff time or don’t include that cost at all. And we don’t include the fulfillment costs of auction items. (And don’t even get me started on the opportunity cost of not spending that same time on activities with more significant ROI.)

Here’s another example: we count donors or members (and calculate renewal rates) as anyone who has given in the last 15 months – sometimes 18, and sometimes 24 – instead of 12. The reason to count members and calculate renewal rates should be to project income. Because we don’t budget on a 15-month or 18-month basis, these data points are useless. Or we somehow consider members differently than memberships or households. The number of voting adults in a household is only important if each one of them is writing checks.
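A strict 12-month window is easy to enforce in code. Here is a minimal sketch – the household IDs and gift dates are invented for illustration, and it assumes one record per household (not per voting adult):

```python
from datetime import date

# Hypothetical last-gift dates, one record per household.
last_gifts = {
    "hh-001": date(2023, 11, 2),
    "hh-002": date(2022, 6, 15),   # lapsed: more than 12 months ago
    "hh-003": date(2023, 3, 30),
}

def is_current(last_gift: date, as_of: date, window_months: int = 12) -> bool:
    """A household counts as a current member only if it gave within the window."""
    months_ago = (as_of.year - last_gift.year) * 12 + (as_of.month - last_gift.month)
    return months_ago < window_months

as_of = date(2024, 1, 16)
current = sum(is_current(d, as_of) for d in last_gifts.values())
print(f"{current} of {len(last_gifts)} households are current on a 12-month basis")
```

Widening `window_months` to 15 or 18 inflates the count – and, as the paragraph above argues, makes the number useless for projecting a 12-month budget.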

 

I have also found a reluctance, in general, to use science in our fundraising operating environment. This seems ironic for science-based organizations. Science reminds us that the systems at play in our universe are dynamic – always changing. Science reminds us that we are at our most vulnerable when we charge ahead with conviction – in the wrong direction. Science reminds us that often what we accept as fact is only theory – and theories need to be constantly challenged and tested.

For example, we embed color photographs in our fundraising letters, but we’ve never actually questioned whether this will help us raise more money. (It doesn’t.)

We prioritize asking Board members to write notes on letters, but we don’t actually track whether or not this works. (It does, but signing the outside of the envelopes and making phone calls are both more effective.) Plus, most of the time it’s random – we don’t track the notes or ask each Board member to write to the same people each year.

We reach for matching gifts under the assumption that having a matching gift available will help drive increased giving, but we actually don’t know that. (It does.)

And we don’t know whether the amount of the match matters – 1:1, 2:1, 5:1, 1:2. (It doesn’t matter.)

And it never occurs to us to ask whether there is a downside. (There is some evidence that training donors to expect their gifts to be matched actually depresses long-term giving.)

But don’t take my word for all this. Be a Scientist! Test it!

A Scientist would start with one of these theories, review the published data that is available related to it, and set up an experiment to test whether it’s accurate. A Scientist would make a living by being curious about whether there’s a more effective way.
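Such an experiment can be as simple as splitting a mailing list in half and comparing response rates. Here is a sketch using a standard two-proportion z-test – the response counts (52 and 78 responses out of 1,000 letters each) are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical appeal test: half the list gets a plain envelope (A),
# half gets a hand-signed envelope (B). Could B's lift be chance?

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-proportion z-test; returns (z, one-sided p-value for B > A)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 1 - NormalDist().cdf(z)

# Made-up results: 52/1000 responses plain vs 78/1000 hand-signed.
z, p = two_proportion_z(52, 1000, 78, 1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With these made-up numbers, z comes out above the usual 1.64 one-sided cutoff, so the difference would be unlikely to be chance. With a smaller list or a smaller lift, it often isn’t – which is exactly why the test matters.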

 

What questions are you asking of your data? Here are a few suggestions:

  • If I didn’t change any of my fundraising strategies, but instead kept doing exactly what I am doing now, how much money would I raise next year? How much could I project raising five years from now?
  • Are my current strategies sustainable? Are they vulnerable?
  • Are my current strategies scalable if I need to raise more money? If I took one of my strategies and doubled it, would I double my net revenue from that activity? What about ten times? What about for each of my strategies?
  • Am I doing enough to meet the goals of my Strategic Plan? If not, what else could I be doing? Or better yet, what should I NOT do, moving forward, to make room for something that might have better promise?
  • When and how do I make the decision to hire staff?
  • Does out-sourcing make sense?
  • Am I spending enough money on Fundraising?

 

As we head into 2024, I hope you challenge your own assumptions and test whether there might be a better way. Work at getting data that will help you tell whether you’re on the right track or not. Count things that matter. And prepare yourself for making changes based on that information.

Next week, I’ll give you five metrics I think you can use, and detail what I think they can tell you.

 

Cheers, and have a great week.

 

-da

 

PS: Your comments on these posts are welcomed and warmly requested. If you have not posted a comment before, or if you are using a new email address, please know that there may be a delay in seeing your posted comment. That’s my SPAM defense at work. I approve all comments as soon as I am able during the day.

PPS: I’m on vacation this week. Much of the content in this post was originally posted in September 2016. It has been updated.

 

Photo by 12019 courtesy Pixabay

 

 

5 Comments
  • ashley721f6e9288
    Posted at 10:39h, 19 January

    Another helpful thought-provoking post. Thanks, David. You mention signing the outside of an envelope as being helpful with appeals. Is this just because it makes the envelope stand out? If we typically put a short 1-sentence note on the letter, should we consider putting that on the outside of the envelope instead? Would love to hear more about this idea.

    • David Allen
      Posted at 18:02h, 19 January

      Ashley,

Thank you for writing and thank you for the question. “Lift notes” are effective when written on the letter and are particularly effective when they come across as authentic and personal rather than generic. However, they ONLY work if the envelope is actually opened. If it isn’t opened, the lift note is useless. Anything you can do to encourage the recipient to open the envelope, lift note inside or not, will improve results: penning a lift note on the envelope, signing your name through or underneath the return address, printing a provocative graphic, or even using a different size, shape, or color for the envelope. The more people open it, the better your results will be.

      Cheers,

      -da

  • Sally J Cross
    Posted at 13:38h, 16 January

    Another question: Are the current tactics we use for fundraising (mail, email, social, events, giving days, etc., etc.) well aligned with our strategic plan?

Are there tactics we ‘don’t have time/bandwidth for’ (legacy gifts, 1:1 donor engagement) that might be better aligned?

Changing tactics or emphasis (like dropping a popular but draining event or activity) in favor of others may cause some short-term pain, but better serve the organization in the long run. The Agitator blog focuses on data-informed fundraising tactics and results: https://agitator.thedonorvoice.com/award-winning-blog/

  • Jim Perry
    Posted at 08:34h, 16 January

    OK, David, you laid down the challenge for comments, so here you go.

    1. If you are advocating for the use of science, then don’t be a rookie and misuse the term “theory”. For a scientist this is worse than grating your fingernails on the proverbial chalkboard. (I guess whiteboards would not work.) Use the correct term — hypothesis. Now, go look up the scientific definition of theory.

    2. The basic premise of your argument is quite likely correct, but you fail to ask the basic question of why there is a lack of genuine scientific analysis, or maybe follow-up to whatever analysis takes place. My hypothesis is that it requires so much time that it takes away from composing the messages, making the asks and following up with donors. Now, do a scientific analysis and find out the why.

    3. Color pictures don’t increase giving, matching campaigns increase giving but the level of match does not matter. Ok, cite the science behind those data. Put the reference in the post.

    Readers of these comments may think them to be harsh, or maybe not Midwest Nice. They are not meant to be, because you offer excellent food for thought and challenge us the way we need to be challenged. And I frequently pass on your suggestions to the committee I co-chair. But science, even social science, can be a bit ruthless. It comes with the territory. We’re all in this together. 🙂

    P.S. I plead guilty for my organization.

  • David
    Posted at 08:22h, 16 January

My resources for micro-analysis of data are pretty limited to things like retention rate, average gift amount, number of gifts, source, etc. So I mostly act on the macro data from this blog: 4-page letters, donor-centric stories, and so on. It’s helped a lot. Thanks! The bit of data crunching we have done shows that donor happiness (retention) and average gift are key strategies. We see it in the, uh, data.