5 Tips & Tricks for Gathering Meaningful Data to Drive Improvement

April Turner, Student and Intern, North Carolina Center for Nonprofits

Evaluation is arguably one of the most important ways to drive improvement. It provides valuable information on what can be changed to improve the services provided, and it can also be used to gather information that informs other program offerings and services. But while evaluations are the cornerstone of driving improvement, they are not always easy to get right.

Many organizations face difficulties gathering sufficient meaningful data; in other words, they struggle with low participation rates. As more and more evaluations are put out into the world, it is becoming harder to keep users engaged and ensure the collection of essential data. We’ve done some research and gathered the following tips and tricks to help you get the most out of your evaluations. Keep in mind, these tips are not exhaustive but rather a jumping-off point. We hope they help as you prepare evaluations that work as hard as you do to deliver programs and services with a real and lasting impact.

Tip 1: Purpose

Evaluations are created for a reason: to generate feedback that informs improvements in the programs and services you provide. Not only do evaluations have a purpose, but so does the data. As you create an evaluation, consider the purpose of each question you ask. Data should only be collected if it is meaningful, in the sense that it contributes to the desired outcome. If you do not intend to do something with the data a question generates, consider eliminating it. If a question would provide insightful or meaningful data that does not fit within this particular evaluation, consider adding it to a question bank! You can keep a running question bank of things you would like to learn from your program and service users later, when a future evaluation is a better fit.

Questions to consider:

  • How does this question relate to my organization’s desired outcome?
    • Can I explicitly state what is going to be done with the results of a particular question? 
  • What will the responses inform?
  • Do I need to know the answer now, or can it wait (would it be better used in a different evaluation)?
  • Does the question relate to the overall evaluation topic?
    • Does this question fit in with the other questions?

Tip 2: Outside Eyes

A lot of time and energy goes into creating an evaluation. As you near the end of the development phase, it can be hard to step back and make sure your questions will make sense to participants. An extra set of eyes from someone not involved in the development process will help ensure that questions are clear. If you really want to confirm that your questions are clear and will yield the responses most beneficial to your outcome, have someone from your sample population look over the evaluation or, better yet, give it a test run.

Questions to consider:

  • Has someone else reviewed and/or tested the evaluation?
    • Could I have someone from the sample population review the evaluation?
    • If not, has someone at my organization reviewed and/or tested the evaluation?
  • Did they understand what the questions were asking?
    • If the question was open-ended, did they respond in the way I/we intended?

Tip 3: Respect

Evaluations take time, both for those developing them and those participating in them. It is important to be mindful of the time that goes into making, taking, and analyzing evaluations. You want to respect the time of both the participants and those designing and analyzing the evaluation, especially if developing and analyzing the survey falls to a staff member on top of their typical responsibilities.

Avoid biting off more than you can chew. If you create a dense, time-consuming evaluation, it will take your organization considerable time to analyze the data, and it will take participants longer to complete. If participants do not have the time to finish, they are less likely to respond, reducing your participation rate. Similarly, if your evaluation consists largely or solely of open-ended questions, it will take even more time, both for participants completing it and for your organization to pull meaningful trends out of the responses.

Questions to consider:

  • What is my organization’s bandwidth for analyzing the results of the evaluation?
    • Is my organization able to review evaluations that contain several open-ended questions?
  • What is the bandwidth of those completing the evaluation?
    • Do they have time to answer several open-ended questions?
  • Can I modify some of the open-ended questions to make them yes/no questions and still get meaningful insight?

Note: It may not always be feasible to make evaluations shorter or simpler, and that is okay. If possible, express your gratitude to participants for dedicating time to complete the evaluation by compensating them for their time. If your organization cannot compensate all participants, consider entering them in a raffle for a gift card or other form of compensation.

Note: Evaluation logic can be a useful tool when some questions may not apply to all participants. If your evaluation is digital, you can have the survey platform skip to a specific page based on a participant’s answer. If your evaluation is on paper, you can simply tell participants to skip the question or write “NA” if it does not apply to them. This will reduce the time participants spend completing the evaluation.
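For readers comfortable with a little code, the branching a digital survey platform performs behind the scenes can be sketched as a simple rule: show a follow-up question only when a prior answer makes it relevant. The question text and branching rule below are hypothetical examples, not from any particular platform:

```python
# Minimal sketch of survey skip logic (hypothetical questions).
# A real survey platform applies this same idea through its
# built-in branching settings rather than through code.

def follow_up(attended_answer: str):
    """Return the follow-up question, or None when skip logic applies."""
    if attended_answer.strip().lower() == "yes":
        return "How would you rate the workshop? (1-5)"
    # Skip logic: the follow-up does not apply, so it is never shown.
    return None

print(follow_up("Yes"))  # attendee sees the rating question
print(follow_up("no"))   # non-attendee skips it entirely
```

The same participant answer that drives this rule is exactly what a digital platform uses to decide which page to show next.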

Tip 4: Question Type

The kinds of questions you ask (open-ended vs. yes/no) will depend on the kind of data that best serves your analysis and on the audience with whom the results will be shared. It may not be possible to get sufficient information using only yes/no questions, but open-ended questions should be used sparingly and with discretion so that analyzing the data does not become too cumbersome.

When designing your questions, also consider whether you are looking for a specific kind of answer. Participants may not respond the way you intend if questions are open-ended, and they should not be tasked with accurately predicting what kind of response would be best for analysis. If you need a certain kind of response, focus participants’ thinking by providing examples. This lets them share their thoughts while providing data that is most useful for analysis.

Questions to consider:

  • In order to achieve my desired outcome, will I need quantitative data, qualitative data, or a combination of both?
  • To present the data, will I need quantitative data, qualitative data, or a combination of both?
  • Can I get insightful information by asking yes/no as opposed to open-ended questions?
  • Do I need a specific kind of response to analyze the data for a given (open-ended) question?
    • Do I anticipate participants responding in such a way?

Tip 5: Candor is Key

Evaluations help drive improvement if, and only if, constructive feedback is given. If evaluations only generate positive feedback, your organization will not know if there are any components that need modification or improvement to increase impact. It is important for participants to feel welcome and comfortable sharing feedback, both positive and constructive. Explicitly invite your participants to be honest in their responses and remind them that their responses are what help to drive improvements for the future.

Questions to consider:

  • Does my evaluation include opportunities for participants to provide feedback?
  • Does my evaluation encourage participants to be honest?
  • Does my evaluation remind participants of their role in driving improvement?

This list of tips and tricks is by no means exhaustive, nor will every tip apply to every situation. Evaluation methods are continuously evolving, as are ways to increase engagement and participation among users. We hope these tips and tricks help you create meaningful evaluations and bring you closer to achieving your desired outcomes.

April Turner is a senior at the University of North Carolina at Chapel Hill pursuing a bachelor of arts in public policy. She currently serves as an intern with the North Carolina Center for Nonprofits. April is passionate about the nonprofit sector and its commitment to uplifting the community and hopes to pursue a career in consulting.