• Trial advocacy training materials

• Legal professional development

Reporting Feedback Data: Top Box v. Average Scores

How we report the results of survey data is an important and subtle component of how we evaluate program and association success. While lots of different satisfaction scoring scales exist (1 to 5, 1 to 7, 1 to 10, etc.), most associations and PD providers use one of two methods for interpreting and reporting data: the arithmetic mean (or average score) and the top box score (the percentage of respondents who gave the highest ratings). What you get from each method, and the consequences of that analysis, can be quite different, so choosing the method that best aligns with your survey and organizational objectives deserves some consideration.

Top box scores. Top box scores represent the percentage of respondents who gave the best responses (on a scale of 1 to 10, either a 10, or a 9 or 10). Possible scores range from 0% to 100%. Top box scores are easy to understand because they clearly identify how many people fall into a certain category, for example, very happy or happy. Organizations understand the difference between 76% of respondents being very happy or happy and 69% of respondents being very happy or happy. The downside of top box scores is that they throw away some data: the middle, but also, if the bottom box numbers are not shared, important low box data. High top box scores, by themselves, can eclipse low box scores that may signal a problem. For example, if 20% of respondents were very unhappy, wouldn’t you want to know that?

Average scores. With average scores, the mean is calculated by summing all responses and dividing by the number of responses. On a scale of 1 to 10, possible scores range from 1 to 10. Average scores are easy to calculate and take into account the full range of responses, from very unhappy to very happy, so they provide the best overall statistic of the typical rating (especially valuable for year-to-year comparisons). On the other hand, average scores are less intuitive, especially within a single event: what’s the difference between an 8.1 and an 8.5?
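If you like to see the arithmetic spelled out, here is a minimal sketch in Python of how the two metrics come out of the same set of responses. The ratings and helper functions are hypothetical, and the example assumes a 1 to 10 scale with 9 and 10 counted as the top box.

```python
# Minimal sketch: computing top box and average scores on a 1-to-10 scale.
# The ratings below are hypothetical sample data, not real survey results.

def top_box_score(ratings, top_values=(9, 10)):
    """Percentage of respondents whose rating falls in the top box(es)."""
    return 100 * sum(r in top_values for r in ratings) / len(ratings)

def average_score(ratings):
    """Arithmetic mean of all ratings."""
    return sum(ratings) / len(ratings)

ratings = [10, 10, 9, 9, 8, 7, 7, 6, 3, 2]

print(f"Top box (9 or 10): {top_box_score(ratings):.0f}%")  # 40%
print(f"Average score:     {average_score(ratings):.1f}")   # 7.1
```

Notice that the two low ratings (20% of respondents) are invisible in both headline numbers, which is exactly why reporting the bottom box alongside the top box is worth the extra line in your summary.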

So which metric should you use? Ultimately, the answer depends on your priority: statistical precision or audience comprehension. A combination of the two metrics is the most honest, understandable and useful approach, often presented as top box scores in the highlights or executive summary, with average scores later in the report. This blended approach may require more effort, but it ensures that the organization and the reader receive all the data, and are therefore better able to act on the responses to make improvements.

Regardless of which interpretation and reporting method you choose, one thing is certain: alternating between approaches when it suits your purpose – especially if your goal is validation rather than positive change – will create long-term credibility problems. Objective analysis and reporting of survey results are just as important as proper survey construction.

Improving the Quality of Your Feedback Forms

Knowing how to collect and use feedback is an essential part of a PD professional’s tool kit. Every time you deliver a program, you make choices about speakers, topics, delivery formats, materials and other elements that you hope will contribute to a great attendee experience. Effective PD people are always innovating, so you need reliable evaluation methods to accurately identify the elements that deliver and the ones that disappoint.

The best evaluation forms do not take long to complete, measure behaviour or knowledge as well as attitudes, and answer three key questions – what worked, what didn’t work, and what can be improved next time. This post is about how to design feedback forms that get the kind of information you need to improve programs. For suggestions about how to get more responses to a well-designed feedback form, see last week’s post, Practical Tips for Increasing Feedback Response Rates.

Here are 10 basic principles for designing helpful feedback forms:

1. Keep the feedback form short. It should be a maximum of one page and take no more than five minutes to complete.

2. Only ask attendees about what you can change or improve. For example, if the program has to be delivered in a particular location and you can’t do anything to change or improve it, asking about the venue wastes attendees’ time and loses an opportunity to collect meaningful information.

3. Tie evaluation to program objectives and desired outcomes. What are you trying to assess – content, teaching methodology, delivery format, price point, networking opportunities? Ask attendees about what you want to evaluate.

4. Ask attendees about learning as well as attitudes. Questions about attitudes measure perceptions and feelings, for example, rate the quality of the program or presenter. Questions about learning measure behaviour and knowledge, for example, did you understand the program content or materials? Do you feel you can apply what you’ve learned?

5. Mix closed-ended multiple choice questions (easy and quick to answer, and they enable comparison across programs, but they are not particularly informative for improving a specific program) with open-ended questions that ask for comments (these require more time to answer but can generate specific, informative feedback).

6. When using closed-ended multiple choice questions, choose numbered responses (rate the quality of the program materials on a scale of 1 to 5, 5 being excellent and 1 being poor) or labelled responses (I learned new information that will help me in my day-to-day practice: 1 strongly agree to 5 strongly disagree). Clearly define the numerical rating for each response, make sure that the scale is visible, and make sure that the questions fit the responses.

7. For closed-ended multiple choice questions, offer five responses to avoid the stilted results that come from too few choices and to avoid paralyzing participants with too many. For more on this topic, see my future posts on Quick Tips for Feedback Compilation and Analysis, and Top Box v. Average Scores.

8. Make open-ended questions as specific and focused as possible so attendees don’t feel obligated to write an essay. For example, what are the three most important things you learned today? What was your favorite part of the program? What content was not covered in the program that you would like to see included next time? Provide adequate space for attendees to share their suggestions.

9. Reserve two response options (yes/no) for questions regarding future actions, for example, will you recommend the program to a colleague, or would you attend an advanced version of this program? Include a small space for additional comments for maybes and qualifications.

10. Include a question about the actions attendees are willing to take, for example, recommending the program to others or returning next year. Both are very useful indicators.

Properly conceived and executed, feedback forms are powerful tools for change and innovation. Make it your mission to design user-friendly forms that capture the best and worst parts of your programs, and then use the information you receive to make your programs better.

Practical Tips for Increasing Feedback Response Rates

A lot of clients I work with still favour end-of-program paper evaluation forms. Call me old-fashioned, but I like paper forms too. Among other advantages, hand-out surveys, as they are sometimes called, let you personally communicate the importance of providing feedback, allow you to capitalize on the fact that participants are physically available, provide an instant check on your assessment of the program, and work for big and small budgets alike.

Assuming that you have crafted quality questions (see next week’s post, Improving the Quality of Your Feedback Forms), response rates for hand-out surveys tend to average in the 40 to 50% range for most CLE providers (usually closer to 40%). I think those numbers are too low. My experience is that the smaller the program, the higher the response rate. So, for example, in a hands-on workshop limited to 12 participants, it’s not unusual to see a 91% response rate (or 11 forms). As the number of attendees grows, the response rate declines, so that by the time you’re in the 500-plus category, you’re maxing out at 50%. But those are big attendance numbers typically reserved for signature events. Most CLE programs in Canada are in the 50 to 120 participant range. For that size of audience, your goal should be a response rate of close to 70%.
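For those who like to see the arithmetic behind those targets, here is a minimal sketch in Python. The helper functions and attendance figures are hypothetical examples, not benchmarks drawn from any particular provider’s data.

```python
import math

# Minimal sketch of response-rate arithmetic; all figures are hypothetical.

def response_rate(forms_returned, attendees):
    """Response rate as a percentage of attendees."""
    return 100 * forms_returned / attendees

def forms_needed(attendees, target_rate=70):
    """Completed forms required to reach a target response rate (in percent)."""
    return math.ceil(attendees * target_rate / 100)

print(f"12-person workshop, 11 forms returned: {response_rate(11, 12):.1f}%")
print(f"120-person program, 70% target: {forms_needed(120)} forms")
```

In other words, hitting a 70% goal on a 120-person program means collecting roughly 84 completed forms, which is where the barrier-removal tactics below come in.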

Why? The better the response rate, the more weight you can give the results. And if you don’t have feedback from one third or more of your attendees, there’s a chance their responses could change the results. So how do you encourage more attendees to respond? A good starting point is to remove barriers to completing and returning the form. Here are some practical suggestions:

  • Make the form easy to identify and find (print it on distinctive paper)
  • Give respondents the option of keeping their comments anonymous or providing their name and contact information for follow-up
  • Make the form easily readable (font size matters) and if possible, visually appealing
  • Make the form easy to fill in (aim for a mix of open-ended and multi-response rating-style questions, e.g. “1 to 5” or “poor to excellent”)
  • Don’t waste attendees’ time with questions that don’t matter (“will you come back?” matters)
  • Allow enough time to complete the form (for example, in a one day program, hand it out when attendees check in)
  • Tell attendees why their feedback is important and what you will do with it
  • Remind attendees throughout the day to complete the form, and be sure to remind them at the last break or before the last presentation
  • Have extra forms on hand for attendees who lose, leave behind, or take notes on their form
  • Provide pens for attendees who don’t have one
  • Provide a surface to write on
  • If there is no surface to write on, consider printing the form on card stock so it is easier to write on in an awkward position
  • Consider motivating attendees to return the form with some kind of extra push (it shouldn’t be the reason for completing the form, and it doesn’t always work but I have seen response rates soar when completed forms were tied to receiving an answer key or certificate)
  • Make it easy for attendees to turn in the form on their way out (provide a box at every exit or even better, have people standing there, asking for and collecting forms)
  • Add a fax number and/or mailing address to the bottom of the form for people who say they will “send it in” (they almost never do but still…)
  • The day after the program, circulate the form (or an online version of it) by email and remind attendees who did not return it why you value their response
  • Thank attendees for completing the form

And one last gem. Get to know your attendees. Attendees who you know by name, or who know you or your team, will be more likely to turn in feedback forms than attendees with whom you have no relationship.

Form filling is not fun (even when an attendee really enjoyed the program). It’s important for providers to remember that, and dedicate a little bit of time and resources to increasing response rates.