You Don’t Need to Be a Statistician: A Brief Look at "Pre" and "Post" Measures to Provide Practical Data
Let’s say, for example, you want to know whether a program your area is offering is meeting expectations.
Often a form is provided to collect participant feedback. It might include questions such as:
1. What did you find most positive from this program?
2. What changes could make this program even better?
This is a fine start to gathering qualitative data. However, it is often wise to collect both quantitative and qualitative data. For quantitative data, a simple five-point Likert scale is often a good option. It is easy to develop and easy to fill out, it is generally considered reliable, and all the points on the scale are weighted equally, which aids interpretation.
So, we often see questions on feedback forms today that use this scale: a statement is posed, and you are asked to choose one of the following:
1. Strongly disagree
2. Disagree
3. Neither agree nor disagree
4. Agree
5. Strongly agree
What is often overlooked when evaluating a program or training is the use of pre and post measures. This is easily remedied: by using even one pre measure, much information is gained. For example, let’s look at the following question:
"To what extent was this program useful from your perspective?"*
1 = Low (not useful)     2     3 = Medium     4     5 = High (very useful)
*Note: Feel free to change the term "useful" in this scale to one that works better for your efforts (e.g., successful, productive, etc.).
Let’s say all 20 participants return their feedback forms, and the average rating is 4.0. You share this rating with your manager, who asks you why the rating is so low.
You are surprised and say that you think a rating of 4 out of 5 is quite high. Your manager disagrees and feels that a 4.0 is not an adequate rating.
What do you do?
In fact, with the information you have, you cannot actually be sure what this 4.0 means. One option that would support your case that 4.0 is a good rating is to add a pre measure.
So, let’s say, instead, you provide the following two Likert scales, each followed by a qualitative "Why" question (be sure to hand out the first pair before the start of the program):
A1. What are your expectations of this program’s usefulness as you begin?
1 = Low     2     3 = Medium     4     5 = High
A2. Why did you rate it this way?
Ask that the following be completed at the end of the program:
B1. "Now, on leaving, to what extent was this program useful from your perspective?"
1 = Low     2     3 = Medium     4     5 = High
B2. Why did you rate it this way?
Let’s say you again receive all 20 responses, and again the average rating on leaving is 4.0, this time with individual ratings ranging from 3.0 to 5.0. However, you now have an additional data set: the average rating at the start was 2.0, with individual ratings ranging from 1.0 to 3.0!
You would now be able to say, accurately, "On average, the program met and exceeded participant expectations." You could also confirm that, relative to those expectations, the average rating rose from 2.0 at the start to 4.0 on leaving. Of course, your manager could ask why expectations were so low, but because you also asked the qualitative "Why" questions, you would have some data, in the form of specific written comments, to explain this, too. So not only can you show that 4.0 is a substantial increase over participants’ starting expectations, you can also report why they rated it that way.
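If you want to check the arithmetic behind this comparison, here is a minimal sketch in Python. The rating lists and the helper name summarize are hypothetical, invented only to reproduce the averages in this example:

from statistics import mean

# Hypothetical ratings for the 20 returned forms (five-point scale), chosen
# only to match the example: pre mean 2.0 (range 1-3), post mean 4.0 (range 3-5).
pre_ratings = [1] * 7 + [2] * 6 + [3] * 7    # question A1: expectations at the start
post_ratings = [3] * 5 + [4] * 10 + [5] * 5  # question B1: usefulness on leaving

def summarize(label, ratings):
    # Report the average and the spread of ratings for one question.
    print(f"{label}: mean = {mean(ratings):.1f}, range = {min(ratings)}-{max(ratings)}")

summarize("Pre (expectations)", pre_ratings)
summarize("Post (usefulness)", post_ratings)
print(f"Average change: {mean(post_ratings) - mean(pre_ratings):+.1f} points")

Run as written, this reports a pre average of 2.0, a post average of 4.0, and an average change of +2.0 points, the same comparison described above.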
Consider using pre and post measures for other topics as well. If you are working to support a team in its development, for example, you can gather a series of pre-ratings on its ability to collaborate, communicate, and so on. For longer efforts, mid-program measures can provide even greater clarity. By using pre, mid, and post measures, the data you gather can add greater breadth and depth to your evaluations.
Additional tips for developing survey or feedback questions:
1. Ask about only one thing per question. For example, "To what extent was this program useful and practical?" is actually two questions ("To what extent did you find this program useful?" and "To what extent did you find this program practical?"). Choose one or the other.
2. Follow your quantitative questions with an opportunity to write qualitative responses. (As noted above, you could ask "Why?" after a quantitative question.)
3. Keep it short.
4. Ask about the participant, so long as doing so does not compromise anonymity where anonymity is desirable. For example: "Number of years you have worked at the company." This can provide further context for interpreting the data you receive.
5. Consider different uses of the data so that the questions you ask will provide the information you need.
6. Be inventive! Try some new questions to discover what will most engage and interest colleagues.
About the Author:
Jeannette Gerzon, EdD, SPHR, Organization Development Consultant in Human Resources at MIT, can be reached at gerzon@mit.edu.