Designing Effective Surveys
The Science
You can absolutely get useful information from an imperfectly constructed survey. But if you have the opportunity to create your own surveys, you have the opportunity to capture that thing we all crave when recommending instructional design interventions: high-quality data.
I am going to start with a book recommendation. Here is why: I approach my instructional design career the same way I approached my research fellowship many years ago. If I were to bring my design into a classroom of experts, and they had 60 minutes to ask me why I chose X over Y, I would want to be able to say, “I researched the best way to do X, and here is what I found.”
The book that provides the underpinnings of my research methodology is Improving Survey Questions: Design and Evaluation by Floyd J. Fowler Jr. Fowler has a Ph.D. in Social Psychology and has devoted a substantial portion of his academic career to researching survey errors and how to prevent them. (If you get really excited and want to read more of his work, check out Survey Research Methods.)
Note: Improving Survey Questions: Design and Evaluation was published in 1995, but I have yet to find a more recently published book with the same academic rigor behind its recommendations.
Here are my top two highlights from the book.
Designing Good Questions
Chapter 3, “Questions to Measure Subjective States,” provides a strong way to start your question-design task. First, define your objectives. What are you trying to measure? If it’s a factual measurement (how many times you drank tea last week), head back to Chapter 2, which covers gathering factual data.
In Instructional Design we are often seeking pain points in needs assessments (subjectively reported) and assessments of curriculum materials (also subjectively reported).
So step one is defining what is to be rated. Is it overall satisfaction with a live learning session? The usability of a learning management system? Whatever the objective, that’s the north star for both the question and the answers to the question.
The appendix in the back of the book is invaluable here because it provides a great many options for answer scales.
You will probably recognize this scale for Asking Evaluative Questions:
Excellent
Very good
Good
Fair
Poor
If we were trying to measure a participant’s overall satisfaction with a training, we could pose the question this way:
---
1. Overall, how would you rate the live lesson today?
Excellent
Very good
Good
Fair
Poor
2. Please tell us why.
---
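If you build surveys programmatically, the two-part item above can be represented as a small data structure. This is an illustrative sketch of my own (the class and field names are not from Fowler's book): each item pairs a defined objective with either a closed-ended scale or an open-ended follow-up.

```python
from dataclasses import dataclass, field

# The evaluative scale from the example above
EVALUATIVE_SCALE = ["Excellent", "Very good", "Good", "Fair", "Poor"]

@dataclass
class SurveyQuestion:
    """A single survey item: a defined objective plus its answer format."""
    objective: str                # what the question is meant to measure
    prompt: str                   # the wording shown to the participant
    scale: list = field(default_factory=list)  # closed-ended options, if any
    open_ended: bool = False      # True for free-text follow-ups

# The two-part item from the example above
overall_rating = SurveyQuestion(
    objective="overall satisfaction with the live lesson",
    prompt="Overall, how would you rate the live lesson today?",
    scale=EVALUATIVE_SCALE,
)
follow_up = SurveyQuestion(
    objective="context for the overall rating",
    prompt="Please tell us why.",
    open_ended=True,
)
```

Keeping the objective attached to every question makes it easy to audit a draft survey: any item whose objective you cannot state plainly probably needs rework before it goes out.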
We would absolutely have further questions about specific aspects of the training (accessibility, clarity, relevance), but above we have a solid start for our survey questions: a well-defined objective (overall quality of the lesson) and a research-backed evaluative scale running from Excellent to Poor. We have also included a write-in section where participants can share details that might not be standardizable, but that add vital context for why one intervention might work better than another when we look to improve the lesson.