Survey fatigue, or over-surveying, is something to recognize and guard against.
Students can feel burnt out from requests for their feedback about academic courses, campus events, and general college experiences — all on top of customer satisfaction surveys they receive from retailers.
Survey fatigue can lead to declining response rates, and data collection is only worthwhile when the data you gather are of sufficient quantity and quality. Account for this whenever you coordinate assessment projects.
Before I say more, it is important to identify the most appropriate data collection method for your needs. Although surveys are familiar to staff and relatively easy to create, they shouldn't always be your default assessment option. Many other data collection methods deserve consideration alongside surveys. First identify what you intend to measure, then determine the best way to measure it. (My favorite method is rubrics, but see page 25 of this assessment guide for more.)
When surveying is the most appropriate method, there are a number of considerations and strategies you can choose from, related to design, administration, use of results, and deliberate engagement of stakeholders. These elements make for good survey methodology while also increasing your response rates!
INTENTIONAL DESIGN
1. Only ask necessary questions.
Toss out nice-to-know questions and any question whose data you can't pinpoint a specific use for.
2. Use survey logic, or skip logic, to ask population-, behavior-, or response-based questions.
This way, students only receive relevant questions and never have to select N/A for items we already know don't apply to them. (For example, fourth-year students get questions relevant to their experience rather than ones meant for first-years.)
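To make the routing concrete, here is a minimal sketch of skip logic in Python. The question IDs and class-year labels are hypothetical; in practice, survey platforms implement this through their own branching or display-logic settings.

```python
# Minimal sketch of skip logic: route respondents to follow-up questions
# based on an earlier answer. Question IDs and class years are hypothetical.
FOLLOW_UPS = {
    "first-year": ["q_orientation", "q_first_semester"],
    "fourth-year": ["q_capstone", "q_career_prep"],
}

def questions_for(class_year: str) -> list[str]:
    """Return only the questions that apply to this respondent."""
    common = ["q_overall_satisfaction"]
    return common + FOLLOW_UPS.get(class_year, [])

print(questions_for("fourth-year"))
# ['q_overall_satisfaction', 'q_capstone', 'q_career_prep']
```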
3. Determine what data you already have that doesn't need to be collected by survey.
As long as you collect a common identifier (such as a student's email or ID number), you can pull data like demographics, class year, and major from your student information system.
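As an illustration, here is a minimal sketch of that join in Python with pandas, using email as the common identifier. The column names and values are hypothetical stand-ins for a survey export and an SIS roster.

```python
import pandas as pd

# Hypothetical data: survey responses that collected only an email address,
# plus a roster exported from the student information system (SIS).
responses = pd.DataFrame({
    "email": ["ana@example.edu", "ben@example.edu"],
    "satisfaction": [4, 5],
})
roster = pd.DataFrame({
    "email": ["ana@example.edu", "ben@example.edu", "cam@example.edu"],
    "class_year": ["fourth-year", "first-year", "second-year"],
    "major": ["Biology", "History", "Economics"],
})

# A left join keeps every survey response and attaches SIS data wherever the
# identifier matches, so the survey never has to ask for class year or major.
merged = responses.merge(roster, on="email", how="left")
print(merged)
```

Because the join happens after collection, the survey itself stays shorter and respondents spend their effort on questions only they can answer.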
4. Put demographic questions at the end.
These questions are quick and easy to answer, so they take little effort to close out the instrument, but placing them early can delay getting to the "point" of the survey. The only exception is when you need a demographic question for skip-logic purposes (such as capturing class year to determine follow-up questions).
5. Pilot your instrument with students to check the clarity of questions.
You may think that your questions make sense, but since you’re not the targeted survey taker, be sure to get feedback from your intended audience and adjust accordingly.
6. Test your instrument and administration method for functioning links and logic.
It's always disappointing to check your survey results only to see that a question was worded incorrectly, that a response option was missing, or that the logic and links didn't work. The good news is that, with testing, you can catch those omissions and errors before sending out your survey.
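Part of this testing can even be automated. Here is a minimal sketch, using Python's requests library, that checks whether each link embedded in a survey still resolves; the URLs are placeholders, not real addresses.

```python
import requests

# Placeholder URLs standing in for links embedded in the survey
# (consent forms, resource pages, and so on).
survey_links = [
    "https://example.edu/consent-form",
    "https://example.edu/resources",
]

for url in survey_links:
    try:
        # A HEAD request fetches only the status code, not the page body.
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        label = "OK" if status < 400 else f"BROKEN ({status})"
    except requests.RequestException as exc:
        label = f"ERROR ({exc})"
    print(f"{label}: {url}")
```

Logic testing is harder to automate, so walk through each branch of the survey yourself, answering as each respondent type, before launch.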
7. Strive to make your survey accessible via multiple devices.
Students may access and respond to your survey from various devices, including laptops, smartphones, and tablets, so it's critical to make sure your survey adapts and works well on each of them.
Continue reading my other 34 tips to increase survey response rates, grouped in the categories of Invitations, Engaging Students, Engaging Faculty & Staff, Using the Results, and Investing Resources.