We’re surrounded by feedback mechanisms every day of our lives. If you’re like me and are hooked on your Fitbit, you get to see how many steps you took on a given day, how you slept the previous night, and whether you hit your exercise goal for the week. If you’re learning a new language using an app such as Duolingo, you get both gamified positive feedback (for example, badges for completing a “streak”) and negative feedback (for example, losing a “heart” every time you answer a question incorrectly).
One of the most common associations with feedback, however, is the feedback you receive during annual performance reviews and more frequent manager check-ins, where you hopefully get a sense of how you’ve been performing on the job—what you’ve been doing well, where you need to improve, and what you want to achieve. All of these avenues for feedback have something in common: the feedback you receive is meant to help you learn, grow, or improve. The same is true in online learning, whether the feedback appears within the content itself, in formative and summative assessments, or in the design of interactions.
What are some of the best practices for writing effective feedback?
1. Connect feedback back to learning goals:
Connecting feedback to the learning goals of a course or module reinforces the relevance of the content and scaffolds learners toward those goals one assessment question or one interaction at a time.
2. Provide constructive feedback:
It isn’t enough to inform learners that they’ve answered a question incorrectly. They need to know why the answer they’ve provided or selected is incorrect. Providing meaningful, constructive feedback is a great way to address any common misconceptions learners may have, clarify difficult content, address guesswork, provide suggestions for improvement, and stimulate reflection.
3. Do not reserve feedback only for incorrect answer choices:
All too often, feedback is presented only when learners provide or select an incorrect answer choice, missing an important opportunity for formative guidance. Providing additional context or an explanation of why an answer is correct can help reinforce concepts, strengthen recall, and consolidate learning.
4. Time your feedback properly:
Consider whether learners should receive immediate or delayed feedback. The choice can depend on the complexity of the content or on the incoming skill level of the learners. For example, if the course content is highly technical or complex, immediate feedback can help learners check their understanding often and build confidence as they progress to more complex topics. Delayed feedback suits branching scenarios that mimic real-world situations: in such situations, one is rarely corrected immediately after making a decision, and the actual consequences often surface only after a string of decisions has been made. Delayed feedback may also encourage learners to reflect and self-correct, provided the design of the assessment allows them to do so. The delay need not be long.
5. Add variation in the remediation:
How often have you seen feedback that begins with generic text such as “That’s correct” or “Incorrect”? Adding variations in remediation helps humanize the feedback by making it more conversational and interesting, and less robotic. Additionally, encouraging or motivating learners through these variations can help improve engagement as they go through the assessments. For example, for incorrect responses, consider: “Actually,” “Not quite,” “That’s not quite right”, “You almost got it!”, etc. Feedback for correct responses can include: “Excellent!”, “Absolutely!”, “You got that right!”, “That’s impressive”, “Awesome,” etc. What are some other variations you can think of?
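If you happen to be building assessments in code rather than in an authoring tool, a variation pool can be as simple as a list of openers drawn at random. The sketch below is purely illustrative—the function name and phrase lists are hypothetical, drawn from the examples above, and not tied to any particular platform.

```python
import random

# Hypothetical pools of conversational openers, using the phrases suggested above.
CORRECT_OPENERS = ["Excellent!", "Absolutely!", "You got that right!", "That's impressive!"]
INCORRECT_OPENERS = ["Not quite.", "That's not quite right.", "You almost got it!"]

def build_feedback(is_correct: bool, explanation: str) -> str:
    """Pair a randomly chosen opener with the explanatory remediation text,
    so learners don't see the same canned 'Correct/Incorrect' every time."""
    opener = random.choice(CORRECT_OPENERS if is_correct else INCORRECT_OPENERS)
    return f"{opener} {explanation}"

print(build_feedback(False, "Review the section on delayed feedback for why this matters."))
```

Even this small touch keeps the remediation conversational, while the explanatory text carries the actual instructional weight.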
6. Provide links to further references as needed:
Learning almost always extends beyond the online course, and relevant additional references can help build motivation and extend the learning experience. These references, however, should be provided only where it makes instructional sense to do so, lest learners get distracted from the core content of the course. Another application of this practice is to point back to relevant content within the course itself when a learner answers a question correctly or incorrectly. This stimulates recall if the learner has already reviewed the content and clarifies their understanding at the same time.
7. Don’t let gamification elements detract from the learning:
While gamification elements such as scores, badges, leaderboards, and collectibles can be fun and engaging, don’t let these detract from the true essence of assessments—helping learners master key concepts or skills and meet their learning goals. If learners are more focused on scoring points in an assessment, they may pay less attention to any explanatory feedback. Learners may also be more tempted to game the system and score points, rather than engage with the course with an actual intention to learn. On the other hand, if learners do not score well, they may feel demotivated about the course in general and may not even complete it.
8. For a behavioral change, show real-world consequences:
If assessment questions mimic real-world situations that learners may face on the job, the feedback should also present real-world consequences of the learners’ choices or decisions. This is especially applicable to simulations and branching scenarios where learners can experiment and make mistakes in a safe environment and therefore be better prepared if similar situations do arise on the job. Showing real-world implications of their choices may also help learners internalize what to do or not to do in different situations, thereby encouraging behavioral change.
In addition to these best practices, consider ways to empower and encourage learners to provide feedback of their own. This could be done through end-of-course surveys, adding space within the course itself for learners to ask questions and leave comments, conducting user testing sessions for potentially problematic areas of the course, and/or extracting performance data on assessments to gain detailed insights into how learners fared. For instance, Raptivity, an interaction-building tool, is designed so that instructional designers and trainers can provide in-depth feedback in all of its quiz and game interactions.
All these methods can help inform and improve multiple facets of course design to create a much more powerful learning experience.
This article was originally published on LinkedIn: https://www.linkedin.com/pulse/providing-good-feedback-how-do-right-sandeep-kulkarni/