African Journal of Health Professions Education
On-line version ISSN 2078-5127
Afr. J. Health Prof. Educ. (Online) vol.15 n.1 Pretoria Mar. 2023
http://dx.doi.org/10.7196/AJHPE.2023.v15i1.1630
RESEARCH
Balancing responsibility-sharing in the simulated clinical skills setting: A strategy to remove barriers to feedback engagement as a new concept to promote a growth-enhancing process
R M Abraham
MB BS, PG Dip (Anaesthesia), PG Dip (Public Health), MMedSc, PhD; School of Clinical Medicine, Clinical and Professional Practice, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
ABSTRACT
BACKGROUND. When feedback is provided in a formative context, it must be used effectively by learners. Many barriers prevent medical students from meaningfully engaging with feedback in the clinical learning environment.
OBJECTIVE. To explore how medical students engage with feedback in preclinical skills training.
METHODS. Using an exploratory qualitative methodology, data from five focus groups comprising 25 purposively selected third-year medical students were iteratively analysed, and key themes were identified and clarified.
RESULTS. The data revealed barriers that inhibit the use of feedback, ranging from students' difficulties with decoding feedback, to their unwillingness to expend effort. Thematic analysis revealed four major themes related to the barriers to feedback receptivity and utilisation.
CONCLUSION. Without collaboration, neither clinical educators nor students are empowered to fully remove the abovementioned barriers. Promoting a student's learning is often framed as predominantly the task of their clinical educators. With a move towards constructivism, competency-based medical education holds that effective learning requires students to complement and significantly share in their educators' responsibilities for their academic growth. Developing a responsibility-sharing culture in the giving and receiving of feedback ensures that students benefit fully from the feedback received through proactive engagement, leading to effective and sustainable clinical educators' feedback practices. Given the minimal discussion of responsibility-sharing in the context of assessment feedback in medical education, it is necessary to further analyse and discuss this critical issue by considering the expectations that should reinforce such a culture, along with the practicalities of creating this cultural shift within the preclinical skills setting.
Feedback is widely recognised as an essential element of medical education. Data from medical student surveys show gaps in satisfaction with feedback, suggesting a weak link in the process.[1] The consumer model of education, which implies that learners are passive recipients with no responsibility to make feedback effective, explains their poor satisfaction with feedback.[2] The importance of stressing the learner's active engagement in the feedback process cannot be overstated.[3] This calls for an approach that replaces the linear transmission view of feedback with a two-way dialogue, in which learners make sense of feedback information from various sources and use it to enhance the quality of their work or learning strategies.[2,4,5] This approach implies that, while feedback effectiveness still depends on the quality and timeliness of the information provided, it also depends critically on how the learner proactively receives, engages with and acts on this information, referred to as proactive recipience of feedback.[3,6]
The role of medical learners in feedback engagement is under-represented in the research literature; hence, there is a 'blind spot' in our understanding of this issue in medical education.[7,8] Handley et al.[9] cautioned against misinterpreting students' readiness to receive feedback, such as skimming feedback without taking additional effort to apply advice, as evidence of strong engagement, as this is nothing more than lip service. The readiness to receive feedback is an important precursor to proactive recipience, but it is not the only one. Supporting learners' development of skills such as self-appraisal, assessment literacy, goal setting, self-regulation and motivation helps them to develop as proactive feedback recipients.[3] Higher-education literature paints a bleak picture of how students demonstrate proactive recipience, ranging from absent or poor engagement with feedback to failing to collect written feedback, or just skimming the written comments, with the initial reading representing the end of the engagement.[10,11] Orsmond et al.[12] conducted a qualitative study revealing many student participants' significant insights into the benefits of engaging with feedback, stating, 'When reading feedback, it makes you realise what you could have done; rereading an essay with the feedback in mind helps you to see work in a different light'.
Educational theory and best practice continue to emphasise the importance of student-centred methods, arguing that students' success in higher education may be aided by their capacity and desire to share responsibility for their learning.[6] Shuell[13] stated, 'It is good to recall that what the student does is really more essential in deciding what is learnt than what the instructor does'. Students therefore need to take responsibility for, and exercise autonomy within, the learning process. Sharing responsibility in the feedback process between educators and learners increases students' intrinsic motivation, while also supporting their engagement with feedback and the educators' continuous feedback practice.[2,14] As not all medical students recognise the necessity to proactively engage with feedback, we must consider the barriers that limit effective engagement with feedback. We also discuss suggestions for responsibility-sharing, within the specific context of receiving assessment feedback, as a means of resolving challenges to engagement and promoting a growth-enhancing feedback process.
Methods
Context and setting
The study was conducted at the clinical skills laboratory at Nelson R Mandela School of Medicine, University of KwaZulu-Natal, Durban, South Africa. The school follows a problem-based curriculum, reflecting integration of the biomedical sciences with the clinical disciplines. At the beginning of the academic year, preclinical students are provided with task-specific learning outcomes. Each theme runs for 6 weeks, covering skills related to a specific body system. At the end of a theme, students are expected to demonstrate competence in conducting examination skills using standardised patients. The purpose of the clinical skills assessment is to formatively assess students' competence in performing a skill and to provide structured feedback in a logbook. This feedback is based on directly observed performance of multiple clinical tasks by multiple supervising tutors and peers during the academic year. The longitudinal integrated clerkship paradigm and clinical skills logbook formative assessment are repeated throughout the second and third preclinical years.[15]
Study population
An exploratory qualitative methodology with a purposive sample was used. All 239 third-year medical students who had at least 1 year of experience with feedback regarding clinical skills logbook formative assessment, and were classified as high performers (>70%), average performers (50 - 69%) or low performers (<50%) based on their end-of-semester summative clinical skills performance, were invited to participate in focus group interviews. Five focus group discussions were held, with each group consisting of 5 students (n=25), based on consent and availability of the students. There are no specific sample size guidelines for qualitative research studies, as the aim is to maximise the possibility of extrapolations from the study rather than generalisations.
Ethical approval for the study was granted by the University of KwaZulu-Natal Human and Social Sciences ethics committee (ref. no. HSS/2213/017D).
Data collection
The focus group interviews lasted ~60 minutes and were led by the researcher and a moderator. The researcher was a lecturer responsible for teaching third-year medical students. To minimise the impact of a power relationship - perceived or real - on students' responses, the moderator ensured that the conversation was unbiased and that the participants' accounts reflected their own viewpoints rather than researcher bias. The moderator was also a lecturer, but was not involved in the research project. The interview schedules were semi-structured, with open-ended questions based on the literature to ensure construct validity. The interviews elicited the student cohort's impressions of their engagement with and utilisation of clinical skills feedback, as well as the settings that facilitated helpful feedback. The semi-structured schedule guided questioning during the focus group interviews, and follow-up questions were asked for clarification. Responses were probed further to ensure that the discussions addressed the research topics of this specific study, supporting the study's validity by measuring what it claimed to measure. Discussions continued until no new content emerged. Personal information of the study participants was kept confidential by the researcher, and the findings were reported anonymously by substituting codes for participant identifiers. To ensure appropriate safeguards for the security, anonymity and confidentiality of the collected data, focus group data were stored using a cloud-based storage option on Google Drive during the research.
Data analysis
The audiotaped focus group discussions were anonymised, cleaned and transcribed verbatim by a professional transcriber. The researcher qualitatively analysed the data using systematic text condensation, a method of content and thematic analysis.[16,17] The researcher read the text material several times to become familiar with the data and obtain an overall impression. The researcher then methodically examined the data, concentrating on participants' general views of receiving feedback, and used inductive coding to uncover patterns in the data. Using keywords and text chunks, several characteristics of the feedback processes linked to learner behaviour towards feedback reception that emerged from the data were identified and coded. The contents of each coded group were simplified and summarised. Key themes were extracted from the data by generalising the descriptions and concepts that arose regarding receiving feedback and the challenges faced in using feedback. Finally, the important themes from all the responses were organised into distinct categories. The researcher used a codebook in the form of an Excel spreadsheet (Microsoft, USA) to monitor and validate the codes throughout the data analysis process. To enhance the validity and trustworthiness of the data, the researcher coded the data and had the coding checked by a colleague who was familiar with the clinical skills feedback culture.
Results
Twenty-five demographically diverse, mixed-gender students (15 female (60%) and 10 male (40%)) participated. There were two groups of 5 students each with >70% end-of-year OSCE (objective structured clinical examination) marks (F1 and F5), two groups with 50 - 69% (F3 and F4) and one group of 5 students with <50% (F2). F1 and F5 comprised the higher-performance category, while F2 - F4 were combined to make up the lower-performance category.
Challenges with receiving and using feedback
In their discussions, participants consistently described facilitators of and barriers to understanding and implementing feedback. A thematic analysis of the barriers revealed four main themes related to feedback recipience. The themes, together with supporting quotations from the participants of the five focus groups (F1 - F5), are described below.
Understanding
One of the reasons students failed to engage with feedback was that they did not understand the feedback message or did not know what the message was for. One of the participants mentioned:
'I never thought feedback was such an important topic to warrant so much discussion.' [F3]
There is the possibility that students may not have realised that they have received feedback. There could be a misalignment in the students' and educators' understanding of the definition and purpose of feedback. A student illustrated this point, stating:
'Sometimes with the unclear feedback it is sometimes a lack of understanding ... and this may hold you up from using it.' [F5]
Awareness
Students' lack of knowledge of opportunities available to them to effectively use and implement feedback could be another reason why they fail to engage with feedback. Sometimes students might know that a particular skill needs improvement, but do not know what to do, or how to confirm that their efforts have been successful. A student illustrated this point, stating:
'I mean generally it would be useful to refer to the action plan. Though I don't know if it would be useful, especially if I know my problem but need more assistance with developing my skill.' [F3]
Furthermore, there are no opportunities for follow-up on feedback unless one fails the logbook skill, as stated by one student:
'But obviously the logbooks, they come first and then there is OSCE so I would make sure that I study hard and I go deep with my notes before the OSCE, because you can never go back to the logbook session to know if I have improved, unless you have failed it.' [F1]
Students may lack knowledge of strategies that they can take to act on feedback. We cannot therefore assume that students will know what to do with feedback.
Agency
If students feel inadequately equipped to implement feedback, or feel that their efforts would be futile even if they try, they may find it difficult to deal with feedback. Students who perceived that their prior attempts to respond to feedback did not lead to improvements in their performance over time might give up. A student mentioned a lack of motivation to use feedback, as he would never receive a better rating from a particular doctor even if he put in effort to change his performance, and so was likely to ignore feedback:
'With some doctors they will never give a superior performance. It's like just writing for the sake of writing it, not actually feedback. It's something like that.' [F3]
The lack of agency can also arise because students believe that they are being advised to implement feedback based on isolated skills, which are not seen as relevant to their future as a medical doctor. A student mentioned that the assessment process should be modified to develop an integrated approach with feedback as a tool to solve clinical problems:
'We are only concerned with how we assess the [jugular venous pulse] JVP systematically during the logbook session, which includes everything related to confirming it, but it is not everything that you know that will apply when you go to the clinic. As a result, due to extra concerns the patient may have, you may have to remove some aspects of the examination. As a result, we need input that combines the normal and abnormal in a meaningful way' [F2]
Another common cause of limited agency arises from the common modular structure of the curriculum, where students find it difficult to see transferability of advice from one modular assessment to the next. A study participant stated:
'Because the OSCEs are deemed separate from the [end-of-theme tests] ETTs, and the logbooks come just before the ETTs, we're much more focused on the ETT. As a result, we don't place as much weight on logbook feedback once a theme is done. We may review the themes four months later, close to the OSCE.' [F4]
Motivation
Students may simply lack the enthusiasm to engage with feedback, for reasons such as time constraints or being unprepared to invest time immediately. In our study, acting on feedback immediately was not a priority for the participants, as they saw the final clinical examinations (OSCEs) as the driving force to act on feedback closer to the examinations, reflecting a self-regulatory situational focus:
'So, if you're telling I need to work on this and that, I already know I will before the OSCE with my peers. So, I'm going to disregard the criticism for the time being, and the next time I encounter the same skill in the OSCE, I'll know what to do. I prefer to work hard before the OSCE.' [F4]
Many participants perceived a lack of intrinsic motivation and did the minimum needed to attain a particular grade to pass the OSCE, which they blamed on the unequal weighting of courses in the curriculum:
'I believe it's a medical school thing where certain things are more significant than others. So, obviously, you'll spend a lot more time studying anatomy, but if you had a test on clinical skills or an examination every week, for example, it would drive you every week to know, like, I need to get my skills done.' [F5]
Discussion
Removing barriers to feedback engagement
An important question one may ask after identifying the abovementioned barriers to feedback engagement and implementation is whose responsibility it is to remove these barriers. Students believe that it is the educator's responsibility, whereas the educators believe it is the student's responsibility.[18] The diverse barriers highlighted in this study frequently obstruct students' ability to engage proactively with feedback, slowing their clinical skills development. The mutual blame game between students and educators for the failings of feedback further prevents breaking down these barriers to make a difference. To resolve this deadlock, we need to think more concretely about responsibility-sharing and where the different responsibilities lie.
Firstly, clinical educators and medical students have essential roles to play.
Secondly, despite both having a role to play, the responsibilities of educators and students for resolving each of the broad barriers, i.e. understanding, awareness, agency and motivation, cannot be equal, as resolving certain barriers demands greater responsibility from medical students, whereas resolving others demands greater responsibility from clinical educators.
Thirdly, when the barriers are ordered from understanding to motivation, resolving them entails decreasing levels of responsibility for educators and increasing levels of responsibility for students. Educators often identify students' lack of motivation to engage as a critical barrier.[19] Although educators can take steps to constructively encourage students to be motivated, students have the greater power to keep themselves motivated and must be primarily responsible for resolving the barrier of motivation by being willing to put in the effort required to implement the feedback.[4] Similarly, students tend to be more critical regarding issues related to their understanding and awareness of the feedback, pointing out that feedback is insufficiently detailed and hence can be difficult to understand. Although medical students should take steps to enhance their understanding of feedback information, it is the primary responsibility of the clinical educator to ensure that the feedback they give to students is actionable and clear. Feedback that is unclear or unrelated to future assessments makes it difficult for students to reflect on it and set action points. Reflection in turn leads to the use of action points, confirming that feedback needs to be linked to future assessments.[20]
Fourthly, with the barriers arranged in the sequence from understanding to motivation, they need to be resolved systematically. For instance, it is impossible to deal with students' poor motivation to engage with feedback if they believe the feedback is not relevant to their future profession or the next module, so that implementing feedback (agency) is seen as pointless. Although we agree that students' motivation is the most crucial ingredient of proactive engagement, we cannot assume that we can foster motivation in students who do not understand the feedback information (understanding) or do not know what to do with it (awareness). Motivation theory tells us that if someone thinks they cannot accomplish a task, motivation to engage in it fails, which is referred to as learned helplessness.[21] Medical educators have the greatest power to motivate proactive recipience in their students, despite the overall balance of responsibilities between educators and students.
Fifthly, increasing students' motivation to engage with feedback initiates a virtuous cycle, making it easier for both educators and students to further sort out the other barriers. Increased motivation steers students to devote more time in analysing feedback, seeking feedback and taking up offers for further dialogue around feedback.
The abovementioned expectations regarding responsibility-sharing between clinical educators and medical students necessitate an understanding of the various ways in which students and educators can collaborate to remove barriers to engaging with assessment feedback. As an example, when a medical student performs an examination skill, the clinical educator's responsibility is to ensure that the feedback provided on the skill is clear, balanced and specifically related to the learning objectives, thereby overcoming the barrier of lack of understanding. The medical student has the responsibility to seek clarification of the meaning of the feedback they receive.[22]
To overcome the barrier of awareness of strategies to implement feedback, educators should incorporate into the curriculum activities that train students in feedback implementation, avoiding assumptions about students' knowledge of strategies for acting on feedback. Developing students' assessment and feedback literacy skills through initiating reflection, self-assessment and peer assessment as part of the feedback process enhances self-regulation and the incorporation of feedback.[6] Nonetheless, students should take responsibility for deciding which strategies they could use to implement feedback, trying new strategies and deciding when to look for support.
To overcome issues of agency, educators need to develop innovative approaches and assess multiple integrated skills so that feedback is relevant to the student's future practice as a doctor. Medical educators should not make feedback comments so specific to one examination skill that their transfer is limited. Feedback comments should be linked to programme-level outcomes rather than only module-level learning outcomes. Students must recognise their responsibility to work hard at transferring feedback from one context to another by drawing out common themes across the assessments.
Finally, to overcome issues of motivation, educators need to provide repeated opportunities for dialogue by employing sustainable feedback practices, structuring the feedback process in a motivating way so that students feel that improvement is achievable. Students must be committed to change on receiving feedback, as well as being willing to engage with the emotions that may arise with receiving feedback.
Achieving a culture of responsibility-sharing needs co-operation from both students and clinical teachers. Medical students need to realise that proactive engagement with feedback is more than an academic talent for analysing skills performance; it is a transferable, sustainable and lifelong skill that eventually helps them in their post-university careers. According to McGrath et al.,[23] long-term learning gains, viewed as distant goals for employment, go beyond the student's immediate satisfaction with responsibility-sharing. The need for faculty development resources to improve feedback provision by training tutors and peers on how to give constructive, balanced and actionable feedback is critical for long-term feedback practice, as inappropriate feedback is frequently the result of a lack of understanding of the practice. Nicol[2] mentioned that the educator's workload is a major factor in their feedback practice, and developing a responsibility-sharing culture can be labour intensive. However, clinical educators' short-term investment in activities to overcome barriers to feedback engagement has the potential to reduce their feedback-related burden in the long run.
By encouraging students to be proactive in applying feedback, as well as creating and seeking feedback, the educator's overwhelming responsibility of providing increasingly more feedback may be reduced. Achieving these distant goals of proactive recipience is feasible, as it is consistent with the institutional aims of encouraging a supportive change towards responsibility-sharing.
Conclusion
Clinical educators' high-quality feedback is futile unless medical students are prepared to accept and use it; therefore, their engagement in the feedback process is critical. As this article highlights numerous barriers that prevent students from actively engaging with feedback, the responsibility-sharing culture discussed above assumes that medical students and clinical educators have equal and distinct roles to play in overcoming these barriers, with an inherent degree of interdependence in which neither educators nor students can accomplish their roles without the other doing the same. Given the necessity of a cultural shift towards responsibility-sharing in the context of feedback in medical education, a recommendation for further research would be to establish tutors' and students' views about the factors they believe influence responsibility-sharing within a multicultural, diverse setting.
Declaration. This study is a reflection on RMA's PhD degree at the Nelson R Mandela School of Medicine, University of KwaZulu-Natal.
Acknowledgements. None.
Author contributions. Sole author.
Funding. This research was funded by the University Capacity Development Programme. The funding body was not involved in the study design, data collection, analysis, interpretation or writing of this manuscript. The views expressed in this report are those of the author and do not necessarily reflect those of the University Capacity Development Programme.
Conflicts of interest. None.
References
1. Weinstein D. Feedback in clinical education: Untying the Gordian Knot. Acad Med 2015;90(5):559-561. https://doi.org/10.1097/ACM.0000000000000559
2. Nicol D. From monologue to dialogue: Improving written feedback processes in mass higher education. Assess Eval High Educ 2010;35:501-517. https://doi.org/10.1080/02602931003786559
3. Abraham RM. Reflection on improving feedback skills and a framework for moving towards feed forward. Bangladesh J Med Sci 2022;21(1):206-212. https://doi.org/10.3329/bjms.v21i1.56351
4. Carless D. Excellence in University Assessment: Learning from Award-winning Practice. Abingdon, UK: Routledge, 2015.
5. Carless D. Differing perceptions in the feedback process. Stud High Educ 2006;31:219-233. https://doi.org/10.1080/03075070600572132
6. Winstone NE, Nash RA, Parker M, Rowntree J. Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educ Psychol 2017;52(1):17-37. https://doi.org/10.1080/00461520.2016.1207538
7. Harrison CJ, Könings KD, Schuwirth L, Wass V, van der Vleuten C. Barriers to the uptake and use of feedback in the context of summative assessment. Adv Health Sci Educ Theory Pract 2015;20(1):229-245. https://doi.org/10.1007/s10459-014-9524-6
8. Price M, Handley K, Millar J. Feedback: Focusing attention on engagement. Stud High Educ 2011;36(8):879-896. https://doi.org/10.1080/03075079.2010.483513
9. Handley K, Price M, Millar J. Beyond 'doing time': Investigating the concept of student engagement with feedback. Oxford Rev Educ 2011;37(4):543-560. https://doi.org/10.1080/03054985.2011.604951
10. Scott SV. Practising what we preach: Towards a student-centred definition of feedback. Teach High Educ 2014;19(1):49-57. https://doi.org/10.1080/13562517.2013.827639
11. Robinson S, Pope D, Holyoak L. Can we meet their expectations? Experiences and perceptions of feedback in first year undergraduate students. Assess Eval High Educ 2013;38(3):260-272. https://doi.org/10.1080/02602938.2011.629291
12. Orsmond P, Merry S, Reiling K. Biology students' utilisation of tutors' formative feedback: A qualitative interview study. Assess Eval High Educ 2005;30(4):369-386. https://doi.org/10.1080/02602930500099177
13. Shuell TJ. Cognitive conceptions of learning. Rev Educ Res 1986;56(4):411-436. https://doi.org/10.3102/00346543056004411
14. Deeley S, Bovill C. Staff-student partnership in assessment: Enhancing assessment literacy through democratic practices. Assess Eval High Educ 2017;42(3):625-644. https://doi.org/10.1080/02602938.2015.1126551
15. Bates J, Konkin J, Suddards C, Dobson S, Pratt D. Student perceptions of assessment and feedback in longitudinal integrated clerkships. Med Educ 2013;47(4):362-374. https://doi.org/10.1111/medu.12087
16. Malterud K. Systematic text condensation: A strategy for qualitative analysis. Scand J Public Health 2012;40(8):795-805. https://doi.org/10.1177/1403494812465030
17. Patton M. Qualitative Research and Evaluation Methods: Integrating Theory and Practice. 3rd ed. Newbury Park, CA: Sage, 2002.
18. Stone D, Heen S. Thanks for the Feedback: The Science and Art of Receiving Feedback Well. London: Penguin, 2014.
19. Hattie J, Timperley H. The power of feedback. Rev Educ Res 2007;77(1):81-112. https://doi.org/10.3102/003465430298487
20. Hounsell D. Towards more sustainable feedback to students. In: Boud D, Falchikov N. Rethinking Assessment in Higher Education. Learning for the Longer Term. London: Routledge, 2007:101-113.
21. Peterson C, Maier S, Seligman M. Learned Helplessness: A Theory for the Age of Personal Control. New York: Oxford University Press, 1993.
22. O'Donovan B, Rust C, Price M. A scholarly approach to solving the feedback dilemma in practice. Assess Eval High Educ 2016;41(6):938-949. https://doi.org/10.1080/02602938.2015.1052774
23. McGrath CH, Guerin B, Harte E, Frearson M, Manville C. Learning Gain in Higher Education. Santa Monica, CA: RAND Corp., 2015.
Correspondence:
R M Abraham
abrahamr@ukzn.ac.za
Accepted 5 October 2022