African Evaluation Journal
On-line version ISSN 2306-5133
Print version ISSN 2310-4988
AEJ vol.12 n.1 Cape Town 2024
http://dx.doi.org/10.4102/aej.v12i1.690
ORIGINAL RESEARCH
Making conventional data collection more child-friendly: Questionnaires with young students
Andrea Mari I, II, III
I Voluntary Service Overseas (VSO), Hove, United Kingdom
II Institute of Development Studies (IDS), University of Sussex, Hove, United Kingdom
III Minority Rights Group International (MRG), London, United Kingdom
ABSTRACT
BACKGROUND: Despite widespread recognition among authors and international agencies that children have a significant, essential and rightful entitlement to voice their thoughts directly, research with young students is undertaken less often than one might expect, particularly in the context of evaluating teachers' performance. Interestingly, when researchers and evaluators have engaged young students in data collection, they have overlooked questionnaires in favour of more qualitative and participatory data collection tools or closed-question surveys.
OBJECTIVES: To fill this gap, this article makes a case for wider use of mixed quantitative and qualitative questionnaires with young students as a reliable tool for monitoring teachers' performance more systematically.
METHOD: In particular, the article illustrates how to design and administer questionnaires to primary school students using a framework developed from four main sources: Gendall's re-evaluation of Labaw's theory of questionnaire design, the question-answer process, Piaget's theory of cognitive development stages, and lessons learnt from a questionnaire designed and administered by the author among Tanzanian primary school students.
RESULTS: This approach not only ensures that students respond thoughtfully and reach consensus through debating, but also provides deeper insights into the specific cognitive and emotional criteria valued by students.
CONCLUSION: The article shows that the employment of questionnaires with young students is likely to yield valid and reliable data when three conditions are met: (1) questions are tailored to the respondents' cognitive skills and cultural background; (2) questions cover content that is meaningful to the respondents; and (3) questionnaires are administered in settings in which respondents can freely interact with each other.
CONTRIBUTION: By establishing that the validity and reliability of data from questionnaires with young students hinge on considerations of cognitive skills, cultural background, meaningful content, and interactive administration, this article sets a foundation for enhancing the effectiveness of teacher evaluation methods in educational settings.
Keywords: questionnaires with children; question-answer process; questionnaire design; cross-cultural questionnaires; questionnaire administration modes; student survey; children's voice.
Introduction
While there is still uncertainty about using questionnaires with young children, the author contends that questionnaires which are tailored to respondents' specific skills, cultural background and life experience, and are administered in settings where the respondents can freely interact under supervision, can indeed be accurate data collection tools (Bell 2007; Borgers, De Leeuw & Hox 2000; Johnston 2008; Scott 1997). Consequently, this article aims to provide practical guidance for non-governmental organisation (NGO) practitioners on the design and administration of these questionnaires.
The background section explores the evolution of research involving children, tracing the changes from the Geneva Declaration on the Rights of the Child to the abandonment of the quarantine approach, which ultimately recognised children as individual participants and consumers in the adult world. This paradigm shift, rooted in emerging medical evidence, drove social research towards direct data collection from children. The debate on the most effective collection methods persists, with a considerable focus on participatory and qualitative approaches, alongside investigations into quantitative 'scoring' questionnaires. Notably, there is a research gap pertaining to the effectiveness of mixed quantitative and qualitative questionnaires with young children (aged 6 years to 11 years) - a gap that this article addresses.
To this end, the article presents the theoretical framework employed to build the questionnaire illustrated in the case study. This framework is rooted in the belief that respondents' skills and cultural background should inform questionnaire design, with an emphasis on evaluators' initial understanding of respondents. It comprises five stages or tasks: understanding children's skills and background, formulating objectives, designing questions, developing an administration strategy, and piloting and post-testing the questionnaire.
The case study details the methodology and structure of the questionnaire administered by the author in Tanzania in 2017. The questionnaire aimed to gather information from primary students regarding the effectiveness of new teaching methods introduced during the training programme. The article presents and discusses the four questions included in the questionnaire within the context of the framework's theoretical underpinnings. Additionally, it illustrates the hybrid strategy employed to distribute the questionnaire.
Finally, the article presents the findings from the questionnaire with the goal of evaluating the quality of data collected and the tool itself. Reflections on the two charts displaying response results are used to assess the quality of Question 1. Meanwhile, the assessment of the remaining three questions relies on classroom observations and reflection, guided by relevant theories.
Background information
It has been nearly a century since the League of Nations adopted the Geneva Declaration on the Rights of the Child, formally acknowledging children's rights. However, it was not until 1989 that the UN General Assembly adopted the Convention on the Rights of the Child, a pivotal document that recognised children as social, economic, political, civil and cultural actors (UNICEF undated), with voices that must be heard (Scott 1997). Article 12 of the Convention, for instance, established every child's right to express their opinions on all matters affecting them and to have their views given serious consideration.
During this same period, primarily influenced by consumerism, children began to be recognised in the area of business and marketing as 'customers' with the ability not only to influence parents but also equipped with their own purchasing power and decision-making capacity (Scott 1997). As the so-called quarantine approach, which regarded children as isolated from the adult world, came to an end, at least in the Global North, this newfound awareness also permeated social research, which began to acknowledge children as legitimate respondents (Scott 1997). In reality, research on children was already underway (Tarsilla 2022), but the imperative now lay in collecting information directly from the children themselves.
Discoveries in modern psychology and emerging medical evidence also helped shift social research towards direct data collection from children. Assumptions about children's intellectual limitations and susceptibility to suggestion were debunked, while the belief that children's opinions were not necessarily malleable became more widely accepted (Scott 1997). Furthermore, the practice of proxy reporting (where adults report on children, referred to as 'evaluation about children' by Tarsilla 2022) was no longer seen as fully effective, at least not without hearing directly from the children (Bell 2007; Borgers et al. 2000; Scott 1997).
Despite widespread recognition among authors and international agencies that children have a significant, essential and rightful entitlement to voice their thoughts directly, research with young students is not as commonplace as one might expect. According to Scott (1997), survey research often neglects children. This trend is also noticeable in education, especially in the evaluation of teachers' performance, as limited research (with some exceptions, e.g. Aleamoni 1981, 1987, 1999) has been devoted to developing tools for collecting data from young students about their teachers (Peterson, Wahlquist & Boine 2000).
Several factors contribute to this situation, including the 'inertia of practice', which leads to the habitual inclusion of only adult respondents in most studies, even when the subject matter requires information from children. Additionally, the belief that adults possess greater knowledge, experience and authority than children remains prevalent among practitioners (Backett & Alexander 1991). Practical considerations, such as the perceived challenges of conducting surveys with children due to cost and ethical concerns, also play a role.
Even when research is conducted with children, there is a tendency to favour tools other than questionnaires. For example, Bell (2007) suggested the use of the diary approach with younger students. Borgers et al. (2000) mentioned alternative methods such as observation techniques, interviewing parents, clinical interviews (where questions are adapted throughout the interviewing process), qualitative interviews involving 'playing' tasks and small focus groups ('round circle'). Projective techniques such as drawing are also recommended, although it is emphasised that artwork should supplement rather than replace verbal communication (Scott 1997). Photovoice is another very popular data collection tool with children (Wang & Burris 1997), which involves young participants selecting photographs and exploring the reasons, emotions and experiences that guided their choices (Abma et al. 2022). The River of Life exercise (Musson 2004) has also gained popularity among children, allowing them to visually represent their experiences, akin to an emotional journey, using the metaphor of a river, while providing explanations.
These tools are highly effective with children as they encourage playfulness, visual expression and creativity, and help overcome challenges like reading comprehension. However, they also have limitations. For example, they may not be suitable for use with large populations due to their time-consuming nature and costs. Moreover, they primarily aim to collect qualitative data, which may not align with decision-makers who rely predominantly on statistical analysis and quantification to inform their decisions.
In contrast, questionnaires can help overcome many of these limitations. They can efficiently collect data from large samples at a lower cost, and their quantitative data lend themselves to analysis using relevant software, reducing analysis time. While it is important to acknowledge the limitations of questionnaires for monitoring teachers' performance (e.g. children's difficulties with Likert scales; Mellor & Moore 2014), there are compelling arguments in favour of quantitative 'rating' surveys with primary students, as they have demonstrated their ability to produce valid and reliable data regarding teachers' behaviour (Kyriakides 2005).
Theoretical framework of the questionnaire
The purpose of this framework is to establish an empirical model, rooted in academic research on questionnaires, which can be utilised by practitioners to develop and distribute questionnaires to primary school students (6 years to 11 years). The framework, used for the questionnaire discussed in the case study, comprises five stages (or tasks):
1. understanding the children respondents
2. formulating the questionnaire's objectives (not covered in this article)
3. developing the questions
4. designing the administration strategy
5. piloting and post-testing the questionnaire.
The first three stages draw inspiration from Gendall's re-evaluation (1998) of Labaw's theory of questionnaire design (1980). Stages one and three incorporate insights from other theories, such as the question-answer process, Piaget's cognitive development stages (1929) and Triandis' analysis of subjective culture (1972), to guide the creation of questions tailored specifically to primary school students from diverse cultural backgrounds. Bowling's reflections on questionnaire administration modes (2005) inform stage four, while stage five benefits from Bell's analysis of cognitive interview techniques (2007). Stage two is omitted here because no specific theory guided the development of the questionnaire objectives in the case study (they were derived from the project's logframe); still, Labaw (1980) has outlined how to determine objectives independently.
Understanding the children respondents
In line with Gendall (1998), the foundational principle of questionnaire design is that respondents should define the scope of questioning, including the types of questions, language and concepts used, and the administration method. Consequently, the questionnaire design process should commence by gaining an understanding of the respondents, exploring their knowledge, abilities and cultural backgrounds.
Knowledge and skills
The process of answering a question encompasses four main stages: question interpretation, memory retrieval, judgement formation, and response editing. Each stage demands specific knowledge and skills for accurate execution. Successful completion of these stages is called the 'optimising strategy' (Bell 2007; Johnston 2008; eds. Schwarz & Sudman 1996; Tourangeau 1984).
For instance, proper question interpretation requires a grasp of vocabulary, grammar and the ability to comprehend the discussed concepts. Memory retrieval relies on the respondent's capacity to recall previously learned information (verbal memory). After retrieval, the respondent must select, prioritise and synthesise information and decide how detailed their answer should be - a process that involves judgement, such as choosing between contrasting information. Finally, before responding, the respondent evaluates and potentially edits the response, considering concerns such as social desirability, self-presentation or peer pressure (Johnston 2008).
Failure at any of these stages leads to a 'satisficing strategy', resulting in invalid answers. Satisficing responses may include shortcuts, response sets (e.g. selecting all the first options) and yeah-saying answers (e.g. agreeing with a statement irrespective of its content) (Borgers et al. 2000; Johnston 2008). Reasons for stage failures include respondents' lack of knowledge, skills or willingness to exert cognitive effort (termed 'cognitive miser' by Bell 2007), alongside cultural barriers, poorly designed questions, excessive complexity or ambiguity, inappropriate environments and other factors.
Piaget's theory of cognitive development stages, outlining the skills and understanding levels of different age groups, can guide the decision on when to employ questionnaires. According to this theory, questionnaires should not be used with children under 6 years due to limited language skills (Borgers et al. 2000). For instance, children under 6 years may struggle to distinguish literal meaning from implied meaning, as they cannot go beyond the literal interpretation of words (Scott 1997). However, questionnaires can be adapted for use with children aged 6 years and older, provided the tool is designed to align with their abilities (Bell 2007). From age 11 years, questionnaires require less adaptation, while children aged 16 years and above can typically respond to adult questionnaires (Borgers et al. 2000; Scott 1997). A more detailed illustration of the knowledge and skills possessed by children aged 6 years to 11 years is provided in the sections 'Question design' and 'Question content'.
Cultural background
Understanding respondents becomes especially critical in cross-cultural assessments because the same concept can be interpreted differently based on cultural backgrounds (Johnson et al. 1997). Surprisingly, despite extensive discussions on the challenges of cross-cultural knowledge transfer (e.g. ed. Steiner-Khamsi 2004), survey research often overlooks the role of culture and assumes the universality of meanings across cultures (Johnson et al. 1997).
This assumption was challenged by Triandis (1972), who differentiated between etic (universally understood) and emic (culture-specific) phenomena, with examples of the latter including concepts like pain and stress (Johnson et al. 1997). When emic constructs are treated as if they were etic, it leads to category fallacy, resulting in misunderstandings. The etic and emic dynamic influences all stages of the question-answer process. The meanings assigned by respondents to words and concepts are deeply rooted in culture, as is the process of retrieving information, which is tied to specific events, locations or individuals (episodic memory) (Tulving 1983). This also applies to selecting and prioritising information during the judgemental stage, particularly when choices involve contrasting data. Lastly, response editing is the most culturally influenced stage, shaped by factors such as social norms. For instance, in some cultures, certain topics may be considered off-limits for women to discuss, leading to significant response editing.
Developing the questions
Once the survey objectives are established, the next step is to formulate the questions. Among the numerous available methods, Johnston's (2008) approach of selecting a large pool of questions from validated questionnaires and evaluation manuals (e.g. Stuart, Croft & Akyeampong 2009) can help align evaluators' priorities with respondent characteristics and interests. However, the selection of questions should be guided by the considerations discussed earlier and in the following sections.
Question design
Most literature recommends using clear, unambiguous and straightforward words in questionnaires, regardless of the age of the respondents (Bell 2007; Belson 1981; Benson & Hocevar 1985). Words to avoid include unfamiliar or challenging terms, excessive information-carrying words in a single question, homophones (words that sound like other words), broad concepts (e.g. children, the government), complex terminology and vague quantifiers. Accurate question design and wording are even more crucial when dealing with young respondents (e.g. Holaday & Turner-Henson 1989). According to Piaget, children aged 6 years to 8 years possess limited language comprehension and verbal memory, both necessary for understanding questions and recalling information (Borgers et al. 2000). They also tend to interpret questions very literally and may not look beyond the explicit wording. For example, when asked if they have been on a school trip, they may respond negatively if it was a class trip (Borgers et al. 2000). Hence, it is advisable for evaluators to familiarise themselves with the words and terms children use and incorporate them into the questions.
Children aged 9 years to 11 years have more developed language and reading skills. They can distinguish between different perspectives, categorise items, comprehend temporal relations, employ logical thinking and engage in deductive reasoning (Scott 1997). Nevertheless, they may still face challenges with question wording, particularly regarding negations and negatively phrased items (Marsh 1986). For instance, questions like 'Do you find it difficult to finish your homework?' may be hard for them to understand, and they may struggle to formulate responses like 'Yes, I find it difficult' (Bell 2007). In both age groups, it is essential to avoid ambiguous, complex formulations, hypothetical statements and questions that are double-barrelled, leading or loaded.
Question content
Question content should directly relate to the children's experiences or knowledge to prevent them from resorting to 'satisficing strategy' (Scott 1997). The relevance of content should be assessed before administering the questionnaire, as it can be challenging to discern it from the responses. Young children often respond to adult questioning, whether they know the answer or not, and may provide answers they believe are 'correct' rather than their genuine beliefs (referred to as 'suggestibility' by Borgers et al. 2000; Scott 1997).
As it can be challenging for children to think retrospectively without clear parameters, content should also be presented within clear timeframes, either in the present or in a defined recent past (Bell 2007). In this sense, the question 'how many times have you watched TV in the past 7 days?' may be clearer to a child than 'how often do you watch TV?' (Bell 2007).
Visual stimuli
The use of visual stimuli in questionnaires for young children is a subject of ongoing debate. While some consider pairing images with text in questionnaires confusing, others believe that visual elements can facilitate responses in young children (Reynolds & Johnson 2011). While much of the literature focuses on picture-based Likert scale questionnaires where images representing emotional states are used as response options, images can also be used to illustrate or clarify questions, as demonstrated in the case study. Research suggests that adding images alongside questions can make the content more concrete than verbal representation alone, aiding language challenges and enhancing attention span (Scott 1997).
However, care must be taken when selecting images, as they can impact how respondents interpret questions. Images should be gender and ethnically neutral to ensure unbiased support when interpreting the questions. Additionally, it should be noted that interpretations of images, including emotional expressions, can vary across cultures. Therefore, images should be straightforward, simple and unequivocal (Reynolds & Johnson 2011), such as stick figures and silhouette-style drawings.
Designing the administration strategy
After developing the questions, the next stage is to identify and design the questionnaire administration strategy. The choice of strategy is crucial, as it can influence response quality (Bowling 2005; De Leeuw & van der Zouwen 1988). Bowling (2005) discusses various modes of questionnaire administration, explaining how each mode can affect response quality. A combination of face-to-face interviews and self-administered questionnaires was used to deliver the case questionnaire.
Cognitive burden
The steps of the question-answer process (see section 'Knowledge and skills') can place a significant cognitive burden on respondents, particularly in self-administered questionnaires, which also demand literacy skills. The presence of an interviewer in face-to-face mode can alleviate this burden by providing support for understanding questions, including offering reminders, suggesting synonyms, simplifying grammar structures, clarifying ambiguous questions and aiding in recalling events.
Item response rate
Low item response rates (the proportion of questions that receive responses) can negatively impact data quality, affecting the precision of population estimates, introducing study bias and diminishing the generalisability of survey results (Bowling 2005). Low response rates may stem from respondents' unwillingness or lack of motivation to participate, communication barriers and the absence of stimuli to prompt responses in self-administered questionnaires.
Socially desirable responses
Children aged 6 years and above are prone to socially desirable responses, answering questions according to socially acceptable norms rather than actual situations (Borgers et al. 2000). Interviews, which involve social interaction, are more likely than self-administered questionnaires to elicit socially desirable responses. Overcoming this issue in interviews can involve indirect questioning, anticipating questions prone to socially desirable answers, cross-referencing responses with known facts and interviewer training to minimise bias (Bowling 2005).
Post-testing the questionnaire
Before administering the questionnaire, it should be piloted with a small group of students and subsequently revised through post-testing. However, Bell (2007) cautions against relying solely on responses to assess questionnaire effectiveness, as question deficiencies may not be apparent in respondents' answers. Therefore, post-testing should incorporate cognitive interview techniques like the 'think aloud' method. This technique requires respondents to verbally explain the question's meaning before answering it, which is effective with young children, as they often vocalise their thoughts during tasks. Questions to pose to respondents during post-testing include asking for their interpretation of the question, identifying unclear words, probing information retrieval processes and understanding how they reached their conclusions (Bell 2007). Additional considerations during post-testing should include the time taken by respondents to complete the questionnaire and any instances of repeated or explained questions and their reasons (Boynton 2004).
Case study
The case study involves a questionnaire (see Appendix 1) used to evaluate a teacher training project implemented in Tanzania by Voluntary Service Overseas (VSO). The project, which aimed to enhance the pass rates of primary students in rural areas, involved training teachers in participatory teaching methods (PTMs). For the 2017 mid-term evaluation of the project, a combination of tools was utilised: a child-friendly questionnaire, adult world café workshops and classroom observations.
The questionnaire aimed to gauge whether students learned from the PTMs delivered by their teachers. It was administered to 286 students (including 152 girls) selected randomly from five schools (randomly chosen from a pool of 24 target schools). This sample was reasonably representative of the total population of 14 500 students and produced results with a margin of error of 5% and a confidence level of 90%. Respondents were organised into 64 groups, each consisting of 4-5 students, with an effort to maintain gender and grade level balance. Each group received one questionnaire and was required to provide a single response to each question. The questionnaire strategy was approved by VSO. Consent to administer the questionnaire was obtained from the head teachers, school committees and parents. To ensure anonymity, students' names were not collected. Importantly, no adults (except the two evaluators and the translator) had access to the filled questionnaires.
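As a quick arithmetic check on the sampling figures above, the standard margin-of-error formula for a proportion, with a finite population correction, reproduces the reported values. The sketch below is a minimal illustration rather than part of the original study; it assumes the conventional conservative proportion p = 0.5 and a z-score of 1.645 for 90% confidence.

```python
import math

def margin_of_error(n: int, N: int, z: float = 1.645, p: float = 0.5) -> float:
    """Margin of error for a sample proportion, with finite population correction.

    n: sample size; N: population size; z: z-score (1.645 ~ 90% confidence);
    p: assumed proportion (0.5 gives the most conservative estimate).
    """
    standard_error = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * standard_error * fpc

# Case study figures: 286 respondents sampled from 14 500 students.
print(f"{margin_of_error(286, 14_500):.3f}")  # -> 0.048, i.e. ~5% at 90% confidence
```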
The questionnaire, translated into Swahili by a Tanzanian project team member, consisted of four questions focused on whether students learned from PTMs and wallcharts, as well as their attitudes and feelings towards teachers and the school. These questions were developed collaboratively by the two evaluators who also trained the teachers, along with input from some of the schoolteachers.
Question 1 required students to evaluate nine classroom activities (or methods) for learning effectiveness. Each activity was described by a brief text and an illustrative drawing. Seven methods were participatory: singing, large (5/6 students) and small (2/3 students) work groups, drama, manipulation of objects (or realia), games, and reading books in class. Two were non-participatory: chalk and talk and students working alone. Students were asked to indicate whether each method was helpful (with a 1/0 response option) and explain their choice in an open-ended format. Simple stick figures and black silhouette figures were used to minimise gender, ethnic and cultural biases.
Question 2 assessed the impact of wallcharts on students' learning. Students were asked to name their favourite wallcharts and explain why. A supporting drawing accompanied this question. Question 3 explored students' perceptions of their teachers' quality, utilising a 1/0 response format and an open-ended question, also complemented by a drawing. Finally, Question 4, an open-ended query, aimed to gain insights into what made students happy at school, encompassing their feelings towards environment, classmates and the learning experience.
The 'hybrid' questionnaire administration strategy combined self-administered questionnaires completed in groups with adult facilitators overseeing the workshops. Each group of students received a single questionnaire. The groups were supported by one local facilitator who provided assistance throughout the workshops. These facilitators established rapport with the students through prior visits to the schools. During the workshops, they discussed equal participation and open discussion, assured confidentiality by having students respond as a group and provided ongoing assistance. Importantly, teachers and other school staff were not permitted on the school premises during the workshops.
Ethical considerations
'I would like to confirm that the questionnaire was used in 2017 to evaluate a project implemented by Voluntary Service Overseas (VSO) Tanzania […], aimed to improve students' learning in primary schools in […] Tanzania. The questionnaire was designed and administered by 2 VSO staff in compliance with VSO's Safeguarding and Child Protection Policy. Informed consent was obtained from the Head Teachers of the participating schools. School Committees and parents had given prior consent to all training-related activities as long as they were approved by the Head Teachers. The students' sample was selected randomly by the evaluators using the class registers. Students' names were not collected to ensure anonymity. Also, students' participation was voluntary. The workshops were delivered following the 'two-adult' rule, with one evaluator and one local facilitator chosen by the evaluator attending all workshops and always supervising the students. The students were grouped on the day of the workshops. The answers given by each group were recorded by the students on response papers, without writing down their names. Head Teachers, teachers, and parents had no access to the response papers, nor did they attend the workshops.' VSO-Tanzania.
Results
The study yielded several noteworthy findings, which were presented in two formats. The percentages in Figure 1 show the proportion of groups of respondents who deemed each method helpful or not helpful for learning (Question 1, closed questions). The reasons listed in Table 1 are the benefits and disadvantages identified by the respondents in relation to the 'Singing' activity, arranged by the author under three categories (cognitive, emotional or others) (Question 1, open-ended questions).
Figure 1 illustrates the percentage of student groups that found various teaching methods helpful or not helpful for learning. Notably, the 100% response rate demonstrates that all groups engaged with Question 1. The responses also indicate that teachers were indeed implementing PTMs in the classrooms, although the frequency of usage remains unknown.
The students responded positively to all seven PTMs, with a median of 92% of student groups finding them helpful for learning. Large work groups and games received unanimous endorsement (100%), closely followed by reading in class and realia/objects. In contrast, working alone was deemed unhelpful by 94% of the groups. Surprisingly, the non-PTM chalk and talk was unanimously endorsed by the students, despite being discouraged during training. This suggests that the students provided candid responses without seeking approval from adults, highlighting the authenticity of their feedback.
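The summary statistics reported above (the share of groups endorsing each method, and the median endorsement across methods) amount to straightforward tallies of the 1/0 answers recorded by the groups. Below is a minimal sketch of that tally, using invented group responses rather than the study's actual 64 questionnaires.

```python
from statistics import median

# Invented response sheets: one dict per group, mapping each classroom
# method to a 1 ('helpful') or 0 ('not helpful') answer. The real dataset
# covered 64 groups and nine methods.
group_responses = [
    {"singing": 1, "games": 1, "chalk_and_talk": 1, "work_alone": 0},
    {"singing": 1, "games": 1, "chalk_and_talk": 1, "work_alone": 0},
    {"singing": 0, "games": 1, "chalk_and_talk": 1, "work_alone": 1},
]

methods = group_responses[0].keys()
percent_helpful = {
    m: 100 * sum(g[m] for g in group_responses) / len(group_responses)
    for m in methods
}
print(percent_helpful)                   # share of groups endorsing each method
print(median(percent_helpful.values()))  # median endorsement across methods
```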
To complement the quantitative findings from Question 1, the qualitative responses provided by students were analysed and categorised into two themes: benefits and disadvantages of teaching methods. These responses were further disaggregated into three categories - cognitive, emotional and others - to streamline data processing and reporting. Table 1 presents a sample of the most common open responses regarding the singing method, along with the number of groups that highlighted either benefits or disadvantages in their responses.
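For larger datasets, a keyword-based first pass could pre-sort open answers into the three categories used in Table 1. The sketch below is purely illustrative: the keyword lists and sample answers are invented, and in the case study the coding was done manually by the author.

```python
# Invented keyword lists for a first-pass sort of open-ended answers into
# the three categories of Table 1; borderline answers would still need
# manual review, as in the case study.
CATEGORY_KEYWORDS = {
    "cognitive": ["remember", "learn", "understand", "concentrate"],
    "emotional": ["happy", "fun", "friend", "affection", "bond"],
}

def categorise(answer: str) -> str:
    text = answer.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "others"

for answer in [
    "Singing helps us remember the lesson",
    "We show affection to each other when we sing",
    "The room is too noisy",
]:
    print(categorise(answer), "-", answer)
```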
This approach of combining closed- and open-ended questions not only ensured that groups responded thoughtfully to the closed questions and reached consensus through debating the pros and cons of each teaching method, but also provided deeper insights into the specific cognitive and emotional criteria valued by students. For example, some groups wrote that singing helped them develop bonds with classmates and show affection to each other. Others said that reading books in class did not help learning because the premises were too noisy. The insightful content of responses helped confirm that the question was comprehended and processed correctly.
Findings from the remaining three questions were more difficult to draw, though these questions provided valuable lessons on question development. For example, while the inclusion of visual aids was generally effective, in Question 2 it did not yield the intended results and, in fact, confused respondents. This question aimed to ask which wallcharts students preferred to see in their classrooms and why. Unfortunately, owing to the inclusion of an inappropriate image featuring a couple labelled 'mum and dad', many respondents mistook educational wallcharts for family pictures. This resulted in a substantial reduction in the sample size, invalidating a significant portion of the Question 2 findings.
Question 3 was unintentionally formulated as a leading question, as it inherently carried a positive bias. The inclusion of the word 'good' in 'Do you think your teachers are good?' led students to provide exclusively positive responses, thus compromising data validity. Similarly, Question 4, aimed at understanding students' feelings about the school environment ('What makes you happy in school?'), did not yield as many detailed responses as expected. Possible reasons include the question's generality and lack of a defined timeframe. A more specific timeframe such as 'What made you happy in school this month?' might have elicited more meaningful responses.
The 'hybrid' administration method, which combined self-administered questionnaires, grouped respondents and group responses, and adult facilitation, effectively reduced bias. The effectiveness of this strategy was assessed against Bowling's categories of potential biases. The approach alleviated the cognitive burden associated with questionnaires by utilising local facilitators who provided ongoing support with language, concepts, grammar structures and more. Additionally, it achieved a 100% response rate. Feedback from facilitators indicated that the success of this approach was partly attributable to presenting the questionnaire as a group experience rather than a mere task. Students perceived it as a fun activity that allowed them to socialise and connect beyond the school routine. They appreciated opportunities for group discussion and felt that any response they provided would be considered correct. The arrangement of desks into larger tables promoted debate, curiosity and information exchange, fostering shared commitment and mutual support within and among groups, which in turn contributed to the high response rate.
While grouping students facilitated social interaction, it also introduced the risk of socially desirable responses. To mitigate this risk, a safe and conducive environment was established during the workshops. However, it remains uncertain whether all groups adhered to the process of genuine debate or if other factors such as age, gender or cognitive abilities influenced group responses.
Suggestions from participants in a webinar attended by the author included making the decision-making process fairer and more transparent by requiring groups to vote on shared responses. Another valuable suggestion was to maintain a record of all student responses, even those on which the group did not reach a consensus, to ensure that all voices were heard.
Conclusion
While research involving children covers a wide range of social and economic topics, there has been limited focus on developing questionnaires designed to gather data from young students aged 6 years to 11 years regarding their teachers' performance. Even when young students are included in research, there is often a preference for participatory and qualitative methods over questionnaires or quantitative surveys with closed-ended questions.
This article demonstrated that questionnaires which combine open-ended and closed-ended questions can effectively collect high-quality information from young students about their teachers' performance when three key conditions are met.
The first condition is that the questions must be tailored to the cognitive skills and cultural background of the respondents. Consequently, evaluators need to acquire an understanding of the students' existing abilities and characteristics, including what they are capable of answering, how they communicate and their areas of interest. Post-testing the questionnaire with a sample of respondents using the 'think aloud' interview technique can help evaluators determine whether the requirements for accessible questions have been met.
The second condition is that the content must relate to students' daily life experience, which makes them more likely to engage with the questions. This was evident in the case study, which covered activities and feelings that students experience in their school routine.
Lastly, transforming the administration of the questionnaire into a social experience, where respondents have the freedom to interact under adult supervision, can alleviate the cognitive burden on students and contribute to higher response rates. However, the risk of social desirability bias remains a concern. This bias can be mitigated by maintaining a record of all student responses within each group, including those for which the group did not reach a consensus.
Acknowledgements
Competing interests
The author has declared that no competing interest exists.
Author's contributions
A.M. is the sole author of this research article.
Funding information
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Data availability
Data sharing is not applicable to this article as no new data were created or analysed in this study.
Disclaimer
The views and opinions expressed in this article are those of the author and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency, or the publisher. The author is responsible for this article's results, findings and content.
References
Abma, T., Breed, M., Lips, S. & Schrijver, J., 2022, 'Whose voice is it really? Ethics of photovoice with children in health promotion', International Journal of Qualitative Methods 21(1), 16094069211072419. https://doi.org/10.1177/16094069211072419
Aleamoni, L.M., 1981, 'Student ratings of instruction', in J. Millman (ed.), Handbook of teacher evaluation, pp. 110-145, Sage Publications, Beverly Hills, CA.
Aleamoni, L.M., 1987, 'Typical faculty concerns about student evaluation of teaching', New Directions for Teaching and Learning 1987(31), 25-31. https://doi.org/10.1002/tl.37219873105
Aleamoni, L.M., 1999, 'Student rating myths versus research facts', Journal of Personnel Evaluation in Education 13(2), 153-166. https://doi.org/10.1023/A:1008168421283
Backett, K. & Alexander, H., 1991, 'Talking to young children about health: Methods and findings', Health Education Journal 50(1), 34-38. https://doi.org/10.1177/001789699105000110
Bell, A., 2007, 'Designing and testing questionnaires for children', Journal of Research in Nursing 12(5), 461-469. https://doi.org/10.1177/1744987107079616
Belson, W.A., 1981, The design and understanding of survey questions, Gower, Aldershot.
Benson, J. & Hocevar, D., 1985, 'The impact of item phrasing on the validity of attitude scales for elementary school children', Journal of Educational Measurement 22(3), 231-240. https://doi.org/10.1111/j.1745-3984.1985.tb01061.x
Borgers, N., De Leeuw, E. & Hox, J., 2000, 'Children as respondents in survey research: Cognitive development and response quality', Bulletin de Méthodologie Sociologique 66(1), 60-75. https://doi.org/10.1177/075910630006600106
Bowling, A., 2005, 'Mode of questionnaire administration can have serious effects on data quality', Journal of Public Health 27(3), 281-291. https://doi.org/10.1093/pubmed/fdi031
Boynton, P.M., 2004, 'Administering, analysing, and reporting your questionnaire', BMJ 328(7452), 1372-1375. https://doi.org/10.1136/bmj.328.7452.1372
De Leeuw, E. & Van Der Zouwen, J., 1988, 'Data quality in telephone and face-to-face surveys: A comparative meta-analysis', in R.M. Groves, P.P. Biemer & L.E. Lyberg (eds.), Telephone survey methodology, pp. 283-299, John Wiley and Sons, New York, NY.
Gendall, P., 1998, 'A framework for questionnaire design: Labaw revisited', Marketing Bulletin 9, 28-39.
Holaday, B. & Turner-Henson, A., 1989, 'Response effects in surveys with school-age children', Nursing Research 38(4), 248-250. https://doi.org/10.1097/00006199-198907000-00019
Johnson, T., O'Rourke, D., Chavez, N., Sudman, S., Warnecke, R., Lacey, L. et al., 1997, 'Social cognition and responses to survey questions among culturally diverse populations', in L. Lyberg, P. Biemer, M. Collins, E. De Leeuw, C. Dippo, N. Schwarz et al. (eds.), Survey measurement and process quality, pp. 87-113, John Wiley & Sons Inc., New York, NY.
Johnston, J., 2008, 'Methods, tools and instruments for use with children', Young Lives Technical Note No. 11, Young Lives, Oxford.
Kyriakides, L., 2005, 'Drawing from teacher effectiveness research and research into teacher interpersonal behaviour to establish a teacher evaluation system: A study on the use of student ratings to evaluate teacher behaviour', The Journal of Classroom Interaction 40(2), 44-66.
Labaw, P.J., 1980, Advanced questionnaire design, Abt Books, Cambridge, MA.
Marsh, H.W., 1986, 'Negative item bias in rating scales for preadolescent children: A cognitive developmental phenomenon', Developmental Psychology 22(1), 37-49. https://doi.org/10.1037/0012-1649.22.1.37
Mellor, D. & Moore, K.A., 2014, 'The use of Likert scales with children', Journal of Pediatric Psychology 39(3), 369-379. https://doi.org/10.1093/jpepsy/jst079
Musson, G., 2004, 'Life histories', in C. Cassell & G. Symon (eds.), Essential guide to qualitative methods in organizational research, pp. 34-45, Sage Publications, London.
Peterson, K.D., Wahlquist, C. & Boine, K., 2000, 'Student surveys for schoolteacher evaluation', Journal of Personnel Evaluation in Education 14(2), 135-153. https://doi.org/10.1023/A:1008102519702
Piaget, J., 1929, Introduction to the child's conception of the world, Harcourt, New York, NY.
Reynolds, L. & Johnson, R., 2011, 'Is a picture worth a thousand words? Creating effective questionnaires with pictures', Practical Assessment, Research, and Evaluation 16, 8.
Schwarz, N. & Sudman, S. (eds.), 1996, Answering questions: Methodology for determining cognitive and communicative processes in survey research, Jossey-Bass/Wiley, San Francisco, CA.
Scott, J., 1997, 'Children as respondents: Methods for improving data quality', in L. Lyberg, P. Biemer, M. Collins, E. De Leeuw, C. Dippo, N. Schwarz et al. (eds.), Survey measurement and process quality, pp. 331-350, John Wiley & Sons, New York, NY.
Steiner-Khamsi, G. (ed.), 2004, The global politics of educational borrowing and lending, Teachers College Press, New York, NY.
Stuart, J., Croft, A. & Akyeampong, K., 2009, Key issues in teacher education, Macmillan, Oxford.
Tarsilla, M., 2022, 'L'engagement des enfants et des jeunes dans les évaluations' [Engaging children and young people in evaluations], in L. Rey, J.S. Quesnel & V. Sauvain (eds.), L'évaluation dans le contexte du développement [Evaluation in development contexts], pp. 311-318, ENAP (National School of Public Administration) University Press, Quebec City.
Tourangeau, R., 1984, 'Cognitive science and survey methods: A cognitive perspective', in T. Jabine, M. Straf, J. Tanur & R. Tourangeau (eds.), Cognitive aspects of survey methodology: Building a bridge between the disciplines, pp. 73-100, National Academy Press, Washington, DC.
Triandis, H.C., 1972, The analysis of subjective culture, Wiley-Interscience, New York, NY.
Tulving, E., 1983, Elements of episodic memory, Oxford University Press, Oxford.
UNICEF, undated, History of child rights, viewed 12 February 2023, from www.unicef.org/child-rights-convention/history-child-rights.
Wang, C. & Burris, M.A., 1997, 'Photovoice: Concept, methodology, and use for participatory needs assessment', Health Education & Behavior 24(3), 369-387. https://doi.org/10.1177/109019819702400309
Correspondence:
Andrea Mari
zelig30@hotmail.com
Received: 02 Mar. 2023
Accepted: 29 Nov. 2023
Published: 28 Feb. 2024
Note: Special Collection: UNICEF Engaging with Children and Young People. The manuscript is a contribution to the themed collection titled 'Engaging with Children and Young People in Evaluation Towards a More Equitable World,' under the expert guidance of guest editors Dr. Michele Tarsilla and Mrs. Dalila Ahamed.