South African Journal of Occupational Therapy
On-line version ISSN 2310-3833
Print version ISSN 0038-2337
S. Afr. j. occup. ther. vol.54 n.2 Pretoria Aug. 2024
http://dx.doi.org/10.17159/2310-3833/2024/vol54no2a2
RESEARCH ARTICLE
Face- and content validity of the University of the Free State In-Hand Manipulation Assessment Instrument (UFS IHMAI) for children in South Africa
Marieta VisserI; Mariette NelII; Nicke Van AswegenI, III, *; Jana BassonI, IV, *; Chante MacdonaldI, V, *
IDepartment of Occupational Therapy, School of Health and Rehabilitation Sciences, Faculty of Health Sciences, University of the Free State, Bloemfontein, South Africa. https://orcid.org/0000-0002-8825-4683
IIDepartment of Biostatistics, School of Biomedical Sciences, Faculty of Health Sciences, University of the Free State, Bloemfontein, South Africa. https://orcid.org/0000-0002-3889-0438
IIIEden Private Practice, George, Western Cape, South Africa. https://orcid.org/0000-0002-9789-755X
IVLanése Knipe Arbeidsterapie Private Practice, Bridge House School, Franschhoek, South Africa. https://orcid.org/0000-0002-9755-7475
VVan Romburgh Occupational Therapy Private Practice, Potchefstroom, South Africa. https://orcid.org/0000-0002-9801-5213
ABSTRACT
INTRODUCTION: No standardised assessment instrument that covers all the components of in-hand manipulation (IHM) with evidence of instrument development and psychometric properties appropriate for South African children is currently available for occupational therapists. The University of the Free State In-Hand Manipulation Assessment Instrument (UFS IHMAI) is in the process of development to gain recognition as a standardised assessment instrument for children in South Africa. This article reports on the first and second stages of the face- and content validation process of the UFS IHMAI.
METHOD: A quantitative descriptive study design with a convenience sampling method was used. Participants provided their expert judgment by completing an EvaSys© online questionnaire.
RESULTS: Fifty-five occupational therapists with experience in paediatric hand function, registered with the HPCSA and OTASA, participated. The participants agreed (above 80% consensus) that the instrument's content is relevant and representative for assessing all components (separately and as a whole) of the IHM construct it was intended to measure. Participants' comments and practical recommendations will form an important knowledge base for the instrument developers to use in the third stage of content validity, namely revision and refinement.
CONCLUSION: The results confirmed the face- and content validity of the UFS IHMAI and supported the further development and psychometric testing of the assessment instrument for children in South Africa.
IMPLICATIONS FOR PRACTICE:
This article builds upon prior studies in which therapists in South Africa have recognised the necessity for an in-hand manipulation (IHM) assessment. It offers a concise overview of instrument development theory and delineates the iterative process employed in developing the first draft of the University of the Free State In-Hand Manipulation Assessment Instrument (UFS IHMAI). Additionally, this article furnishes background details regarding the proposed instrument's content, administration, and scoring guidelines. It also incorporates therapists' perspectives as end-users, providing informed perceptions and consensus on the content validity of the proposed instrument. The suggestions provided by the participating therapists working within the South African context can be employed by the researchers to inform the revision, reconstruction, and refinement of the instrument. Subsequently, the revised version will undergo another round of content validity testing (with experts in instrument development and in the field) and other psychometric testing. As instrument development is an iterative and ongoing refinement process, the implications of this research might only become apparent after further studies.
Keywords: instrument development, psychometric properties, psychometric testing, standardised assessment, health and wellbeing
INTRODUCTION
Evaluation of in-hand manipulation is an important role of the occupational therapist working with children1-4. A broad overview and critical appraisal of published IHM assessment instruments for children, to determine whether they comply with all the requirements of a sound assessment instrument, was reported in a recent scoping review5. The scoping review found that, of the eleven available instruments reported6-16, "none had comprehensively completed the instrument development process to the point of standardisation with evaluated psychometric properties"5:1. Consequently, further refinement of existing instruments or the development of new instruments was recommended.
This need for an IHM assessment instrument for children was also confirmed by a recent South African study17 where occupational therapists' current assessment methods and their preferences for suitable instruments were explored. From this study17, it was evident that paediatric therapists have limited familiarity with published IHM instruments and assess IHM mainly through informal observations. They voiced a critical necessity for a well-developed and scientifically sound IHM instrument17. Two other South African studies described the IHM skills of 353 South African children, determined by means of the In-Hand Manipulation (FSU IHM) Checklist13,18. The data of these studies provide valuable descriptive developmental indicators of IHM for children aged 4 to 7 years. However, this checklist was designed as a data collection instrument without undergoing a comprehensive instrument development process, and its results cannot be regarded as generalisable to the South African population.
These studies justified the development of a new IHM instrument, and a formal process for developing the UFS In-Hand Manipulation Assessment Instrument (UFS IHMAI) for children in South Africa was commenced. Instrument development is a scientific process that involves several systematic steps; there is no simple, predetermined guide to plan, develop and validate an assessment instrument. Developing valid and reliable assessment instruments can be a costly, time-consuming and iterative process19,20. In addition, complex statistical analyses are often required to determine and establish the psychometric properties of an assessment instrument19-22.
When a new instrument is developed, before field testing can begin, an important first step in psychometric testing is to determine if relevant and representative content of the targeted construct has been included in the assessment instrument23,24. This article therefore aims to describe the face- and content validity of the UFS IHMAI for children in South Africa.
LITERATURE REVIEW
The UFS In-Hand Manipulation Assessment instrument
For instrument development, different sequential stages are recommended, but there is no one standard process to follow. "Instrument development is an ongoing process that arguably, has no clear endpoint"25:174, and often it is not a linear process but an iterative process of refinement. During the planning and development of the UFS IHMAI, processes suggested by a variety of authors19,20,22,24-31 were considered for their relevance to instrument development, providing the researchers with multiple perspectives and a comprehensive understanding of the topic. The instrument development process was also guided by the researchers' clinical experience, a review of the literature and a review of existing instruments.
Identifying the need for the instrument
A study that described the current and preferred IHM assessment methods used by occupational therapists in South Africa identified the need for an IHM instrument. This study emphasised therapists' need for a standardised, norm-referenced, contextually relevant IHM instrument for paediatric practices in South Africa17.
Additionally, a recently published scoping review5 on existing IHM instruments, which appraised and compared them, indicated that none of the eleven instruments had comprehensively completed the instrument development process to the point of standardisation with evaluated psychometric properties5. The conclusion and recommendations from this scoping review therefore also justified the development of a new comprehensive instrument for the South African paediatric population.
Identifying the type, purpose, and approach of the instrument
Assessment instruments can be classified in numerous ways, although terminology is used inconsistently across the literature27. The type, purpose and approach of the instrument, as guided by the literature, were used during the development of this IHM instrument. The type of instrument identified as a need by the researchers and therapists in South Africa17 was a standardised, norm-referenced instrument. A norm-referenced standardised assessment refers to an assessment instrument designed to measure an individual's abilities against the norm for their age group, with uniform procedures for administration and scoring. These assessments undergo a development process to ensure data are collected systematically and accurately and have psychometric rigour32,33. "Norm-referenced instruments are used to discriminate between participants, predict the results of some tests, or evaluate change over time"34:3. This norm-referenced instrument is designed to portray differences among children's IHM skills along a continuum of values and indicates, for example, how children of different ages score on IHM skills27,35.
Depending on the purpose of the evaluation, the literature refers to descriptive, predictive and evaluative instruments. This descriptive instrument will use criteria to describe a child's status (IHM skills) at a particular moment in time, and may involve comparing the results of the children with group norms. Predictive instruments classify individuals and are used to predict a specific outcome, while evaluative instruments use criteria/items to measure change in an individual or group over time28,31. All three purposes can be considered when assessing IHM, whether in separate instruments or a single one.
The specific approach to assessing outcomes in an instrument is also an important factor to consider. The naturalistic observation approach "attempts to capture a child's real-life skill performance allowing an objective assessment in common childhood activities"36:117. Two other approaches considered were occupation-based and component-based assessments. The occupation-based assessment permits the therapist to focus the evaluation of children's occupational performance on their meaningful occupations in relevant environments. The component-based assessment allows the therapist to focus on the evaluation of a child's performance components (client factors) to identify possible underlying factors that can potentially cause occupational performance difficulty (also referred to as concept clarifications)37. However, one can also assess the components within occupations. Hence, in developing this instrument, the researchers considered both approaches, but a predominantly component-based assessment with elements of occupation-based activities was compiled12. Although the phrase "let's play" is used in the administration guidelines, the children do not engage in occupation-based play activities per se, as defined within the occupational therapy domain38. Children are clearly instructed on what to do in the assessment, and the activities are not "freely chosen, intrinsically motivated, internally controlled"38:71. Play activities from an occupational therapy perspective are more multidimensional and complex than what this instrument's activities allow for. The UFS IHMAI was therefore developed as a standardised, norm-referenced, descriptive, component-based assessment instrument to evaluate the IHM skills of children in South Africa.
Theoretical foundation and construct identification
The next stage in the development process was to articulate the construct to be measured by the instrument and all fundamental aspects of the construct. For instrument development, the construct refers to the aspect that will be assessed. The developer should identify, define and delineate the relevant construct and subconstructs (domains) to be included in the instrument. A well-defined construct will provide the foundational knowledge and set the boundaries for the subconstructs to be included in the assessment instrument19,22. The literature provided conceptual definitions of the constructs and subconstructs of IHM that served as a conceptual framework. The conceptualisation of the term IHM has developed since 1984 from the foundational work of researchers in the field such as Elliot and Connolly39, Exner13, and Pont, Wallen and Bundy4. The Modified Classification System of In-hand Manipulation was the latest contribution in this particular area4. The establishment of this classification system assisted researchers in determining the construct (and subconstructs) - or, as in this case, the 'domain components' - a priori, as opposed to a posteriori (if none existed)7. The UFS IHMAI is based on the six distinct domain components of this classification: (i) finger-to-palm translation to achieve stabilisation; (ii) palm-to-finger translation; (iii) simple shift; (iv) complex shift; (v) simple rotation; and (vi) complex rotation, to ensure that all in-hand manipulation components are included4.
Item generation
The next stage was to generate appropriate items for each domain component of IHM, which is also called item 'pool generation'15,19,20. The item generation started with a literature study and an appraisal of previously existing IHM assessment instruments5,10,13,15,18,40, based on the researchers' clinical experience and formal expert input from clinicians in South Africa17, to avoid construct irrelevance23. The target population for the assessment instrument was considered by identifying and/or generating contextually relevant items: familiar objects from everyday tasks that are low-cost and readily available for the instrument developers and therapists to replace15. Specific needs from clinicians in South Africa (from a related study) were also taken into account17.
A final combination of 14 items was pooled for the UFS IHMAI (Image 1, adjacent). Each of the six different domains (subconstructs) of IHM consists of two to four different items to avoid construct under-representation23. Items are structured as a short "game", task or activity, including a pegboard game, unscrewing a container lid, a money game, a piggy bank activity, a marble game, a dressing game, a stringing beads activity, a pencil game, fanning cards, nuts and bolts, and a key activity. These test items were constructed to assess IHM, some with and some without stabilisation. Hence, the two major threats to content validity, namely construct irrelevance and construct under-representation, were addressed23.
Apart from the above literature, the selection criteria considered in the development of all the items were similar to the criteria described by Chien et al.12, namely (i) to be representative of common childhood occupations that require IHM; (ii) to present specific difficulty and mastery (age-appropriate) to children aged 3-12 years; (iii) to be easily instructed and observed while placing minimal demands on language, cognition, and perception; and (iv) to have minimal gender or cultural bias towards children when performing the tasks12.
The administration and scoring system
Generating the administration and scoring system/procedure as a guide to accompany the instrument was part of this stage of its development20. The administration guidelines (Table I, below) include, for each test item (Image 1, above), the activity, IHM component, equipment/material, the layout of the equipment, a picture of the layout, what the assessor says, what the assessor does, the practice item, trials, scoring and stop rules. The scoring guideline (Figure 1, page 9) consists of a scoring scale, the quality of tasks, the speed of performing a task, control of objects and compensation methods used.
To determine the format of the instrument, the process of collecting information and converting it into a score was another important aspect to consider25. For this instrument, the therapist conducting a clinical assessment with a child will be the mechanism by which information is collected. During the assessment, the therapist will use the scoring guideline to record the child's scores, which will afterwards be calculated and translated into numbers. The therapist will follow the prescribed administration and scoring guidelines carefully.
The formulation of a scoring guideline was based on the scale construction of the Assessment of Children's Hand Skills (ACHS)12, the Functional Repertoire of Hand Model41, the Children's Hand Skills Framework (CHSF)42, the content of Early Learning Outcome Measures (ELOM) assessment guide43, and recommendations from the study by Kruger et al.17, to create a preliminary research version.
Determining psychometric properties of assessment instruments
Rudman and Hannah28 emphasised that an assessment instrument should be applicable to what the therapist aims to assess. In developing an instrument evaluation framework, Rudman and Hannah28 identified the following aspects as important for selecting an assessment instrument: clinical utility, standardisation, purpose, psychometric properties and the client's perspective. Psychometric properties consist of item construction, reliability, validity and establishing norms28. For item construction, the items of an instrument must align with the test's purpose, and a rationale for item selection must be included28. After item construction, the assessment instrument's validity and reliability need to be established.
There are four different types of validity, namely face, content, construct and criterion validity. Face validity is defined as how valid an assessment instrument appears from a test taker's perspective44-46. Although not always quantifiable44, it "promotes rapport and acceptance of testing and test results on the part of test takers"20:169. Content validity refers to the adequacy of an instrument to cover the complete construct it sets out to measure, meaning the content of an IHM assessment instrument should be relevant and adequately representative of all IHM domains (relevant content should be included, while irrelevant content should be excluded). To ensure the content validity of an instrument, the construct and domains being measured need to be conceptually defined. Only then can items be selected and constructed to represent the construct sufficiently27,44,46. According to Boateng et al., besides content relevance, representativeness and technical quality are also important19.
Many authors have proposed methods of content validation, referred to as recommended steps or guidelines, but most demonstrate a similar sequence. Some literature suggests a three-stage process to evaluate the content validity of an instrument. The first stage is an a priori effort (developmental stage), in which the researchers use their clinical experience, review the relevant literature and review existing instruments19,23,24,46; it comprises three steps: domain identification, item generation and instrument formation. The second stage is an a posteriori effort (judgment-quantification stage) to evaluate the relevance of the instrument's content (each item and the total scale) - that is, to evaluate to what extent the items developed are representative of the construct and the degree to which they represent the construct they were developed to assess19. This stage also involves seeking multiple expert judgments of the items constructed and obtaining the perceptions of experts who will have to respond to the assessment instrument (the focus of this article)20,23,24,45,47. The third stage is revising, reconstructing and refining, in which the instrument developer employs the experts' comments (items are retained, modified, omitted, or added to the instrument under development)20,47.
For stage two, numerous methods of quantifying experts' degree of agreement regarding the content relevance of an instrument have been proposed. This could be evaluated by obtaining experts' judgment (or consensus) or by using statistical measures23, e.g. the Content Validity Index (CVI). Content validity can also be established by using a predetermined criterion of acceptability (consensus)20,46,47. These methods may involve, for instance, focus groups, Nominal Group Techniques, and online surveys. In both the IHM assessment instrument developed for adults40 and the Assessment of Children's Hand Skills12, expert groups were used to establish content validity. For this study, a larger group of clinical experts47 (occupational therapists as end-users) was reached via an online content validity survey with a predetermined criterion of acceptability, using the CVI statistical technique47 to complete the second stage of content validity.
METHODS
Study design
A quantitative descriptive study design31, using an electronic EvaSys© questionnaire, was conducted to address the research aim to describe the face- and content validity of the UFS IHMAI. The content validity guidelines recommended in the literature during initial instrument development were followed for this study44-47.
Study population and sampling
Limited literature provides guidelines for selecting and using content experts for instrument development, and there is no consensus on the number of experts required to evaluate an instrument23. Various kinds of "experts" may also be involved. Expert judges are individuals with extensive knowledge of a specific subject area, such as instrument development or the domain being studied47 (e.g., IHM assessment), or who possess experience in clinical practice47. Alternatively, target population judges, also called end-users of the instrument19,23, may include therapists. For this study, the term expert and the number of experts were determined based on factors including the phase of the instrument's development (acknowledging that this initial draft will require further rounds of in-depth expert content validity testing), the chosen data collection method, the level and breadth of knowledge among clinical therapists, and well-defined criteria46-47.
The study population consisted of occupational therapists working in different contexts and registered with the Health Professions Council of South Africa (HPCSA) and the Occupational Therapy Association of South Africa (OTASA). Membership of OTASA is not compulsory; its 2511 members therefore represent only a portion of the more than 5000 occupational therapists in South Africa. Therapists were conveniently sampled for this study27. All the responses that the researchers received were used in the analysis of the data.
Although content validity relies on the subjective consensus of experts, the selection of experts (as end-users)19 to review and critique the instrument was regulated by the following well-defined inclusion criteria23:
• occupational therapists registered with the HPCSA and OTASA;
• therapists who have varied clinical experience with at least 2 years in paediatric hand function assessment; and
• therapists who have access to the internet and have an email account.
Measurement instrument
Data were collected through a self-developed online questionnaire via the EvaSys© survey system48. The questionnaire was compiled with items formulated from the literature regarding IHM6,8-15, instrument development25,28 and psychometric properties (specifically face- and content validity)6,11,12,25,27,46, and based on the IHM assessment instrument under development. The questions in the questionnaire were supported with photos taken of children's hands during a simulated evaluation with the assessment instrument, as well as definitions of the IHM components to guide the participant in completing the questionnaire. For each question, participants were asked to respond yes or no. All questions had an additional comment section where therapists could include opinions and recommendations.
The questions were available in English (the primary official language of communication of the HPCSA & OTASA) and divided into five sections:
• background information about the participant;
• the instrument as a whole;
• the instrument's subtests;
• the administration and scoring guidelines; and
• general questions regarding the assessment instrument and recommendations.
Pilot testing of the questionnaire
A pilot study was conducted with four occupational therapists conveniently sampled, who met all the inclusion criteria to participate in the study. Two therapists reviewed a hard copy of the questionnaire and provided feedback on the content, clarity of questions and layout. After their recommendations were applied to the questionnaire with all the related photos, it was converted onto the EvaSys© survey system48. Another two therapists reviewed the electronic questionnaire on EvaSys© and provided feedback regarding the layout of the questionnaire with the photos, the technical aspects of responding to the questionnaire items on EvaSys©, the duration and ease of completing the questionnaire online. After the pilot study, final amendments (such as grammatical and editorial corrections, formatting the appearance on EvaSys©, and ensuring that the final link to access the survey worked) were made, and the questionnaire was uploaded onto the EvaSys© survey system48. The pilot study data were not included in the final analysis.
Data collection procedures
Arrangements were made with the administrator of OTASA, who distributed the emails to their registered members with the necessary information regarding the study and access to the questionnaire using their electronic database. A link to EvaSys© to access the study information and questionnaire was made available online for one month, during which occupational therapists willing to participate could complete the questionnaire in their own time. Occupational therapists received a reminder email after two weeks. At the end of the questionnaire completion period, all the questionnaires were exported from EvaSys© to a Microsoft Excel spreadsheet and stored safely on an external device. All electronic data were stored and backed up on an approved password-protected cloud software platform (EvaSys) to ensure that all personal data of participants are safely stored behind a secure firewall. The data will be stored for ten years after the completion of the study and retained for at least five years from the date of publication (since possible future patenting of the instrument needs to be considered).
Data analysis
Descriptive statistics, namely frequencies and percentages for categorical data and percentiles for numerical data, were calculated. The content validity ratio (CVR) was determined per question, and the content validity index (CVI; the mean of the CVR values) was calculated. The data analysis for this paper was generated using SAS software49.
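The per-question CVR and the overall CVI described above can be sketched as follows. This is a minimal illustration assuming Lawshe's widely used formula, CVR = (nₑ − N/2)/(N/2); the article does not state the exact formula applied in SAS, and the agreement counts in the example are hypothetical:

```python
def cvr(n_agree: int, n_experts: int) -> float:
    """Content validity ratio for one question (Lawshe's formula, assumed):
    CVR = (n_e - N/2) / (N/2), where n_e experts judge the item favourably."""
    half = n_experts / 2
    return (n_agree - half) / half

def cvi(agree_counts: list[int], n_experts: int) -> float:
    """Content validity index: the mean of the per-question CVR values."""
    return sum(cvr(n, n_experts) for n in agree_counts) / len(agree_counts)

# With 55 participants, the study's 80% consensus criterion (44 agreeing)
# corresponds to CVR = (44 - 27.5) / 27.5 = 0.6.
print(round(cvr(44, 55), 2))            # 0.6
# Hypothetical agreement counts for three questions:
print(round(cvi([55, 53, 44], 55), 2))  # 0.84
```

A CVR of 1.0 means every expert agreed; 0.0 means only half did, which is why consensus criteria well above 50% agreement are used.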
Ethical considerations
Ethical approval for this study was obtained from the Health Sciences Research Ethics Committee (HSREC) of the University of the Free State (reference UFS-HSD2019/0224/2304). The participants were informed about the study through an information letter, and voluntary completion of the questionnaire implied informed consent. Participants not complying with the inclusion criteria, as determined from the background information section, were denied access to the rest of the questionnaire. All personal information received from the pilot and the main study was anonymised and kept confidential throughout the study.
RESULTS AND DISCUSSION
Demographic profile
The sample of 55 participants was small, but similar to other online survey studies17; their age, experience and practice settings are displayed in Table II (below). The two main practice settings indicated were schools (n=15, 27.3%) and private practice (n=40, 72.7%). Fields of practice were mostly paediatrics (n=45, 81.8%), with other fields including physical adult rehabilitation (i.e., neuro, hand, and vocational) (n=8, 14.5%), academia (n=1, 1.8%), and geriatrics (n=1, 1.8%).
The participants mainly made use of the following informal ways to assess IHM: observation (n=36, 65.5%); drawing, writing and colouring (n=7, 12.7%); and activities of daily living (n=6, 10.9%). The participants used the following standardised assessment instruments: the Movement Assessment Battery for Children Second Edition (Movement ABC-2)50 (n=3, 5.5%), the Miller Function and Participation Scales (M-FUN)51 (n=3, 5.5%) and the Sensory Integration and Praxis Tests (SIPT)52 (n=2, 3.6%). In the "other" response section, standardised instruments such as the Bruininks-Oseretsky Test of Motor Proficiency Second Edition (BOT-2)53, the Bayley Scales of Infant and Toddler Development Third Edition (Bayley®-III)54, self-developed informal hand function checklists, and the Purdue Pegboard Test55 were listed. Congruent with the literature17, limited familiarity with published IHM instruments was demonstrated. Participants relied on observation, activities of daily living or standardised developmental assessment instruments.
Concept clarification of in-hand manipulation
The second section of the questionnaire dealt with questions regarding the concept clarification of IHM. Most participants (n=51, 92.7%) indicated that the concept clarification section assisted them in recapping and/or understanding the related IHM components in the assessment instrument. Participants regarded the concept clarification section as potentially able to "ensure that all terms are correctly understood by all users" (as one participant stated), as an essential part of the instrument, and as something that, when used, might contribute to the instrument's reliability. This feedback confirmed that a concept clarification section is necessary, as it provides the same baseline theoretical information for the administrators to perform the assessment.
A clear articulation of the construct and sub-constructs (domains) of an assessment instrument is one of the qualities of a well-developed instrument25. Definitions to clearly distinguish between items are an important part of instrument development and content validation19. Such a section is often seen in properly developed assessment instruments.
Face- and content validity of the instrument subtests
Content validity can be established using a predetermined criterion of acceptability (consensus)23,46,56. For this study, consensus to establish the face- and content validity was defined as a positive agreement with a question by at least 44 (80%) participants.
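The consensus threshold above is simply 80% of the 55 respondents, rounded up to a whole participant; a hypothetical snippet of that arithmetic:

```python
n_participants = 55

# ceil(0.8 * n) computed with exact integer arithmetic to avoid
# floating-point rounding surprises: ceil(a / b) == -(-a // b).
threshold = -(-4 * n_participants // 5)
print(threshold)  # 44
```

Using integer floor division here avoids the case where `0.8 * 55` evaluates to slightly more than 44.0 in binary floating point, which would push `math.ceil` to 45.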
Table III (below) shows that finger-to-palm translation had 100% (n=55) agreement for face validity. All the questions relating to content validity had agreement between 96.4% (n=53) and 98.2% (n=54). Only the recommendation "modify" fell below 80% agreement for finger-to-palm translation, at 74.6% (n=41).
Results regarding palm-to-finger translation showed 100% (n=55) agreement for face validity, and questions relating to content validity had agreement between 96.4% (n=53) and 100% (n=55). Both the recommendations "add/remove", at 78.2% (n=43), and "modify", at 74.6% (n=42), fell below 80% agreement for palm-to-finger translation. Participants remarked that finger-to-palm translation is assessed with a variety of items (three), each with different objects (money coins, marbles and dowels), allowing the manipulation of different shapes, sizes and textures. With this variety of finger movements and levels of difficulty required from the child, content under-representation is prevented23,45.
Regarding the marble game, the suggestion was to include more "purpose" to this task by asking the children to put the marbles on a specific picture printed on the towel. Some participants expressed concern about using marbles with young children who might swallow them. However, general safety measures will be incorporated into the guidelines for all test items to avoid any choking hazards. For the dowels in the pegboard game, one participant suggested that the instrument manufacturer must ensure that the dowels are smooth and fit well into the pegboard. Different opinions were given about whether the thickness of the dowels might influence the required level of IHM and if different levels of accuracy would be observed in varying age groups.
However, the pegboard game's dimensions and dowels were based on recommendations from similar instruments11,13,18,57: the board measures 100x100x20 mm, with nine holes (15 mm deep x 7 mm in diameter, 32 mm apart), and the dowels are 32 mm long x 7 mm in diameter. Regarding the use of real five-rand (R5) money coins in the piggy bank activity, a few participants stated that it was good to use everyday objects but advised using plastic "play" money or buttons instead. Recommendations for three-dimensional (3D) production of play money with similar dimensions to a R5 coin are applicable for future manufacturing, in accordance with other instruments used in previous research6,8,11,13,18.
Simple shift had a 90% (n=50) agreement for face validity, and an agreement between 85.5% (n=47) and 96.4% (n=53) for content validity. Only the recommendation "add/remove" fell below 80% agreement for simple shift, at 72.7% (n=40).
Regarding the dressing game, it was advised that a thicker, more durable fabric be used to make the dressing boards, or that actual clothing be used. Furthermore, it was recommended to use only one medium-sized button as opposed to buttons of different sizes. Concerning the activity involving stringing beads on a pipe cleaner, it was advised that the guidelines should state how the forearms and wrists should be stabilised on the table. When the child has assumed the correct position, this might ensure an isolated finger shift movement instead of a wrist or whole-hand movement when putting the bead on the pipe cleaner. Although small beads were recommended for older children, the same medium-sized beads are advised for all ages in future manufacturing.
The results regarding complex shift had a 98.2% (n=54) agreement for face validity. All questions relating to the content validity had an agreement between 89.1% (n=49) and 100% (n=55). Regarding the piggy bank activity, no other recommendations were made apart from using "play" money.
For the fanning card game, it was suggested to use high-quality cards that are smooth and to grade this activity carefully for different ages by using fewer cards for younger age groups. A practical adaptation for the key activity was recommended, namely to provide a step for young children to climb on to reach the door handle, or to include a devised door handle lock unit with fitting keys as part of the instrument. Different-sized keys were recommended for young children, but the same medium-sized keys are advised for future manufacturing for all ages, as with all the other subtests. Simple rotation had a 94.5% (n=52) agreement for face validity, with an agreement between 81.8% (n=45) and 100% (n=55) for content validity. For complex rotation, face validity had a 100% (n=55) agreement, with an agreement between 94.5% (n=52) and 100% (n=55) for content validity. Only the recommendation "add/remove" fell below 80% agreement for simple rotation, at 72.7% (n=40).
Regarding the activity involving the unscrewing of a container lid, the rotation of the lid between fingertips (whilst a coin is rotated in the lid), the money rotation between fingertips, and the peg rotation activity, no comments or recommendations for adapting the items were made. For the nut and bolt activity, it was stated that "play" nuts and bolts could also be used and manufactured through 3D printing. Careful attention to possible compensation methods, such as a child releasing and re-grasping an object or using wrist movements rather than IHM, was recommended. For the money flipping game, replacing the laminated cardboard with a piece of fabric or a small towel was suggested to prevent the coins from slipping while flipping them. Easier, more understandable wording for the instructions in the guidelines was also proposed. Regarding the pencil flipping game, it was advised to consider the pencil grip development of younger children (3- and 4-year-olds) and preferably to include different pencil sizes.
General comments and/or recommendations for the instrument's subtests
General comments indicated that the items were culturally relevant, functional, and representative of daily childhood activities, used everyday objects, were appropriate for children of different ages, and incorporated different levels of difficulty for different age groups. Although the instrument was not intended to assess children's participation in a naturalistic, real-life context58, it does have elements of real-life IHM activities. These findings were supported by the literature12 considered during the instrument development phase.
A recurring theme in the participants' comments was the size of the objects used in the instrument. Since the opinion was that object size might require different related hand function skills and degrees of difficulty, using different sizes for different ages was suggested. In contrast, using the same dimensions (a constant construct to assess) for items (i.e., marbles, money, buttons) across all age groups will allow the instrument to determine internal domain differentiation between groups and provide different age norms during the standardisation of the instrument59. There is a paucity of literature correlating object size and IHM. Evidence that could be considered during the refinement of this instrument relates to the perceived size of an object and how it is seen in terms of the actions the object affords60. Grasp ability relates to object size for objects within the apparent grasp ability of the hand, and hand dominance and age might also play a role in how children perceive graspable objects60. However, an object's size is only one aspect of the manipulation requirements encountered in daily life activities61. All aspects of the object's geometric properties (size, shape and texture) and material properties (rough, smooth, slippery, sticky, compliant) need to be considered62. Electromyography (EMG) signal information obtained before the hand is in contact with an object showed that shape, size and surface properties (such as pre-shaping the hand for grasping a soft toy) have more impact on the muscular system than the actual weight of the object to be grasped62.
Content validity was verified by the agreement of participants (expert therapists) on the adequacy with which the instrument assesses the separate components of IHM. Although the target population (for this instrument, children) is recommended for content validation studies47, children will only be included in future studies.
Face- and content validity of the instrument as a whole
The same predetermined criterion46 was used for the instrument as a whole as for the subtests: consensus was defined as agreement with a question by at least 44 (80.0%) participants. A 98.2% (n=54) agreement indicated that the assessment instrument could assess the IHM of children in South Africa, as shown in Table IV (below).
The only question that did not achieve 80% agreement related to the age appropriateness of the instrument, at 78.2% (n=43). Participants provided valuable remarks regarding the appropriateness and grading of the activities to consider in the refinement of the instrument. For example, one participant asked: "Can we make it more fun?" Indeed, the assessment of children must be done in an interactive, fun63 and child-friendly space to ensure the child's optimal engagement. However, the evaluation of children is a complex process requiring the therapist to adhere to administration guidelines.
Regarding administration time, feedback indicated that the instrument should assess the construct in the shortest possible time. This would depend on the type of assessment (i.e., initial comprehensive assessment or in-depth fine motor-specific assessment), the child's age, and concerns about possible pathology observed in the child. Some participants remarked that 15-20 minutes would be too long for the initial comprehensive assessment of young children, whereas others proposed at least 30 minutes per child. Among the eleven IHM assessment instruments described in a recent scoping review5, the shortest administration time was 5-7 minutes and the longest 20-30 minutes. However, these tests varied in the number of test items (ranging from three to 55). Therefore, considering that the UFS IHMAI consists of 14 items (including trial items), it might be more realistic to presume an administration time of around 20-30 minutes.
Further recommendations included downgrading some items and instructions for children younger than five years, changing the current age interval from 3-12 years to 5-12 years, and structuring the scoring guidelines to allow for age differentiation5. In contrast, eight of the 11 instruments described in the scoping review included children under five years in their age range. It is therefore recommended that the age range not be changed at this stage, but be established only after field testing of the UFS IHMAI (establishing construct validity) has been performed and item difficulty levels have been determined21.
In conclusion, face- and content validity was established for the instrument as a whole, with most questions posed reaching agreement above 80%.
Administration guideline
It was evident that the administration and scoring guidelines are appropriate for this specific assessment instrument and would assist with the execution of each item. All the participants (n=55; 100%) agreed that the administrator's material and/or equipment for each assessment item were clear. According to 94.6% (n=52) of the participants, the administration guideline's wording and layout pictures are appropriate and clear for direct administration of the instrument. Recommendations for future improvement were to shorten the instructions for the children, to reconsider using words such as "palm" and "flip over", to present the instructions for the therapist in bold or in a different text colour, and to language-edit the instructions. According to the literature, the instrument should be reviewed for technical quality (i.e., format, printed layout, grammar, wording) as part of content validity35. The wording of the instructions should be carefully, clearly and concisely constructed47 and appropriate for the child being assessed35, or else it might contribute to measurement error. Participants and the literature also suggested including a background section in which the purpose, population, construct being assessed, and development of the instrument could be presented with supporting evidence35. Further development of this instrument aims to establish cultural fairness and to translate the administration instructions across different ethnolinguistic groups as per the steps outlined by Peña64-66 and the COnsensus-based Standards for the selection of Health Measurement INstruments (COSMIN)67.
All participants (n=55; 100%) agreed that the instructions to be demonstrated were clear and easy to understand, and most (n=49; 89.1%) agreed the same for the verbal administration instructions. Most participants (96.4%, n=53) felt that the practice items allowed for the child were clear and easy to understand for each item and that the stop rules were appropriate. However, a better explanation of what the therapist is allowed to do during the practice run is required: How much time and how many practice opportunities are allowed? May the therapist "teach" the child how to do the task? May the therapist provide hands-on physical support, or demonstrate while giving verbal instructions? The wording of the stop rule for each item should be refined, which will be addressed during the refinement of the instrument. The administration and scoring guidelines could also be more specific regarding measuring time (i.e., with a stopwatch or estimated) and whether there is a time limit for each item (i.e., stopping the item after 2 minutes). Participants recommended that the administration manual include pictures/graphic material to improve understanding of these aspects. Clarification on the general handling of children could likewise be included, e.g., how to handle children with poor concentration and when to allow appropriate breaks. The inclusion of safety/preventive measures (e.g., preventing the swallowing of marbles) and a specific section on possible compensatory methods to "look out for" were suggested for the administration guideline.
Scoring guideline
The majority of participants (96.4%; n=53) confirmed the evidence for the instrument's scoring scale. The instrument's benefit is that it takes into account the quality of task performance, speed16, control of objects, and compensatory methods used, consistent with other instruments. Refinement of the scale implies that a differentiation between left- and right-hand scores needs to be added to the scoring form. Of the IHM instruments described in the scoping review by Kruger et al.5, most (except one) assess only the dominant hand and discourage use of the other hand. However, it is argued that the UFS IHMAI should allow the assessor the option to assess both hands or only the dominant hand (especially for children whose dominance is not established or who are ambidextrous).
General comments regarding the assessment instrument
The last section of the questionnaire contained questions regarding further development of the assessment instrument. All participants (n=55; 100%) indicated the need for a standardised IHM assessment instrument for South African children to guide treatment planning and measure outcomes. Most (n=53; 96.4%) participants felt that the instrument should be developed further, that it would be valuable to establish all psychometric properties for this instrument (n=52; 94.6%), and that it would be valuable if the standardisation of age norms could be established for the diverse South African paediatric population (n=54; 98.2%). Once these norms are available, it will be valuable to include age expectations for each item in the guidelines. These findings are in accordance with results published in previous studies in this field13,17,18.
Most participants (n=53; 96.4%) indicated they would like to use such an instrument in their practice, and 98.2% (n=54) would purchase the UFS IHMAI when it becomes commercially available. Participants suggested a prefabricated instrument with a printed manual and copyrighted assessment sheets to increase the instrument's validity and reliability. One participant commented that this would "ensure that the research that is done is translated to practice". Still, some participants indicated they would prefer a self-fabricated (free) instrument, making their own test items but buying the manual. However, this option would introduce many variables into the assessment process and is hence not possible for this type of instrument. Although publication of an assessment instrument intended for commercial distribution requires a considerable investment of time, financial resources and expertise20, it is intended to develop the UFS IHMAI into a valid and reliable standardised instrument for obtaining reliable data on South African children's IHM skills.
Additionally, 81.8% (n=45) of the participants agreed that such an IHM assessment instrument should form part of a more comprehensive hand function assessment instrument, although it can also be useful on its own. A more comprehensive assessment instrument would allow aspects such as reaching, grasping, manipulating and other fine motor tasks to be evaluated67. Most (n=44; 80%) concurred that such an instrument would assist with a more accurate assessment of children with fine motor difficulties and better treatment planning, and most (n=41; 74.6%) participants neither used nor were aware of any specific hand function assessment instrument with an IHM section.
Content Validity Ratio (CVR) and Content Validity Index (CVI) of the instrument
The CVR provides insight into individual items, whereas the CVI is the mean of the CVRs47 (presented in Table IV for each section and for the instrument as a whole). Most questions were deemed positive, as seen from the CVR values in Table V (below); the CVR values for the questions regarding validity were all high (>0.8). CVI values can range between 0 and 1; the average CVI for all questions was 0.83, which indicates that the questions were relevant.
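For readers unfamiliar with these indices, Lawshe's formulas (as summarised by Gilbert and Prion47) can be sketched as follows; the function names and the agreement counts below are illustrative, not the study's actual per-question data:

```python
def cvr(n_agree: int, n_total: int) -> float:
    """Lawshe's Content Validity Ratio: (n_e - N/2) / (N/2),
    where n_e is the number of experts rating the item positively
    and N is the total number of experts."""
    half = n_total / 2
    return (n_agree - half) / half

def cvi(agree_counts: list[int], n_total: int) -> float:
    """Content Validity Index: the mean of the per-item CVRs."""
    return sum(cvr(n, n_total) for n in agree_counts) / len(agree_counts)

# Illustrative agreement counts for five questions among 55 participants:
counts = [55, 54, 53, 50, 47]
print(round(cvr(54, 55), 2))      # 0.96
print(round(cvi(counts, 55), 2))  # 0.88
```

A CVR of 1.0 means unanimous agreement, 0 means exactly half agreed, and negative values mean fewer than half agreed, which is why values above 0.8 are read as strong item-level consensus.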
Limitations of the study
The occupational therapists who evaluated this instrument were independent of the development process and were regarded as expert judges or target population judges practicing in the field19,47. However, in subsequent rounds of content validity testing for this type of instrument, experts with more advanced and varied levels of expertise might be necessary. The response rate was lower than in other online surveys17 and might have been increased by using a snowball sampling method and direct recruitment of participants. Using a quantitative survey methodology to evaluate face and content validity provided objective descriptive data. However, using different research methods (i.e., qualitative) and other sources (i.e., children) could augment future psychometric studies of this instrument.
At the time of the study, the EvaSys® survey system48 could not support video recordings of the children using the assessment instrument. However, the questionnaire provided definitions and photos of all assessment activities as a visual guide to all questions. The questionnaire was detailed, and although it predominantly consisted of closed-ended questions, most questions also had the option to provide opinions and suggestions (open-ended). The answers to the open-ended questions were valuable, detailed and practical, and could be incorporated into the refinement of the instrument during the next stages of its development and psychometric testing. Although not the aim of the study, some of the questions and feedback from this study provided evidence on the instrument's clinical utility regarding applicability and practicality.
Recommendations
Concerning further research in the continuous development of this instrument, the following recommendations are proposed:
• consideration of participants' recommendations from this study into the current instrument refinement before field testing;
• continuous refinement of the instrument, followed by field testing;
• another round of content validity testing on the revised instrument using experts (in instrument development, with more advanced levels of experience in hand function assessment of children) and CVI calculations;
• further psychometric testing;
• the translation of the instrument for application in the main South African linguistic groups;
• the refinement and revision of the administration manual and scoring guidelines into an online version;
• development and production of assessment kits;
• developing an electronic administration and scoring data capturing system; and
• the development of an intervention guide to support the assessment.
CONCLUSION
This study offers evidence supporting the face and content validity of the UFS IHMAI. Content relevance, content representativeness and technical quality were determined through expert judgment by qualified occupational therapists. The study confirmed that the proposed conditions necessary to claim content validity were met for stage one (the developmental phase, with the three steps of domain identification, item generation and instrument formation) and stage two (the judgment-quantification stage) of the content validity process.
The findings of this study provide practical information for the third stage of the content validity process, namely the revising and refinement of this newly developed assessment instrument. It is recommended that research continues for the evaluation of psychometric properties, and standardisation into a norm-referenced test for the clinical assessment of South African children's IHM skills, to improve assessment practices and support evidence-based practice in occupational therapy.
Author Contributions
Marieta Visser identified the research topic and supervised the study. Marieta Visser, Nicke Orffer, Chante MacDonald and Jana Basson formulated the research aims and objectives, and contributed to the conception and design of the study. Nicke Orffer, Chante MacDonald and Jana Basson collected the data. Mariette Nel analysed the data. Marieta Visser, Nicke Orffer, Chante MacDonald and Jana Basson interpreted the data and prepared the first draft of the manuscript for the first round of reviews. Marieta Visser and Mariette Nel revised the manuscript in the second review round and finalised it. All authors approved the final manuscript.
Acknowledgements
Occupational therapists who participated in the study, and Dr. Daleen Struwig, medical writer/editor, Faculty of Health Sciences, University of the Free State, for technical and editorial preparation of the manuscript.
Conflicts of Interest
The authors have no conflict of interest to declare.
REFERENCES
1. Exner CE. Manipulation development in normal preschool children. In: 66th Annual Conference of the American Occupational Therapy Association, Minneapolis, Minnesota, 22 April 1986.
2. Exner CE. The zone of proximal development in in-hand manipulation skills of non-dysfunctional 3- and 4-year-old children. American Journal of Occupational Therapy. 1990;44(10): 884-891.
3. Exner CE. Intervention for children with hand skill problems. In: Henderson A, Pehoski C. Hand Function in the Child: Foundations for Remediation. 2nd ed. Maryland Heights, MO: Mosby Elsevier; 2006: 239-266.
4. Pont K, Wallen M, Bundy A. Conceptualising a modified system for classification of in-hand manipulation. Australian Occupational Therapy Journal. 2009;56(1): 2-15. http://dx.doi.org/10.1111/j.1440-1630.2008.00774.x
5. Kruger A, Strauss M, Visser M. In-hand manipulation assessment instruments for children: a scoping review. British Journal of Occupational Therapy. 2021;85(2): 83-98. https://doi.org/10.1177/03080226211037859
6. Exner CE. Content validity of the in-hand manipulation test. American Journal of Occupational Therapy. 1993;47(6): 505-513. https://doi.org/10.5014/ajot.47.6.505
7. Case-Smith J. The effects of tactile defensiveness and tactile discrimination on in-hand manipulation. American Journal of Occupational Therapy. 1991;45(9): 811-818. https://doi.org/10.5014/ajot.45.9.811
8. Jewell K, Humphry R. Reliability of an observation protocol on in-hand manipulation and functional skill development. Physical & Occupational Therapy in Pediatrics. 1994;13(3): 67-82. https://doi.org/10.1080/j006v13n0306
9. Pehoski C, Henderson A, Tickle-Degnen L. In-hand manipulation in young children: translation movements. American Journal of Occupational Therapy. 1997;51(9): 719-728. https://doi.org/10.5014/ajot.51.9.719
10. Bonnier B, Eliasson AC, Krumlinde-Sundholm L. Effects of constraint-induced movement therapy in adolescents with hemiplegic cerebral palsy: a day camp model. Scandinavian Journal of Occupational Therapy. 2006;13(1): 13-22. https://doi.org/10.1080/11038120510031833
11. Pont K, Wallen M, Bundy A, Case-Smith J. Reliability and validity of the test of in-hand manipulation in children ages 5 to 6 years. American Journal of Occupational Therapy. 2008;62(4): 384-392. https://doi.org/10.5014/ajot.62.4.384
12. Chien CW, Brown T, McDonald R. Examining content validity and reliability of the Assessment of Children's Hand Skills (ACHS): a preliminary study. American Journal of Occupational Therapy. 2010;64(5): 756-767. https://doi.org/10.5014/ajot.2010.08158
13. Visser M, Nel M, de Vries J, Klopper E, Olën K, van Coller J. In-hand manipulation of children aged four and five-years-old: translation, rotation and shift movements, in Bloemfontein. South African Journal of Occupational Therapy. 2014;44(2): 22-28.
14. de Vries L, van Hartingsveldt MJ, Cup EHC, Nijhuis-van der Sanden MWG, de Groot IJM. Evaluating fine motor coordination in children who are not ready for handwriting: which test should we take? Occupational Therapy International. 2015;22(2): 61-70. https://doi.org/10.1002/oti.1385
15. Raja K, Katyal P, Gupta S. Assessment of in-hand manipulation: tool development. International Journal of Health & Allied Sciences. 2016;5(4): 235-246. https://doi.org/10.4103/2278-344X.194092
16. Kaiser ML, Carrasco CA. Reliability of the modified in-hand manipulation test and the relationship between in-hand manipulation and handwriting. Iranian Rehabilitation Journal. 2019;17(3): 279-284. https://doi.org/10.32598/irj.17.3.279
17. Kruger A, Strauss M, Visser M. Assessment of in-hand manipulation by occupational therapists in paediatric practices in South Africa. South African Journal of Occupational Therapy. 2021;51(2): 11-21. http://doi.org/10.17159/2310-3833/2021/vol52n2a3
18. Visser M, Nel M, du Plessis C, Jacobs S, Joubert A, Muller M, van Soest R. In-hand manipulation (IHM) in children 6 and 7 years of age: a follow-up study. South African Journal of Occupational Therapy. 2016;46(2): 52-58. http://dx.doi.org/10.17159/2310-3833/2016/v46n2a9
19. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Frontiers in Public Health. 2018;6: 149. https://doi.org/10.3389/fpubh.2018.00149
20. Urbina S. Essentials of Psychological Testing. Hoboken, NJ: John Wiley & Sons Inc.; 2004.
21. Crocker L, Algina J. Introduction to Classical & Modern Test Theory. Mason, OH: Cengage Learning; 2008.
22. DeVellis RF. Scale Development: Theory and Applications. 4th ed. Thousand Oaks, CA: SAGE Publications, Inc.; 2017.
23. Almanasreh E, Moles R, Chen TF. Evaluation of methods used for estimating content validity. Research in Social and Administrative Pharmacy. 2019;15(2): 214-221. https://doi.org/10.1016/j.sapharm.2018.03.066
24. Delgado-Rico E, Carretero-Dios H, Ruch W. Content validity evidences in test development: an applied perspective. International Journal of Clinical and Health Psychology. 2012;12(3): 449-460. https://doi.org/10.5167/uzh-64551
25. Kielhofner G. Developing and evaluating quantitative data collection instruments. In: Kielhofner G, editor. Research in Occupational Therapy: Methods of Inquiry for Enhancing Practice. Philadelphia, PA: F.A. Davis Company; 2006: 155-176.
26. Polit DF, Beck CT. The content validity index: are you sure you know what's being reported? Critique and recommendations. Research in Nursing & Health. 2006;29(5): 489-497. https://doi.org/10.1002/nur.20147
27. Burns N, Grove S. Burns and Grove's The Practice of Nursing Research: Appraisal, Synthesis and Generation of Evidence. 9th ed. St. Louis, MO: Elsevier; 2020.
28. Rudman D, Hannah S. An instrument evaluation framework: description and application to assessments of hand function. Journal of Hand Therapy. 1998;11(4): 266-277. https://doi.org/10.1016/S0894-1130(98)80023-9
29. Benson J, Clark F. A guide for instrument development and validation. American Journal of Occupational Therapy. 1982;36(12): 789-800. https://doi.org/10.5014/ajot.36.12.789
30. Hambleton RK, Jones RW. Comparison of classical test theory and item response theory and their applications to test development. Educational Measurement: Issues and Practice. 1993;12(3): 38-47. https://doi.org/10.1111/j.1745-3992.1993.tb00543.x
31. Law M. Measurement in occupational therapy: scientific criteria for evaluation. Canadian Journal of Occupational Therapy. 1987;54(3): 133-138. https://doi.org/10.1177/000841748705400308
32. Richardson PK. Use of standardised tests in pediatric practice. In: Case-Smith J, O'Brien JC, editors. Occupational Therapy for Children. 6th ed. Maryland Heights, MO: Mosby Elsevier Inc.; 2010: 216-218.
33. Laver Fawcett AJ. Principles of Assessment and Outcome Measurement for Occupational Therapists and Physiotherapists: Theory, Skills and Application. West Sussex: John Wiley and Sons, Ltd.; 2007.
34. Case-Smith J, Exner CE. Hand function evaluation and intervention. In: Case-Smith J, O'Brien JC, editors. Occupational Therapy for Children and Adolescents. 7th ed. North York, Canada: Elsevier Inc.; 2015: 220-257.
35. Pett M, Lackey N, Sullivan J. Making Sense of Factor Analysis. Thousand Oaks, CA: SAGE Publications, Inc.; 2003.
36. Bieber E, Smits-Engelsman BCM, Sgandurra G, Cioni G, Feys H, Guzzetta A, Klingels K. Manual function outcome measures in children with developmental coordination disorder (DCD): systematic review. Research in Developmental Disabilities. 2016;55: 114-131. https://doi.org/10.1016/j.ridd.2016.03.009
37. Law M, Cooper B, Strong S, Stewart D, Rigby P, Letts L. The person-environment-occupation model: a transactive approach to occupational performance. Canadian Journal of Occupational Therapy. 1996;63(1): 9-23. https://doi.org/10.1177/000841749606300103
38. Skard G, Bundy A. Test of playfulness. In: Parham LD, Fazio L, editors. Play in Occupational Therapy for Children. 2nd ed. St. Louis, MO: Mosby; 2008: 71-93.
39. Elliott JM, Connolly KJ. A classification of manipulative hand movements. Developmental Medicine & Child Neurology. 1984;26(3): 283-296. http://dx.doi.org/10.1111/j.1469-8749.1984.tb04445.x
40. Klymenko G, Liu KPY, Bissett M, Fong KNK, Welage N, Wong RSM. Development and initial validity of the in-hand manipulation assessment. Australian Occupational Therapy Journal. 2018;65(2): 135-145. https://doi.org/10.1111/1440-1630.12447
41. Kimmerle M, Mainwaring L, Borenstein M. The functional repertoire of the hand and its application to assessment. American Journal of Occupational Therapy. 2003;57(5): 489-498. https://doi.org/10.5014/ajot.57.5.489
42. Chien CW, Brown T, McDonald R. A framework of children's hand skills for assessment and intervention. Child: Care, Health and Development. 2009;35(6): 873-884. https://doi.org/10.1111/j.1365-2214.2009.01002.x
43. Pietersen J, Maree K. Standardisation of a questionnaire. In: Maree K. First Steps in Research. 2nd ed. Pretoria: Van Schaik Publishers; 2016: 238-247.
44. Lynn MR. Determination and quantification of content validity. Nursing Research. 1986;35(6): 382-386. https://doi.org/10.1097/00006199-198611000-00017
45. Polit DF, Beck CT. Essentials of Nursing Research: Appraising Evidence for Nursing Practice. Philadelphia, PA: Lippincott Williams & Wilkins; 2006.
46. Haynes SN, Richard DCS, Kubany ES. Content validity in psychological assessment: a functional approach to concepts and methods. Psychological Assessment. 1995;7(3): 238-247. https://doi.org/10.1037/1040-3590.7.3.238
47. Gilbert GE, Prion S. Making sense of methods and measurement: Lawshe's Content Validity Index. Clinical Simulation in Nursing. 2016;12(12): 530-531. https://doi.org/10.1016/j.ecns.2016.08.002
48. EvaSys. EvaSys (Version 8.0). EvaSys Survey Automation Cloud Software. Electric Paper Evaluationssysteme GmbH, Lüneburg; 2019. https://consiliumdcs.com/wp-content/uploads/2020/06/EvaSys_Cloud_Broschuere_V8.0_EN_Consilium-2.pdf
49. SAS Institute Inc. SAS Software Version 9.4. Cary, NC: SAS Institute Inc.; 2019.
50. Henderson SE, Sugden D, Barnett AL. Movement Assessment Battery for Children-2. 2007. APA PsycTests. https://doi.org/10.1037/t55281-000
51. Miller, L. J. Miller Function and Participation Scales (M-FUN). 2006. Pearson. [ Links ]
52. Ayres, A. J. Sensory Integration and Praxis Tests (SIPT) Manual. 1989. Western Psychological Services. [ Links ]
53. Bruininks, R. H., & Bruininks, B. D. Bruininks-Oseretsky Test of Motor Proficiency, Second Edition (BOT-2). 2005. Pearson Assessment. [ Links ]
54. Nguyen KVH. Adaptation of the Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III) for Vietnam: A Preliminary Study. Doctoral thesis. New York, NY: St. John's University ProQuest Dissertations Publishing; 2017, ProQuest Number 10609746. Available at: https://www.proquest.com/docview/1904873381?pq-origsite=gscholar&fromopenview=true (accessed 15 February 2023). [ Links ]
55. Lindstrom-Hazel D, VanderVlies Veenstra N. Examining the Purdue Pegboard Test for occupational therapy practice. Open Journal of Occupational Therapy. 2015;3(3): 1-15. https://doi.org/10.15453/2168-6408.1178
56. Humphry R, Jewell K, Rosenberger RC. Development of in-hand manipulation and relationship with activities. American Journal of Occupational Therapy. 1995;49(8): 763-771. https://doi.org/10.5014/ajot.49.8.763
57. Shultz KS, Whitney DJ, Zickar MJ. Measurement Theory in Action: Case Studies and Exercises. 2nd ed. New York, NY: Routledge; 2014. https://doi.org/10.4135/9781452224749
58. Chien CW, Brown T, McDonald R. Cross-cultural validity of a naturalistic observational assessment of children's hand skills: a study using Rasch analysis. Journal of Rehabilitation Medicine. 2011;43(7): 631-637. https://doi.org/10.2340/16501977-0827
59. Linkenauger SA, Witt JK, Proffitt DR. Taking a hands-on approach: apparent grasping ability scales the perception of object size. Journal of Experimental Psychology: Human Perception and Performance. 2011;37(5): 1432-1441. https://doi.org/10.1037/a0024248
60. Andersen Hammond ER, Shay BL, Szturm T. Objective evaluation of fine motor manipulation - a new clinical tool. Journal of Hand Therapy. 2009;22(1): 28-36. https://doi.org/10.1197/j.jht.2008.06.006
61. Fligge N, Urbanek H, Van der Smagt P. Relation between object properties and EMG during reaching to grasp. Journal of Electromyography and Kinesiology. 2013;23(2): 402-410. https://doi.org/10.1016/j.jelekin.2012.10.010
62. Mulligan SE. Occupational Therapy Evaluation for Children: A Pocket Guide. 2nd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2013.
63. SurveyCTO. https://www.surveycto.com (accessed 24 January 2022).
64. Peña ED. Lost in translation: methodological considerations in cross-cultural research. Child Development. 2007;78(4): 1255-1264. https://doi.org/10.1111/j.1467-8624.2007.01064.x
65. Dunn W. Measurement issues and practices. In: Law M, Baum C, Dunn W, editors. Measuring Occupational Performance: Supporting Best Practices in Occupational Therapy. 2nd ed. Thorofare, NJ: Slack Incorporated; 2005: 21-32.
66. Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine. 2000;25(24): 3186-3191. https://doi.org/10.1097/00007632-200012150-00014
67. American Occupational Therapy Association. Occupational Therapy Practice Framework: Domain and Process. 4th ed. American Journal of Occupational Therapy. 2020;74(Suppl 2). https://doi.org/10.5014/ajot.2020.74S2001
Correspondence:
Marieta Visser
Email: vissermm@ufs.ac.za
Submitted: 24 January 2023
Reviewed (Round 1): 23 April 2023
Resubmitted: 2 October 2023
Reviewed (Round 2): 19 February 2024
Resubmitted: 29 February 2024
Accepted: 1 March 2024
* Undergraduate student at the time of the study, Department of Occupational Therapy, School of Health and Rehabilitation Sciences, Faculty of Health Sciences, University of the Free State, Bloemfontein, South Africa
Editor: Blanche Pretorius: https://orcid.org/0000-0002-3543-0743
Data availability: Available upon reasonable request from the corresponding author.
Funding: The authors did not receive any funding for this study and have no conflicts of interest to declare.