    Journal of Education (University of KwaZulu-Natal)

    On-line version ISSN 2520-9868 · Print version ISSN 0259-479X

    Journal of Education, no. 98, Durban, 2025

    https://doi.org/10.17159/2520-9868/i98a05 

    ARTICLES

     

    Faculty perspectives on the role of ChatGPT-4.0 in higher education assessments

     

     

    Suriamurthee Moonsamy Maistry I; Upasana Gitanjali Singh II

    I University of KwaZulu-Natal. maistrys@ukzn.ac.za. https://orcid.org/0000-0001-9623-0078
    II University of KwaZulu-Natal. singhup@ukzn.ac.za. https://orcid.org/0000-0002-9943-011X

     

     


    ABSTRACT

    The rapid development of artificial intelligence (AI) has significantly influenced various sectors, including education. A notable advancement is the use of AI language models in assessments. AI language models, such as GPT-4.0, are being trialled to develop innovative tools to enhance assessment processes. The adoption of AI in higher education has been met with both enthusiasm and scepticism from university management and academics and has, as such, created much uncertainty and anxiety in the sector. This qualitative study, which draws on the unified theory of acceptance and use of technology, explores university academics' perspectives on the multifaceted impact of AI language models on educational assessment. The study's objective was to determine university academics' perspectives on assessment in the context of radical innovations in AI technology in order to establish proclivities for technology adoption and the extent of instructional responses and practice. A sample of 29 academics, drawn from a South African university, responded to an online open-ended schedule of questions, and an open-coding, thematic analysis was applied. The findings revealed that, in the main, participants believed that AI could make student assessment processes more efficient and improve the quality of assessments, for example, by generating and validating multiple-choice questions, emphasising intrinsic motivation, and creating a more effective formative feedback process. The data also revealed participants' pleas for circumspection and responsive institutional policy, and cautioned about (un)witting student transgressions and criminality. Given the relative novelty of generative AI in the academic arena and the exploratory nature of this study, this article offers tentative implications for theory and practice.

    Keywords: AI language models, educational assessment, higher education, ethical AI usage


     

     

    Introduction

    Artificial intelligence (AI) technology development has been exponential in the last two decades (Marquis et al., 2024). While AI technology is likely to be disruptive (Girasa, 2020), the advent of ChatGPT, in particular, might be described as having the potential for unimagined disruption in the education sector (Alier et al., 2024). The reaction of higher education institutions could be described as understandably knee-jerk, given the novelty of the ChatGPT innovation and the absence of empirical research on which to base appropriate university and broader higher education policy. ChatGPT, in particular, has received mixed reactions from university academics, who range from the technologically savvy (technophiles) to the technologically averse (technophobes; Abdipour et al., 2024). Since it became freely available, there has been a proliferation of research on various aspects of its application in university programmes, across the spectrum from undergraduate to postgraduate study. Discourses of AI in the higher education community range from inevitability and uncompromising compliance, to notions of loss of authority and control of the teaching and learning enterprise (Bearman et al., 2023). It is becoming increasingly clear that the university teacher's work will likely be impacted in particular ways, especially regarding assessment, which constitutes a key element of the teaching and learning enterprise.

    The research problem that this study responds to relates to the uncertainty that ChatGPT has created around the basic tasks that university academics engage in on a daily basis, namely teaching and research. ChatGPT, as a technological innovation, has created a lacuna in our understanding of the impact that such technology might have on curriculum development, teaching, and assessment. While the tasks of university academics extend well beyond these narrowly defined dimensions, in this study, we focus specifically on university academics' pedagogical and curriculum responses. To this end and for this article, we extract from a broader National Research Foundation-funded project the following research objective: To determine university academics' perspectives on assessment in the context of radical innovations in artificial intelligence technology with the view to establishing proclivities for technology adoption and the extent of instructional responses and practice.

    There is a definite need for empirically based understandings of academics as they contemplate their respective agendas in a dynamic, rapidly changing technological environment. Ongoing curriculum review and critical reflection on current teaching practice in higher education are better served when strategic curriculum and pedagogic amendments and refinements are based on research-based knowledge. A more specific rationale is that this study addresses a distinct and significant lacuna in the field and speaks directly to a South African national imperative regarding the agenda of higher education in a dynamic technological environment.

    We draw on a subset of data from a wider National Research Foundation-funded study that focused on curriculum and pedagogical responses of university academics to ChatGPT-4.0, with a particular focus on the implications of AI for assessment. This article draws specifically on these participants' responses to address the assessment challenges and opportunities that generative AI might present.

     

    A brief overview of extant literature

    Research on AI use in higher education has experienced significant growth, trebling from 2016 to 2022, with United States and Chinese scholars leading the expansion of research in this field (Crompton & Burke, 2023). Some early research offered insights into teacher education (Trust et al., 2023), language teaching (Kohnke et al., 2023), teaching and learning benefits and drawbacks in general (Baidoo-Anu & Owusu Ansah, 2023), and case studies of personal reflections on practice (Yang, 2023). Early preliminary literature reviews have been on the impact of generative AI on education (Chiu, 2023), and on learning motivation (Ali et al., 2023).

    Crompton and Burke's (2023) systematic review suggested that AI has been used for assessment and evaluation purposes and for monitoring student learning. Since the advent of ChatGPT and its exposure to the university sector in 2023, there has been a proliferation of research and scholarship on this "new" development, with attention being drawn to, among other concerns, the ethics of research that apply AI language models (Sun, 2023). A review of the literature on AI and its use in assessments in higher education (González-Calatayud et al., 2021) revealed the United States to be the main site for this scholarship. This study also indicated that while higher education practitioners have mainly experimented with AI as a formative assessment tool, explicit educational theory did not inform these practices.

    An analysis of guidance documents from 135 English-medium educational websites by El Khoury revealed a diverse discourse on the relationship between AI and assessment; El Khoury noted that "differences were closely related to the role of GenAI in assessment or the relationship between GenAI and assessment" (2024, para. 8). While some institutions discounted generative AI's impact on assessment, others recognised the significance of AI in assessment through their choice of specific terms "such as 'AI-enabled assessment,' 'AI-driven assessments,' 'GenAI enhanced,' 'AI-based,' and 'advancing assessment with AI'" (El Khoury, 2024, para. 8). The El Khoury study, which was based entirely on institutions in the Global North, also signalled that guidance to university teachers (including guidance on assessment policies and practices) is likely to come in waves, given the fluidity and dynamic state of flux of this environment. In higher education institutions where first-wave guidance is yet to make an appearance in any formal fashion, the question arises as to how university academics traverse the rapidly changing terrain of teaching and assessment. In a recent local study, Tarisayi argued:

    Conceptualizing tools like ChatGPT as amplifying rather than automating academia's technical capacities, with protocols ensuring human oversight, provides the most constructive paradigm. Rather than technologies threatening academics' relevance, an agile, ethical integration strategy upholding rigorous pedagogical, research and assessment standards while expanding inclusion and insight is advocated. (2024, p. 2)

    Arguably the most troubling and disruptive aspect of the most recent generative AI innovations is the issue of student cheating on assessments, given that, as Gamage et al. asserted, this technology:

    Can generate written content and respond to queries at a level that is nearly indistinguishable from a human writer . . . [leading] to concern that students will use ChatGPT's capabilities to cheat on written formative and summative assessments. (2023, p. 1)

    Results from early exploratory studies indicate that creative ways can be devised as a deterrent to academic dishonesty (Akintande, 2024). However, there remains a distinct concern about the threat that generative AI presents to the traditional ways assessments are devised and administered (Cotton et al., 2024). There is a realisation that proactive policy review and adaptation accompanied by ongoing continuing professional development programmes for university teachers are imperative, as are student educative engagements that emphasise the ethicality of AI usage (Cotton et al., 2024). A significant observation of the extant literature is its reliance on positivist ontological and epistemological paradigms, as well as the predominance of quantitative studies. There is thus a need for more nuanced qualitative perspectives on this phenomenon. In the following section, we explain a widely used theoretical framework in technology education research to analyse the acceptance and use of technology in various contexts.

     

    The unified theory of acceptance and use of technology framework as a theoretical foundation

    The unified theory of acceptance and use of technology (UTAUT), as synthesised by Venkatesh et al. (2016), serves as a foundational theoretical framework for this study, helping to explain the drivers of technology adoption in the context of AI-powered assessment tools like ChatGPT, Grammarly, and GitHub Copilot. As AI-generated tools gain prominence in higher education, UTAUT provides a structured way to analyse factors that influence both student and faculty acceptance and engagement with these technologies, particularly in assessment contexts.

    UTAUT consolidates insights from eight prior technology acceptance models, centring on four primary constructs: performance expectancy, effort expectancy, social influence, and facilitating conditions. These constructs have been adapted in recent research to understand and predict the unique factors involved in adopting AI-driven tools for automated assessment and feedback in educational settings.

    1. Performance expectancy: In the context of AI-driven assessment, this refers to users' beliefs that AI tools will enhance learning outcomes by providing timely, customised feedback, making assessments more meaningful and efficient. Given AI's ability to generate detailed responses and adapt feedback, recent studies (Musyaffi et al., 2024) suggest that educators and students view these tools as enhancing academic performance, thereby increasing their adoption.

    2. Effort expectancy: The perceived ease of use of AI tools directly impacts their adoption. Studies have revealed that tools like ChatGPT and Grammarly are designed with user-friendly interfaces, making complex AI functionalities accessible with minimal technical effort. This ease of use is essential for acceptance among educators and students with varying levels of technological expertise (Akinnuwesi et al., 2022).

    3. Social influence: This factor captures the role of peer and institutional endorsement in encouraging technology adoption. In higher education, as more institutions and faculty members endorse AI-driven tools for assessments, students are likely to follow suit, perceiving these tools as legitimate and beneficial for academic success. Social influence thus becomes pivotal, especially as AI technologies like ChatGPT-4.0 and Copilot gain visibility and acceptance within academic communities.

    4. Facilitating conditions: The presence of technical infrastructure, training, and resources supporting AI tool use significantly affects adoption rates. For instance, access to digital resources, institutional support for AI integration, and adequate training on AI ethics and usage are essential for fostering acceptance. Abbad (2021) emphasised the need for supportive infrastructure to facilitate smooth AI adoption in educational environments.

    UTAUT also identifies demographic and contextual variables-gender, age, experience, and voluntariness of use-as moderating factors in technology adoption. While the current study does not focus on these variables in depth, they are noted as potentially influential for understanding variations in acceptance across different student and faculty groups, warranting further research.

    UTAUT is particularly relevant for this study due to its ability to encapsulate the complex interplay between human factors and technology acceptance. As AI tools increasingly automate assessment tasks, it becomes essential to assess how these tools are received by users within the unique context of education. Given the rapid advancements in generative AI and their adoption in educational settings, UTAUT provides a structured approach to examining both the advantages and limitations associated with AI-generated tools. For instance, while tools like ChatGPT and Copilot offer benefits in terms of scalability and responsiveness in assessment, they also present challenges such as ethical considerations, potential biases, and the risk of over-reliance on technology for feedback.

    The use of UTAUT in this study foregrounds the factors influencing user acceptance of AI tools and highlights the impact of these tools on educational practices, thereby enriching the understanding of how AI can reshape assessment and feedback in higher education.

     

    Research methodology

    This study adopted a qualitative approach rooted in an interpretive paradigm to capture rich, context-specific insights into university academics' responses to generative AI's impact on curriculum and pedagogy. The interpretive framework assumes that reality is subjective, complex, and context dependent (Cohen et al., 2017). Ethical clearance was obtained from the University of KwaZulu-Natal (UKZN) under protocol number HSSREC/00005732/2023.

    Purposive sampling was employed to select a targeted group of academics from the College of Law and Management Studies and the College of Humanities at UKZN. These colleges were chosen because they represent a range of disciplinary perspectives on curriculum and pedagogy, particularly with regard to the ethical and societal impacts of generative AI. Participants were selected based on their teaching roles, familiarity with curriculum design, and involvement with student assessment because these factors were deemed essential for capturing relevant insights into the integration of AI into educational practices.

    Data were collected through an online question schedule administered via Google Forms. The choice of Google Forms allowed for accessible and convenient participation, ensuring data security while preserving anonymity. The online question schedule link was shared directly with participants via university email, providing easy access while allowing participants to respond at their convenience within a 3-week period. The online question schedule comprised 29 questions: nine focused on demographic information, and 20 aimed to elicit reflective, qualitative responses regarding the impact of AI-generated tools on curriculum and pedagogy.

    The expected completion time of approximately 35 minutes allowed participants to provide detailed responses without overwhelming them-balancing depth with accessibility. Prior to the main data collection, the questionnaire was piloted with two academics to refine question clarity, flow, and relevance. This pilot stage helped ensure that prompts were clear and encouraged meaningful reflection aligned with the study's aims.

    The responses of 29 participants were recorded anonymously and exported from Google Forms into Excel for organisation and preparation before being imported into NVivo (Version 12). NVivo was selected for its robust capacity to manage and analyse large amounts of qualitative data systematically. Using NVivo allowed for efficient coding and theme identification, enabling a structured and reproducible analysis process that could ensure consistency and depth in theme development.

    Coding was conducted iteratively, utilising both deductive and inductive approaches. The deductive approach was informed by established literature on AI in education, while inductive coding allowed for emergent themes specific to participants' experiences and perspectives. This dual approach enabled the identification of both anticipated and novel themes, enhancing the comprehensiveness of the analysis (Clarke et al., 2015).

    To ensure trustworthiness, data were analysed through iterative coding cycles, and themes were refined collaboratively to reduce bias. Following initial coding, participant validation was sought by providing participants with extracts of their responses to verify accuracy and ensure that interpretations aligned with their intended meaning. This participant validation process reinforced the credibility of the findings by involving participants in the interpretive process, thereby enhancing the study's rigour and reliability.

    In NVivo, thematic analysis was carried out by organising coded data into broader categories, and patterns across participants' responses were examined to formulate themes. This computer-assisted analysis facilitated the handling of substantial qualitative data, allowing for a more efficient and systematic theme development process compared to manual coding. NVivo's search, coding, and categorisation tools supported the refinement of themes and subthemes, ensuring a structured, transparent analysis that bolstered the interpretive depth of the study. The methodology employed aligns with the study's aim to explore nuanced academic responses to AI in education, producing data-rich insights to inform future research and practice.
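
    For readers who want a concrete sense of how a hybrid deductive-inductive first coding pass might operate, the following minimal Python sketch tallies responses against codebook keywords. It is purely illustrative: the authors worked in NVivo 12, and the codebook keywords, function name, and keyword-matching shortcut below are hypothetical simplifications of what is, in practice, an interpretive human process.

    ```python
    # Illustrative only: the authors coded in NVivo 12. This hypothetical
    # Python fragment mirrors the hybrid deductive/inductive logic the study
    # describes, reducing an interpretive human process to a keyword first pass.
    from collections import defaultdict

    # Deductive starter codes drawn from the UTAUT constructs; the keyword
    # lists are invented examples, not the study's actual codebook.
    DEDUCTIVE_CODES = {
        "performance_expectancy": ["efficient", "quality", "feedback", "improve"],
        "effort_expectancy": ["easy", "time-consuming", "prompt"],
        "social_influence": ["policy", "institution", "colleague"],
        "facilitating_conditions": ["training", "support", "resource", "workshop"],
    }

    def first_pass_code(response: str) -> dict:
        """Tag a response with every deductive code whose keywords it mentions.
        Text that attracts no tag would be set aside for inductive (open)
        coding by the researcher, which no keyword match can replace."""
        text = response.lower()
        tags = defaultdict(list)
        for code, keywords in DEDUCTIVE_CODES.items():
            for kw in keywords:
                if kw in text:
                    tags[code].append(kw)
        return dict(tags)

    # An anonymised excerpt of the kind the online schedule elicited.
    print(first_pass_code(
        "More workshops and seminars will be a valuable aid to lecturers."
    ))
    # -> {'facilitating_conditions': ['workshop']}
    ```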

     

    Findings

    Participants in the study shared varied perspectives on using AI language models, particularly ChatGPT-4.0, in student assessments. Four key themes emerged, highlighting AI's potential benefits to the assessment process, and are discussed in turn below.

    AI language models could enhance student assessments

    Participants noted that AI could enhance assessment by increasing its pace, introducing diverse assessment forms, and improving the overall quality of assessments. One prevalent use of AI language models that participants reported engaging with is the generation and validation of multiple-choice questions. The following participant excerpt captures the multifaceted advantages that ChatGPT-4.0 might present for assessments:

    I use ChatGPT-4.0 to generate a response to the assessment; then, I set criteria to critique and provide feedback to improve. And redesign my assessments that require students to reflect on their own experiences, emphasising the process of learning and developing ideas over the final product. I use it to emphasise intrinsic motivation, articulating my students' expectations of the assessment, which is different from focusing on extrinsic motivation, where students can end up focusing on grades over learning, which may mean they are more likely to cheat.

    In the data excerpt above, a participant describes how generative AI might be used to foreground the learning process through reflection and intrinsically driven learning, instead of a focus on quantifying an outcome of the learning process or the score achieved in an assessment task. Some participants, however, expressed reservations about the benefits of generative AI.
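
    The generate-then-critique loop described in the excerpt above can be made concrete in code. The sketch below is a hypothetical illustration only: participants reported working in the ChatGPT interface rather than via an API, and the model name, prompts, and helper functions here are assumptions, not any participant's actual workflow. Every generated draft would still require human review before classroom use.

    ```python
    # Hypothetical sketch of the generate-then-critique loop a participant
    # describes. Participants used the ChatGPT interface; the OpenAI API,
    # model name, and prompts below are illustrative assumptions only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def draft_mcqs(topic: str, n: int = 5) -> str:
        """First pass: ask the model to draft multiple-choice questions."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"Draft {n} multiple-choice questions on {topic}, "
                           "each with four options and the correct answer marked.",
            }],
        )
        return response.choices[0].message.content

    def critique_mcqs(mcqs: str, criteria: str) -> str:
        """Second pass: have the model critique the draft against
        lecturer-set criteria, mirroring the validation step in the excerpt.
        The output is advisory; a human marker makes the final call."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"Critique these questions against the criteria "
                           f"'{criteria}' and suggest improvements:\n\n{mcqs}",
            }],
        )
        return response.choices[0].message.content

    draft = draft_mcqs("formative assessment in higher education")
    print(critique_mcqs(draft, "one unambiguous key, plausible distractors"))
    ```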

    Three participants reported not using ChatGPT-4.0 in assessments, highlighting a lack of familiarity with its potential use in assessments or comfort with the technology. Another participant pointed out the time-consuming nature of crafting effective prompts to generate appropriate questions and answers. This respondent emphasised that certain questions, particularly those for formative evaluation, demand a level of cognitive complexity that ChatGPT-4.0 cannot yet support, making the generative AI less relevant to course assessment expectations.

    Participants identified specific AI language models they used to support student assessments. Notably, four participants detailed their experiences, citing the use of Azure OpenAI. In contrast, some participants had begun to use ChatGPT-4.0 to draft and structure assessments and evaluate the quality of written assessments. A participant also noted, "It does occasionally produce nonsensical content for these purposes." Thus, there appeared to be some awareness of the limitations of generative AI content creation.

    Tolerance for AI similarity levels in students' work

    The study also explored participants' views on the allowable AI similarity levels in students' work (see Table 1). Responses varied widely, with about a third of participants (n = 9) suggesting an acceptable similarity level of 10% or less. Another group (n = 5) supported a 15-20% similarity level, while a few participants (n = 3) indicated their zero tolerance for any level of similarity. Some participants believed the allowable similarity level should align with university policy while supporting the idea that each case of suspected plagiarism be managed on a case-by-case basis.

    Some participants suggested the need for a nuanced approach to understanding and acting on similarity reports, especially in the absence of clear university policy guidelines on how to manage this issue:

    Examples from other universities I've seen involve looking at the entire Turnitin report and using one's own judgment (e.g. explanations in broad strokes, short sentences with simple structure, and lack of references are usually markers of AI-generated content).

    The similarity index says very little. There is a need to see what is similar. I look at that before asking students to address the similarities.
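
    Translated into procedural terms, the thresholds participants reported might inform a triage step such as the hypothetical sketch below. The function name, default threshold, and routing strings are illustrative assumptions, not institutional policy; as the participants stress, the similarity index alone "says very little", so every flag should lead to human inspection of the full report rather than an automatic sanction.

    ```python
    # Hypothetical triage sketch reflecting the tolerance levels participants
    # reported (<=10%, 15-20%, and zero tolerance); not an actual policy.
    def triage_similarity(similarity_pct: float, threshold_pct: float = 10.0) -> str:
        """Route a similarity score to an action. Every flag leads to human
        inspection of the full report, never to an automatic sanction."""
        if threshold_pct == 0.0 and similarity_pct > 0.0:
            return "flag: zero-tolerance marker, inspect full report"
        if similarity_pct <= threshold_pct:
            return "accept: within the marker's declared tolerance"
        return "flag: above tolerance, inspect full report case by case"

    # The three positions in the data: <=10% (n = 9), 15-20% (n = 5), zero (n = 3).
    for threshold in (10.0, 20.0, 0.0):
        print(threshold, "->", triage_similarity(12.5, threshold))
    ```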

    A circumspect approach to the use of generative AI

    The participant excerpt below reveals the tentativeness with which generative AI is being considered.

    This is what I suggested to my class when this first came up. While I absolutely support your engaging with all the new technology out there, please take note of this warning: I am already hearing of instances of students getting zero grades and being accused of plagiarism due to using AI. AI can be a wonderful tool, but you need to be very aware of its shortcomings. The biggest issue is that it is not a fact checker (it is a language model, not a search engine-you need to do some careful research to understand what it is and is not capable of), and so you must be careful when using anything generated there. The biggest issue in terms of legal consequences is that these AI models are known to produce fake references-which is plagiarism. Please see my warning in the red above under reference. I think these models can be a great way of getting your thoughts in order, and then overcoming writer's block . . . however . . . it is vital and necessary that you then fact check everything and edit the output into your own words, and most importantly, triple check the references. This is going to become a serious issue in the next few months, so please protect yourselves by inserting page numbers into your in-text references in every possible case that you can-even if you remove them for the final version. This way, you will have evidence of having found that material yourself. This is going to be a vital skill in the coming months and years-how do you show that the work you have written is yours? Pay attention to that, as it is easy to do by solid critical analysis, and carefully using page numbers to further support the statements that are made in your writing.

    The level of awareness of the risks that generative AI presents for students who exploit its use is evidenced above. There is a call for students to be cautioned about risks such as AI-generated content inaccuracy. Participants appeared to acknowledge that AI language models might have significant potential in academic assessments, but they also voiced substantial concerns. A primary worry related to the accuracy and reliability of generative AI, as three participants highlighted the issue of "hallucination," where AI generates incorrect or non-existent information.

    Of particular concern was students' inadvertent criminality and the charge of plagiarism. Generative AI has the ability to distil ideas from extant literature, and students might then naively present such knowledge claims as their own or original thoughts. This practice might well incriminate them. The tentativeness and somewhat contradictory reaction of some participants is reflected below.

    The potential of AI cannot be glossed over, and ultimately, we will need to embrace it as an oncoming wave of change. However, the use of AI must be curtailed by effective policy, and academics and students must be trained concerning its potentials and pitfalls.

    This mixed sentiment underscores the anxiety that generative AI has created. There appears to be a recognition of its potential and of the enormity of the change or disruption it might cause. The data also evidence the plea for continuing professional development that this technological innovation has necessitated. There is also the idea that university policy could be developed in a fashion that might curtail the use of generative AI. This particular utterance highlights the level of uncertainty, and the uneven knowledgeability, of some participants. What was clear, and to be expected, was the very diverse proclivities for, and knowledge of, generative AI among the university academics in this sample, with some participants suggesting that AI language models are neutral tools.

    Monitoring the ethical use of AI was emphasised:

    Lecturers and supervisors need to be at the forefront of new technologies and must consider ethical considerations, as students will most likely be using AI in their submissions for assessment. More workshops and seminars will be a valuable aid to lecturers going forward as many are still at the experimental stage with AI.

    The issue of ethicality that participants raised concerns dishonest practices as it relates to students presenting artefacts for assessment that are not the products of their own efforts. This particular concern extends to university academics' abilities to discern the difference between students' authentic efforts and machine-generated content.

    To address these challenges and leverage AI's opportunities, participants recommended increasing awareness and training through discussions and seminars. A participant noted:

    I think that this is a hot topic of discussion that needs to be conducted from a nuanced perspective and not a "one size fits all" strategy. To engage with this topic in an intensive and meaningful manner, the subject domain should be given priority and not the topic of AI.

    The above excerpt is another somewhat incoherent response from a participant who makes mention of what should take priority in staff development. While there was a plea for institutional intervention as it relates to policy development, there was also a recognition of institutional inertia:

    Institutions are by nature slow to move and catch up with the latest developments. The disruptive nature of AI is such that there is a need to move with speed, to enhance and at the same time protect our academic offering.

    Some participants presented a more optimistic perspective, stressing the importance of universities being proactive:

    The University must be ahead of the curve here with assessments and usage and not ban the use of AI.

    Some participants expressed a need for more experience with generative AI before forming opinions, highlighting a gap in familiarity. Another voiced significant concern about the time required to learn and integrate new AI technologies:

    My biggest worry is that none of us seem to have the time to engage with this huge growth in tech fully. University policy and procedure is going to lag years behind the development of these things, and how academics are best to engage with it pedagogically for their students and their research will be so hit-and-miss as a result.

    Linking these insights to the UTAUT framework, the perceived benefits and challenges of AI in assessments align with performance expectancy and effort expectancy constructs. Social influence is evident in the call for institutional policies and training to foster ethical and effective AI use. Finally, facilitating conditions are crucial, as participants emphasised the need for university support through policy development, training programmes, and resources to successfully integrate AI in academic settings.

    Rudimentary practical guidance on the use of AI language models in assessments

    Participants suggested two primary guidelines to help students use AI in their assessments: avoiding plagiarism and properly acknowledging AI sources. The first guideline focuses on ensuring that students accurately acknowledge the sources they use to avoid plagiarism. Participants emphasised the importance of teaching students "how to reference AI and how to acknowledge content from AI."

    Other participants suggested that students use AI as a starting point in their work:

    Use it to get an initial "ballpark idea"-then throw it away-similar to rapid prototyping and develop your solution from "scratch."

    Some participants indicated that their only guideline for students was to warn them against the dangers of using AI. Some participants mentioned adding a warning against AI use that should be clearly scripted in course outlines and course guides. Other participants also cautioned against using AI to generate responses and suggested that academics apply "existing policies on cheating and plagiarism" in the absence of AI-specific guidelines.

    While some participants suggested ideas for the best use of AI, others were of the view that it should be prohibited entirely, especially in summative assessments. This latter position was associated with experiences of plagiarism in student submissions during the hard lockdown brought on by COVID-19. Although they advocated for strict policy guidelines, no specific options were provided. Limited familiarity with AI platforms was another reason cited for the lack of guidelines.

    A participant unashamedly noted:

    I have not started using this tool. But I think with proper training, I will be able to follow necessary guidelines.

    A participant who had explored generative AI's capabilities highlighted that while they were remarkable, in certain disciplinary contexts, it had particular limitations:

    Its capabilities are quite impressive in certain contexts. From a software development perspective, it is quite limited. I have explored its effectiveness in solving software-related problems, and the big impediment to its success is its inability to manage issues related to interface design. While it is pretty good for solving algorithmic problems and certainly does contribute from a computer programming/problem-solving perspective, it does not handle the human interface issues all that well. These are related to menu design and system security. Most modern software systems are tightly integrated into the interface, and the interface design is unique to the requirements of the specific system being built. So, interface design is not consistent across all platforms and all domains of development, and this is where AI tools will struggle to identify human requirements/preferences.

     

    Discussion of findings

    The UTAUT model (Venkatesh et al., 2016) is helpful in understanding academic staff reactions to the somewhat dramatic arrival of generative AI models and the unprecedented pace at which generative AI has moved in the last two years. Although the extent of the potential of generative AI is yet to be discovered, performance expectancy, as it relates to the potential for a positive impact on academic work, appeared to be a prevalent sentiment amongst the sample of participants. Particular benefits cited included enhanced assessment practices, a focus on learning as a process, and the application of AI tools more intensively as instruments for better quality formative feedback. These are distinct potential leverage points that are worthy of taking up as continuing professional development foci for higher education institutions. The receptiveness displayed by participants to generative AI as a sophisticated technology that might enhance personal teaching profiles despite several participants' somewhat limited knowledge of this technology indicates what Venkatesh et al. (2016) framed as the social influence of technology. What is evidenced is that generative AI has appeal and that university academics want to present as familiar with this technology and open to exploring its potential for their practice. The idea of not coming across as technology-averse is compelling-presenting a disposition of not being obstructionist but receptive is powerful in accruing social capital and improving social receptiveness in a higher education space where individualism has become a growing ethos. Thus, this leverage point might well be harnessed by perceptive leadership and those with foresight as to generative AI's immediate and long-term implications for the teaching and learning enterprise.

    In a context where generative AI's functionality radically challenges university academics' traditional skill sets and control over teaching and assessment protocols, it is not unusual to hear calls for perspicacity and circumspection in adopting AI technology. Concerns emerged about unwitting incriminatory practices by students and the legal processes that might ensue from having to follow through with criminal prosecution of transgressing students. Effort expectancy (Venkatesh et al., 2016) might be a deterrent factor as it relates to fast-tracking the acquisition of new competence sets and becoming familiar with the new policies and procedures that the new milieu might engineer. What is also evident is that the inadequacy and inflexibility of existing institutional policy frameworks are likely to complicate the work of university academics. In essence, Venkatesh et al.'s facilitating conditions for technology adoption, including comprehensive policy development, constitute an area in need of urgent research.

    The findings of the study align with the constructs of the UTAUT framework. The advantages highlighted by participants related to performance expectancy, where users perceived that the technology would enhance their job performance. The ease or difficulty of using ChatGPT 3.5 or 4.0 aligns with effort expectancy, where the time-consuming nature of prompt creation can be a barrier. Social influence was evident given that some participants had adopted the technology based on its benefits, while others remained hesitant. Finally, facilitating conditions, such as the availability of support and resources to use AI in assessments effectively, also played a crucial role in its adoption and usage.

     

    Conclusion

    This article reported on an exploratory study conducted in the context of radical change in higher education teaching. We acknowledge its methodological limitations regarding sample size and data collection methods; however, it lays the groundwork for further inquiry into best practices related to curriculum adaptation, teaching, and assessment practices.

    The objectives of this study focused on university academics' perspectives on assessment in the context of radical innovations in AI technology (ChatGPT 4.0 in the current case) with a view to establishing proclivities for technology adoption and the extent of instructional responses and practice.

    The findings reveal a complex landscape of perspectives on AI use in academic assessments, with much uncertainty about AI's specific applicability. There is a belief that AI could make student assessment processes more efficient and improve the quality of assessments by emphasising intrinsic motivation and creating a more effective formative feedback process.

    While there is recognition of AI's potential, there are significant concerns about its reliability, ethical implications, and the need for robust institutional policies and training. Note that university academics' recognition of AI's potential might be regarded as somewhat ungrounded and intuitive, as opposed to concrete reflections from personal teaching practice. While technophile academics were keen to exploit the potential they perceived, and zealously embraced the affordances of ChatGPT 4.0 in this instance, technophobes in the sample, while acknowledging AI's imminent possibilities, reported somewhat superficial encounters with ChatGPT 4.0.

    A distinct challenge in the context of teaching at a higher education institution is gauging the veracity of students' submissions of work to be assessed. It remains a serious cause for concern and angst that students might not fully understand the risks associated with the unethical use of ChatGPT 4.0. What has become evident is that even the most astute academics (in whatever field) were urging the development of policy and practice guidelines in this new arena.

    Aligning these insights with the UTAUT framework provides a structured understanding of the factors influencing AI adoption in academia. To harness the benefits of AI while mitigating its risks, universities must develop comprehensive policies, offer training programmes, and provide resources to support staff and students to effectively integrate AI tools into their academic practices.

     

    References

    Abbad, M. M. (2021). Using the UTAUT model to understand students' usage of e-learning systems in developing countries. Education and Information Technologies, 26(6), 7205-7224. https://doi.org/10.1007/s10639-021-10573-5

    Abdipour, N., Rakhshanderou, S., & Ghaffari, M. (2024). Older people's technophilia and technophobia: Methodological research on the psychometric evaluation of the TechPH Scale among an Iranian population. https://doi.org/10.21203/rs.3.rs-4024096/v1

    Akinnuwesi, B. A., Uzoka, F.-M. E., Fashoto, S. G., Mbunge, E., Odumabo, A., Amusa, O. O., Okpeku, M., & Owolabi, O. (2022). A modified UTAUT model for the acceptance and use of digital technology for tackling COVID-19. Sustainable Operations and Computers, 3, 118-135. https://doi.org/10.1016/j.susoc.2021.12.001

    Akintande, O. J. (2024). Artificial versus natural intelligence: Overcoming students' cheating likelihood with artificial intelligence tools during virtual assessment. Future in Educational Research, 2(2). https://doi.org/10.1002/fer3.33

    Ali, J. K. M., Shamsan, M. A. A., Hezam, T. A., & Mohammed, A. A. (2023). Impact of ChatGPT on learning motivation: Teachers and students' voices. Journal of English Studies in Arabia Felix, 2(1), 41-49. https://doi.org/10.56540/jesaf.v2i1.51

    Alier, M., García-Peñalvo, F., & Camba, J. D. (2024). Generative artificial intelligence in education: From deceptive to disruptive. International Journal of Interactive Multimedia and Artificial Intelligence, 8(5). https://doi.org/10.9781/ijimai.2024.02.011

    Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1). https://doi.org/10.61969/jai.1337500

    Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86(2), 369-385. https://doi.org/10.1007/s10734-022-00937-2

    Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187-6203. https://doi.org/10.1080/10494820.2023.2253861

    Clarke, V., Braun, V., & Hayfield, N. (2015). Thematic analysis. In J. A. Smith (Ed.), Qualitative psychology: A practical guide to research methods (pp. 222-248). SAGE.

    Cohen, L., Manion, L., & Morrison, K. (2017). Research methods in education (8th ed.). Routledge.

    Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239. https://doi.org/10.1080/14703297.2023.2190148

    Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00392-8

    El Khoury, E. (2024). Mapping the response to AI and its impact on assessment redesign through document analysis. The Assessment Review, 5(1). https://assessatcuny.commons.gc.cuny.edu/2024/03/

    Gamage, K. A., Dehideniya, S. C., Xu, Z., & Tang, X. (2023). ChatGPT and higher education assessments: More opportunities than concerns? Journal of Applied Learning and Teaching, 6(2). https://doi.org/10.37074/jalt.2023.6.2.32

    Girasa, R. (2020). Artificial intelligence as a disruptive technology: Economic transformation and government regulation. Palgrave Macmillan.

    González-Calatayud, V., Prendes-Espinosa, P., & Roig-Vila, R. (2021). Artificial intelligence for student assessment: A systematic review. Applied Sciences, 11(12), Article 5467. https://doi.org/10.3390/app11125467

    Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(3). https://doi.org/10.1177/00336882231162868

    Marquis, Y., Oladoyinbo, T. O., Olabanji, S. O., Olaniyi, O. O., & Ajayi, S. A. (2024). Proliferation of AI tools: A multifaceted evaluation of user perceptions and emerging trend. Asian Journal of Advanced Research and Reports, 18(1), 30-55. https://doi.org/10.9734/AJARR/2024/v18i1596

    Musyaffi, A. M., Adha, M. A., Mukhibad, H., & Oli, M. C. (2024). Improving students' openness to artificial intelligence through risk awareness and digital literacy: Evidence from a developing country. Social Sciences & Humanities Open, 10, 101168. https://doi.org/10.1016/j.ssaho.2024.101168

    Sun, D.-W. (2023). Urgent need for ethical policies to prevent the proliferation of AI-generated texts in scientific papers. Food and Bioprocess Technology, 16, 941-943. https://doi.org/10.1007/s11947-023-03046-9

    Tarisayi, K. S. (2024). ChatGPT use in universities in South Africa through a socio-technical lens. Cogent Education, 11(1), 2295654. https://doi.org/10.1080/2331186X.2023.2295654

    Trust, T., Whalen, J., & Mouza, C. (2023). Editorial: ChatGPT: Challenges, opportunities, and implications for teacher education. Contemporary Issues in Technology and Teacher Education, 23(1), 1-23. https://citejournal.org/volume-23/issue-1-23/editorial/editorial-chatgpt-challenges-opportunities-and-implications-for-teacher-education/

    Venkatesh, V., Thong, J. Y., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems, 17(5), 328-376. https://doi.org/10.17705/1jais.00428

    Yang, H. (2023). How I use ChatGPT responsibly in my teaching. Nature. https://doi.org/10.1038/d41586-023-01026-9

     

     

    Received: 31 July 2024
    Accepted: 23 November 2025