    Journal of Education (University of KwaZulu-Natal)

    Online version ISSN 2520-9868; print version ISSN 0259-479X

    Journal of Education, no. 98, Durban, 2025

    https://doi.org/10.17159/2520-9868/i98a02 

    ARTICLES

     

    Artificial intelligence in education: Considerations for South African schooling

     

     

    Samira CrossI; Jennifer FeldmanII

    IEducation Policy Studies Department, Faculty of Education, Stellenbosch University, Stellenbosch, South Africa. srcross@sun.ac.za. https://orcid.org/0009-0001-3960-9812
    IIEducation Policy Studies Department, Faculty of Education, Stellenbosch University, Stellenbosch, South Africa. jfeldman@sun.ac.za. https://orcid.org/0000-0002-9367-0980

     

     


    ABSTRACT

    Artificial intelligence in education (AIEd) has been an established research field for over 30 years. However, the recent introduction of free, readily available AI in the form of large language model-based chatbots, such as ChatGPT, has sparked controversy and change, specifically within the field of education. In response to these developments, this article presents a discussion of AIEd. The article starts with a broad overview of AIEd, followed by ways in which researchers foresee AIEd being used in schooling. Potential harms in relation to the use of AIEd and critiques of current AIEd usage are also discussed, followed by a discussion of AIEd and its potential impact on South African schooling. The article concludes by noting that it is near impossible not to adopt or engage with new digital developments; however, educators need to equip both themselves and their students to navigate the complexities involved in engaging with AIEd, taking careful note of privacy and confidentiality laws.

    Keywords: technology in schools, artificial intelligence in education (AIEd), generative AI, South African schools


     

     

    Introduction

    While artificial intelligence (AI) in education has been an established research field for over 30 years, recent rapid developments have catapulted interest in the field and placed a spotlight on the possible future roles of AI in education (Hwang et al., 2020). The recent introduction and virality of free, readily available AI in the form of chatbots, such as ChatGPT, have sparked controversy and change in educational environments (Halaweh, 2023).

    ChatGPT, which was made available to the public on 30 November 2022, can respond to input and produce human-like text on a wide range of topics. It can be distinguished from prior chatbots by its recall of earlier input and output for continuous dialogue, rather than the generation of scripted, predictable responses. From there, the landscape evolved rapidly. While ChatGPT initially held a prominent position, it was soon joined by numerous other chatbots: products and services that use OpenAI's generative pre-trained transformer (GPT) model as a cornerstone (such as Microsoft Bing and ChatPDF).

    Simultaneously, other companies, such as Anthropic (Claude), Google (Bard, now Gemini, powered by the Gemini model), and the Singapore-based HIX.AI, ventured into developing and releasing their own AI models integrated into their own interfaces. These rose in popularity and developed niche selling points and areas of expertise, often tailored to specific use cases such as assistance with writing or coding. OpenAI later released ChatGPT Plus, a paid version of the interface that gave users (amongst other things) faster response times, priority access, a newer version of the GPT model, and the functionality to train their own GPT. The company continued updating the model, releasing a limited version of GPT-4o to free users in early 2024 with functionality that surpassed the initial ChatGPT Plus, while paying users received even more advanced functionality.

    The presence of this new generation of chatbots has opened the conversation around the use of generative AI (GenAI) in teaching and learning. At the time of ChatGPT's release, educational institutions in the Global North were in the middle of their academic year and were confronted immediately with challenges of how to manage students' assessments in relation to the availability of chatbots. However, educational institutions in the Global South had completed their academic year and had time to consider guidelines and possible responses to managing chatbots in assessments for the new academic year.

    In response to these developments, this article presents a crucial discussion on AI in education (AIEd), specifically focusing on AI in schools. The article starts by presenting a broad overview of AIEd, followed by ways in which researchers foresee AIEd being used. Potential harms in relation to the use of AIEd and critiques of current AIEd usage are also discussed. This is followed by a discussion of AIEd and its potential impact on South African education. The article concludes by noting that it is near impossible not to adopt or engage with new digital developments; however, educators need to equip both themselves and their students to navigate the complexities involved in engaging with AIEd, taking careful note of privacy and confidentiality laws.

     

    An overview of AI in education

    The idea of machines being able to perform processes that could be considered intelligent was first popularised in a publication by Alan Turing in 1950 (Cope et al., 2020). From this, the term "artificial intelligence" was coined at a 1955 conference and defined as a machine behaving in ways that could be considered intelligent if a human were to perform the same behaviour (Baksa et al., 2024; Cope et al., 2020). Early applications of AI fell into the category of machine learning, a subset of AI that uses a single layer of statistical relationships (i.e. observable patterns) to make predictions and decisions. From there, new ways of modelling and training AI systems began to emerge, such as deep learning and neural networks, which utilise multi-level statistical relationships (i.e. patterns in patterns; Cope et al., 2020).

    The definition of AI remains contentious, but for the purpose of this article, AI is considered to be any computer system capable of performing complex tasks and making decisions, thus exhibiting an ability to learn and solve problems at a level comparable to or surpassing human capability (Chen et al., 2020; Goralski & Tan, 2020). Currently, GenAI, the type of AI that returned the field to prominence in mainstream conversations, is defined by Lee et al. (2024) as AI capable of producing new content on command. Chan and Hu (2023) expanded on this definition, describing it as any group of machine learning algorithms that can learn patterns and generate new data (such as text and code, sound, images, and video) that mimics the pre-existing datasets it learnt from. It should be noted that "learning" in relation to computer systems is a metaphor. At present, AI learning is less like human learning and more akin to training. While more complex in actuality, AI is essentially created by training a system on vast amounts of data and feedback that is continually and rapidly processed through adaptive algorithms to identify and recognise certain patterns. The data on which an AI is trained determines its subsequent output. Thus, despite its complexity, AI is only a set of bounded, mathematical rules (Selwyn, 2022), with the quality of output depending heavily on the accuracy and contextual relevance of input data.
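    The point that an AI system is only a set of bounded rules derived from its training data can be illustrated with a deliberately simplified sketch. The toy "model" below is hypothetical and vastly simpler than any real language model: it merely counts which word follows which in its training corpus, so every prediction it makes is fully determined by the patterns in that data.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Learn the 'patterns': counts of which word follows which."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict(model, word):
    """Output the most frequent follower seen in training, if any."""
    if word not in model:
        return None  # the model knows nothing outside its training data
    return model[word].most_common(1)[0][0]

corpus = ["the learner reads the book", "the learner writes the essay"]
model = train(corpus)
print(predict(model, "the"))      # prints "learner" (seen twice in training)
print(predict(model, "holiday"))  # prints "None" (never seen; no rule exists)
```

    Scaled up by many orders of magnitude and with far richer statistical machinery, this remains the essential shape of GenAI: the quality and coverage of its output depend entirely on what the training data contained.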

    In education, AI technologies are termed "AIEd" when implemented with the intent to increase the efficiency and efficacy of schooling (Eynon, 2024). Currently, the success of AIEd tools is evaluated on their ability to improve student achievement, including meeting policy-prescribed learning outcomes. This technology can be used to support or enact teaching and learning processes, as well as educational decision-making processes. There are various ways in which AIEd can be applied towards these goals. For example, various forms of AIEd systems could monitor the behaviour and actions of both teachers and learners and, using this data, build predictive models, identify trends in learning, assess and provide feedback on learners' performance, and provide recommendations for teaching and learning materials based on learners' needs and learning preferences, to name a few (Triansyah et al., 2023).

    Most AIEd research globally originates in contexts where learners have sufficient access to both devices and the internet. For example, China currently leads research into the use of AIEd products owing to a large student population, exacting standards for education, and a teacher shortage, with the United States of America following closely behind (Alam, 2021). Countries aim to secure power and remain globally competitive through academic results (the production of economically viable adults) by tapping into a lucrative EdTech market to support student outputs (Nemorin et al., 2022).

     

    Some potential uses of AIEd

    Effective and efficient teaching & learning

    The bulk of research around AIEd centres on assessment and the enhancement of teaching and learning (Alam, 2021), with the aims of maximising efficiency (the delivery of curricula in line with the capability of learners) and efficacy (the rate of knowledge uptake, retention, and achievement in standardised assessments) (Chen et al., 2020; Perrotta & Selwyn, 2019). Within the bounds of efficiency and efficacy lies an attempt to expand the distribution of and access to education to all children, to which AIEd may offer a variety of solutions, a few of which we highlight below.

    Chen et al. (2020) and Gocen and Aydemir (2021) suggested that AIEd could allow learners with disabilities to access education. For instance, virtual and augmented reality can be harnessed for physically disabled learners (Hsu, 2020), and optical character recognition can be used for text-to-Braille conversion. Further, AIEd could allow learners in remote areas (for example, where teachers are not available), and those out of school (such as learners hospitalised for extended periods or those in war-torn or disaster areas), to access quality education (Chen et al., 2020). AIEd might also be useful in areas with teacher or knowledge shortages (Zhai et al., 2021). For example, intelligent tutoring software could supplement or provide curriculum content when underqualified teachers teach subjects they are not equipped for.

    In addition, researchers predict that the development of AI will impact the education system and how teaching and learning take place (Alam, 2021). This will involve interaction between AI systems, educators, and students and will require acquainting all learners with the implicit and explicit standards governing AI usage (Bearman & Ajjawi, 2023). There are a few formats in which AI can be incorporated into classrooms. Firstly, this could be AI-led, with learners as passive recipients of pre-selected knowledge, who merely execute learning activities. Secondly, there is AI-supported integration, where learners are collaborators in learner-centred, personalised learning. Thirdly, the incorporation of AI in the classroom could be learner led, encouraging agency and knowledge augmentation. This requires complex collaboration between learners, teachers, and technology (Sanusi et al., 2022). Furthermore, considering the case of complete AIEd classrooms (where learners learn exclusively through AIEd programmes) in the future, teachers' roles may change to that of managers, motivators, and mentors, focusing on teaching social skills and providing life guidance rather than content (Gocen & Aydemir, 2021; Sellar & Hogan, 2019).

    It is not only AI specifically developed for education that currently influences schooling. With freely available AI (such as ChatGPT, DALL-E 2, Grammarly Pro, Google Translate, as well as others) becoming commonplace in some educational contexts, teachers need to empower learners with appropriate, flexible skillsets and knowledge about how to use AI without becoming overly reliant on it (Chen et al., 2020; Sanusi et al., 2022; Zhai et al., 2021). Teaching learners the appropriate skills for a world of AI technology is of vital importance, especially in developing nations such as South Africa. Failure to do so could increase both social and economic inequality (Hart, 2023). In addition, teachers must discuss the ethical use of AI (involving consideration of what AI could mean for society as a whole) within the teaching and learning context. These aspects may require additional training and professional development for educators, which should be prioritised going forward.

    Automation of teaching & learning

    Considering AI in relation to the teaching and learning process, it is possible that some aspects of teachers' workload could be automated and delegated to AI (Chen et al., 2020; Gocen & Aydemir, 2021). The automation of administration, assessment and feedback, and preparation could allow teachers to spend more contact time with learners, which may facilitate improved learning outcomes. For example, AI could be used to set and mark basic quizzes, and AI tools such as benchmarking and rubrics could be used to assess learners' work. AIEd could also be used to complete administrative tasks for various role players in education. For example, facial recognition can automate registration and reporting on learner attendance, and body language and behavioural monitoring have made the automation of assessment invigilation possible (Gocen & Aydemir, 2021; Ouyang & Jiao, 2021).

    Another example of AIEd automation involves intelligent tutoring systems (ITS) and the assessment systems, such as learning analytics, embedded therein. Learning analytics, the identification of patterns in data and the recommendation of methods of meeting learner needs, could potentially be useful to teachers in identifying and assisting learners who are struggling with various aspects of the work or in providing support opportunities for learners with learning difficulties (Alam, 2021; Triansyah et al., 2023). In addition, researchers (Chen et al., 2020; Goralski & Tan, 2020; Ouyang & Jiao, 2021) have suggested that the analytical and predictive aspects of AI make it a useful tool for evidence-based decision making by teachers with regard to how learners are completing and understanding work, as well as for predicting future performance. Gocen and Aydemir (2021) further suggested that AIEd could be used to make predictions and recommendations with regard to how learners could be streamed into academic or vocational pathways and into possible career paths. This will be discussed later, as the accuracy and merit of AIEd sorting and making assumptions about learners is, for good reasons, deeply contested.
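    At its simplest, the pattern identification that learning analytics performs can be sketched as a threshold rule over learners' marks. The example below is purely illustrative: the data, learner names, threshold, and window are invented, and real learning-analytics systems use far richer statistical models.

```python
def flag_struggling(scores, threshold=50, window=3):
    """Flag learners whose average over their last `window` marks falls below `threshold`."""
    flagged = []
    for learner, marks in scores.items():
        recent = marks[-window:]
        if sum(recent) / len(recent) < threshold:
            flagged.append(learner)
    return sorted(flagged)

# Invented sample data: marks over four assessments.
scores = {
    "learner_1": [70, 65, 72, 68],
    "learner_2": [55, 48, 42, 40],  # declining trend
}
print(flag_struggling(scores))  # prints "['learner_2']"
```

    Even such a trivial rule shows why context matters: the choice of threshold and window is a human judgement encoded into the system, not something the data itself decides.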

    Personalised education

    Another specific application of AIEd considered by Alam (2021) and Ouyang and Jiao (2021) is that of personalised education, defined as the customisation and adaptation of learning to specific and changing learner needs. As a field, personalised education aims to provide learner-centred (explorative and collective), interactive instruction, using gamification as well as personalised assessment and rapid feedback to improve learners' experience of education (Chen et al., 2020; Halaweh, 2023). Mobile phone-based digital education is currently the prevalent implementation of personalised learning and dominates ITS (Chen et al., 2020; Perrotta & Selwyn, 2019). Such systems are most common in mathematics and language fields (for example, Socratic and Duolingo), largely because subjects involving, for example, sports and the arts are not considered lucrative enough for development at this stage (Alam, 2021; Zhai et al., 2021).

    However, a concern levelled at personalised education and learning analytics is that these approaches do not guarantee high-quality education or the achievement of specific learning outcomes (du Boulay, 2000; Selwyn, 2016, in Ouyang & Jiao, 2021). Personalised learning has also drawn criticism from Triansyah et al. (2023) for potentially emphasising and favouring the interests of technology companies and market demands over learners' best interests. There is also concern that personalised learning is overly individualistic and may lead to learners working independently and at their own pace, undermining the social development aspects of schooling (Sellar & Hogan, 2019). In addition, Zhai et al. (2021, p. 13) noted that by engaging in personalised learning, learners may become reluctant to engage in the "knowledge processing work" needed to promote in-depth learning.

     

    Concerns about AIEd

    For every potential benefit of AIEd, there are also potential risks. Discussing AIEd, Halaweh (2023) and Sellar and Hogan (2019) noted that there remains limited empirical evidence and research on the benefits and effectiveness of AI in education due to the recent and rapid nature of technological developments. Additionally, much of the existing AIEd research involves the investigation of interactions with AI within developed countries and environments structured or scripted by teachers or researchers (Liu & Ma, 2023).

    Privatised nature of AIEd

    The AIEd currently available is the property of private companies. To train and adapt personalised education systems, private companies collect, store, analyse, and report on vast amounts of learner data (Gocen & Aydemir, 2021; Sellar & Hogan, 2019). Therefore, the use of AI in education will mostly benefit large, private EdTech companies, such as Google and Microsoft, through which data are collected and automation is provided. In addition, the ownership of AIEd by large private corporations could increase inequality if access to educational tools were hindered through cost to consumers and profiteering: those who are least likely to access quality education may be further disadvantaged if content and services are kept behind a paywall. Without the correct control mechanisms in place, learners' data could be sold for profit, as companies such as Microsoft, Google, and Facebook have done in the past (Cox & Kassem, 2014). In this manner, children, through their data, become marketable products within the personalised education context, and education could become even further privatised.

    Moreover, within companies, the use of black-box AI raises concerns about how, and for what purposes, data might be used (Triansyah et al., 2023). The internal workings of AIEd systems might not even be fully known to product developers, because it can be hard to determine (especially with deep learning) what is actually being used and picked up by systems. Thus, there is a call for publicly accessible AIEd software that can be reviewed, audited, and tested by independent, interested persons (Sellar & Hogan, 2019). However, given that AIEd is usually profit-driven, cutting-edge, proprietary software is unlikely to be made public by large companies.

    In addition, when the nature of AIEd training is hidden, there is a risk that data collected in one context could be used to train AI for completely different contexts, which poses ethical concerns related to the impact of use by learners and their journey into broader society (Sanusi et al., 2022). Experts on AI ethics (Klimova et al., 2023; Selwyn, 2022; Triansyah et al., 2023) currently debate the extent to which companies, and even AI systems themselves, can be held accountable for educational outcomes, especially considering that AI systems are unable to make complex moral, ethical, and aesthetic judgements in the same way that humans can.

    There are also questions about who owns the data gathered and what may be done with it because most people are unaware of the type and scope of data collected on them or the models used to analyse the data (Ouyang & Jiao, 2021; Sellar & Hogan, 2019).

    Biases in AIEd

    Because AI systems are trained on human data, and humans are not immune to bias, Chen et al. (2020) and Halaweh (2023) have maintained that AIEd and its outcomes are inherently biased. Biases in AIEd, for the most part, stem from training data originating from Western, neurotypical, able-bodied people, meaning that already marginalised groups (including women) could be placed at an even greater disadvantage (Zhai et al., 2021). Selwyn (2022), in particular, has been vocal about evidence of this bias, citing research showing, for example, that non-native English students may be categorised as more likely to be cheating, and that higher grades may be awarded for work submitted by historically advantaged learners. Due to the black-box nature of many AI systems, individuals do not have the power to eliminate bias entirely. Instead, it is crucial to recognise this aspect and critically engage with its presence and impact when using AI systems. In addition, researchers contend that avoidance of AIEd would not remove or undo the micro-aggressions (biases) already faced by marginalised groups in schools because "socially engineered inequality" (Triansyah et al., 2023, p. 624) exists that even completely inclusive AIEd cannot fix (Sellar & Hogan, 2019; Selwyn, 2022).
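    The mechanism by which historical bias propagates into AI output can be made concrete with a deliberately minimal, hypothetical sketch: a "model" fitted to historically skewed grades simply reproduces the skew in its predictions. All data and group labels below are invented for illustration and do not describe any actual system.

```python
def fit_group_means(records):
    """'Train' by averaging past grades per group: this average is the learned rule."""
    totals = {}
    for group, grade in records:
        total, count = totals.get(group, (0, 0))
        totals[group] = (total + grade, count + 1)
    return {group: total / count for group, (total, count) in totals.items()}

def predict_grade(model, group):
    """Predict a new learner's grade from group membership alone."""
    return model[group]

# Invented history in which group B was systematically under-marked.
history = [("A", 80), ("A", 78), ("B", 60), ("B", 62)]
model = fit_group_means(history)
print(predict_grade(model, "A"))  # prints "79.0": the historical advantage persists
print(predict_grade(model, "B"))  # prints "61.0": the historical penalty persists
```

    Nothing in the fitting step is malicious; the bias enters entirely through the training data, which is why auditing data provenance matters as much as auditing the algorithm itself.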

    These biases can also feed into decisions, based on AI-gathered data, that involve streaming learners into particular fields. Even with all available data, AI cannot determine learners' futures with any accuracy and should not be used by schools under the guise of "impartial" AI decision-making (Halaweh, 2023; Selwyn, 2022). Even when predictions of learners' achievements are actualised, this should not be seen as a measure of AI's predictive success, because this could encourage training systems to grant or withhold content or learning opportunities based solely, or for the most part, on these AI predictions (Perrotta & Selwyn, 2019). These forms of educational prediction can create a worrying feedback loop: AI predicts an outcome, the system then steers learners towards that outcome through curated learning experiences, and when learners achieve the predicted results, the validity of the AIEd system is reinforced. This could reinforce learners' negative experiences with schooling, where a learner who receives negative messages about their academic ability might be further disadvantaged by biased AI environments. Within this loop, learners' contexts, goals, and needs are not considered, and judgements made by AIEd could become a self-fulfilling prophecy (Sellar & Hogan, 2019).

    Teaching and learning

    Discussing the extensive use of AIEd in teaching and learning, researchers (Alam, 2021; Sanusi et al., 2022) maintained that the complexity of many learning contexts and processes would need to be matched or replicated by AIEd in order for AI-presented or AI-managed learning to be meaningful. In addition, teaching and learning contexts can differ significantly. Given that AIEd is currently mostly developed independently of context, AIEd developed on data from one country or context might not be applicable or transferrable to others. This applies not only to language and cultural aspects; education data and the resultant patterns may also not be interpretable in a standardised way across all contexts (Perrotta & Selwyn, 2019). For AIEd to be effective in diverse and varied teaching and learning contexts, therefore, it needs to be developed in such a way that it can truly represent, in both its data and its reductive decision-making, the context in which it is applied (Selwyn, 2022). While AI (using neural networks to connect far-related data points in ways that may not seem obvious to human logic) may be useful, unsupervised knowledge discovery can lead to unexpected and counterintuitive results (Halaweh, 2023; Perrotta & Selwyn, 2019). This means that blind acceptance of AIEd recommendations by teachers and administrative personnel could be harmful to learners.

    AIEd has also been critiqued for its reductive nature: through its use, teaching and learning are minimised to scores and data (Gocen & Aydemir, 2021). Learners and their learning are reduced to scores and metrics because these are the only things that AI systems can interpret. This negates the relational nature of teaching and learning, and a learner's ability to score well in standardised testing does not necessarily mean that they have truly understood or internalised concepts and knowledge. Teaching and learning are by nature relational, and Triansyah et al. (2023) argued that digital learning cannot create the social interactions necessary for human development. The fostering of positive interpersonal ties and relationships, both between teachers and learners and amongst learners themselves, is key to creating an engaging school environment (Mhlanga, 2023a). AIEd systems also cannot account for personal development and growth, which is crucial in school-aged learners. Nor can AIEd replicate human moods, emotions, expertise, and perceptions, or account for the agency of learners in their own learning journeys, thus potentially neglecting the important role of social relations in teaching and learning.

    Gocen and Aydemir (2021) and Goralski and Tan (2020) argued that excessive use of AI-driven automation in education could result in job losses for teachers. Cost reduction is sought after in a neoliberal, capitalist global economy. If AIEd were to develop to such an extent that it could perform some of the functions of teachers, the automation of educators' work could lead to job losses and a reallocation of funding from human resources in education to technology in order to reduce educational costs (Gocen & Aydemir, 2021; Halaweh, 2023; Sellar & Hogan, 2019).

    Articles by Perrotta and Selwyn (2019) and Selwyn (2022) also raised concern that both children and AIEd are intertwined in larger issues of power: tensions between schooling for public good and corporate interests, and between culture and economy. South Africa and its unique history create unique challenges in relation to AIEd in the schooling sector, which are discussed below.

     

    AIEd in South African schooling

    The above possibilities and harms hold true in the South African context. However, the context and complexity of South African schooling adds considerable additional factors that need to be considered. South Africa's racialised history means there is an ongoing correlation between class, race, and educational opportunities (Gradin, 2012). Some schools, both public and independent, boast top-tier facilities and educational resources, with low learner-to-teacher ratios and 100% Grade 12 pass rates. These are also the schools that typically have access to cutting-edge technology.

    However, a large sector of South African schools is dysfunctional (Monama, 2022). Learners' access to basic educational resources (such as textbooks) and to qualified teachers is limited in many schools, with teachers struggling to manage the varied educational requirements of over-populated schools and a high learner drop-out rate (Mhlanga, 2023b). Before delving into the role AIEd could possibly play in South African schools, the fundamental aspect of technological availability and reach needs to be addressed.

    The use of AI in schools requires learners to have both physical access to technology and the skills to make use of and benefit from the forms of AI in education. Barriers to physical access within the South African education context have been well documented and include difficulties with obtaining steady electricity and internet access (further compounded by load shedding), as well as the cost of devices, electricity, and data (Bottomley, 2020; Hart, 2023; Kemp, 2023). Moreover, even when learners have physical access, many may lack the requisite digital literacy skills, leaving users overwhelmed by the task of accessing and using relevant digital content (Hart, 2023). Similarly, Feldman (2024) argued that even when students have access to AI in education, it does not automatically mean that all students are able to reap the benefits of using it, and the advanced nature of AI mechanics may further intimidate learners already struggling in marginalised schooling contexts (Sanusi et al., 2022). This lack of AI literacy may have longer-lasting impacts than school teaching and learning. In an increasingly digitalised world, where many companies embed AI and machine learning capabilities into their services, people with lower competencies may be disadvantaged in various ways, such as missing out on opportunities that would allow upward social mobility, or falling victim to targeted, biased information.

    Sellar and Hogan (2019), discussing learning within the global context, argued that it is learners who are furthest removed from digital access who could benefit the most from the access to quality education that AIEd can facilitate. This is also true for the South African school context. For example, in schools situated in rural areas, AIEd could be used to provide individualised, grade-specific, curriculum-aligned assistance in multi-grade classrooms. However, it is in these areas of need that accessibility and connectivity, as well as expertise in using AIEd to enhance education, are often significantly limited. Mhlanga (2023b), arguing for how AI could be used to assist learners in South African schools, suggested that AI such as ChatGPT could be used to translate learning materials into learners' mother tongues to facilitate better accessibility to and understanding of subject matter. However, for the most part, learner access to digital education remains limited in marginalised schooling contexts, with the use of AIEd found predominantly in privileged schools (typically serving historically advantaged groups), leading to a deepening of inequality (Nemorin et al., 2022).

    Discussing the possibility of using AIEd and personalised learning systems to support teaching and learning in South Africa, Mhlanga (2023a) argued that these systems are unable to take into account the complex cultural and social contexts, backgrounds, and experiences present in South Africa, thus exacerbating the current two-tiered education system. Sellar and Hogan (2019) supported this claim in the global schooling context and further argued that a globalised curriculum facilitated through AIEd, such as proposed by a private company such as Pearson, could also mean that valuable, rich, local knowledge is colonised, overlooked, and lost over time. Global values, which do not necessarily align with South African circumstances, change who is taught, what is taught, how this is taught, as well as when the content is taught. This is especially relevant when the development of and research on AIEd in Africa is done by outsiders who miss critical cultural and social nuances.

    AIEd systems developed specifically by South Africans for South African school contexts (and some do exist) could potentially provide better solutions than their international counterparts. A few South African companies are currently developing AIEd for South African classrooms, for example, ADvLEARN (by the ADvTech Group) and MathU (Ntsobi et al., 2024). However, much the same arguments could be made about these companies as about those operating internationally: by privatising education for only those who can afford it, they could undermine equitable access to quality education and to education as a public good.

    A further concern in relation to the use of AIEd is learner privacy and data usage. South Africa does have policies on privacy, data dissemination, and the confidentiality of personal information, such as the Protection of Personal Information (POPI) Act, which help to safeguard the rights of learners. Despite this, South Africa still does not sufficiently protect the rights of learners in relation to online data safety and privacy. A report by Human Rights Watch (2022) showed that even EdTech produced by the South African government for teaching and learning during the COVID-19 pandemic actively infringed on learners' rights by allowing learners' data to be collected and used for targeted behavioural advertising by both the National Department of Basic Education and third parties. Even where companies inform users of their surveillance and data collection practices, in most instances neither learners, teachers, nor parents could opt out of these functions if they wanted to use the technology or online learning platforms offered by the private companies. This means that, in many cases, learners are unable to access quality education without giving up their fundamental human rights (Human Rights Watch, 2022).

     

    In conclusion

    Artificial intelligence has become an irreversible part of modern society, and its current and potential impact on education is profound. While AIEd offers transformative opportunities for innovation and personalised learning, it also demands careful scrutiny and strategic implementation.

    In the South African context, AIEd has the potential to address pressing educational challenges and support individualised learning needs. However, unregulated use of AI could exacerbate existing inequalities. There is thus a compelling need for empirical research to investigate how learners interact with AI and what support structures are necessary to ensure their ethical and effective use of these tools.

    In addition, it is suggested that education institutions adopt transparent data collection practices by using accessible language and granting users the right to opt out of unnecessary data sharing. Ethical concerns such as privacy, data ownership, and algorithmic biases must also be addressed through robust regulatory measures to safeguard learners' rights and promote equitable access.

    The unique cultural and socio-economic landscape of South Africa highlights the importance of implementing mechanisms to protect both learners and educators. This requires government intervention to provide funding for teacher training, equitable access to AIEd in under-resourced areas, and safeguards against private companies exploiting data under the guise of "free" services. Proactive policies promoting agency, fairness, and choice should be enacted to mitigate potential risks and maximise the benefits of AIEd.

    The responsibility for ensuring safe and ethical AI use in education should not rest solely on individual schools. Instead, comprehensive national and provincial policies must be developed to guide and support institutions in navigating this technological shift responsibly. Through coordinated efforts and thoughtful planning, AIEd can fulfil its promise of enhancing education while protecting the rights and dignity of all stakeholders.

     

    Acknowledgement

    This research was funded by the National Research Foundation (NRF), South Africa, under grant number CSUR23041894319.

     

    References

    Alam, A. (2021). Possibilities and apprehensions in the landscape of artificial intelligence in education. 2021 International Conference on Computational Intelligence and Computing Applications (pp. 1-8). IEEE. https://doi.org/10.1109/ICCICA52458.2021.9697272

    Baksa, T., Konecki, M., & Konecki, M. (2024). High school students' perception of AI and its future impact on education. 2024 12th International Conference on Information and Education Technology (pp. 215-219). IEEE. http://dx.doi.org/10.1109/ICIET60671.2024.10542754

    Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160-1173. https://doi.org/10.1111/bjet.13337

    Bottomley, E.-J. (2020, May 5). SA has some of Africa's most expensive data, a new report says-but it is better for the richer. News24. https://www.news24.com/news24/bi-archive/how-sas-data-prices-compare-with-the-rest-of-the-world-2020-5

    Chan, C. K. Y., & Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. https://doi.org/10.1186/s41239-023-00411-8

    Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510

    Cope, B., Kalantzis, M., & Searsmith, D. (2020). Artificial intelligence for education: Knowledge and its assessment in AI-enabled learning ecologies. Educational Philosophy and Theory, 53(12), 1229-1245. https://doi.org/10.1080/00131857.2020.1728732

    Cox, D., & Kassem, R. (2014). Off the record: The National Security Council, drone killings, and historical accountability. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2283243

    du Boulay, B. (2000). Can we learn from ITSs? In G. Gauthier, C. Frasson, & K. VanLehn (Eds.), Intelligent tutoring systems. ITS 2000. Lecture notes in computer science (Vol. 1839, pp. 9-17). Springer.

    Eynon, R. (2024). The future trajectory of the AIED community: Defining the "knowledge tradition" in critical times. International Journal of Artificial Intelligence in Education, 34, 105-110. https://doi.org/10.1007/s40593-023-00354-1

    Feldman, J. (2024). "The allure of ChatGPT": Chatbots and assessment in higher education. In Z. Barends & A. Jacobs (Eds.), Intentional assessment for teacher education: Teacher educators reflect on assessment for learning and what works best for teacher education student learning (pp. 55-78). SUNMedia.

    Gocen, A., & Aydemir, F. (2021). Artificial intelligence in education and schools. Research on Education and Media, 12(1), 13-21. https://doi.org/10.2478/rem-2020-0003

    Goralski, M. A., & Tan, T. K. (2020). Artificial intelligence and sustainable development. The International Journal of Management Education, 18(1), 1-9. https://doi.org/10.1016/j.ijme.2019.100330

    Gradin, C. (2012). Race, poverty and deprivation in South Africa. Journal of African Economies, 22(2), 187-238. https://doi.org/10.1093/jae/ejs019

    Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), 1-11. https://doi.org/10.30935/cedtech/13036

    Hart, S. A. (2023). Identifying the factors impacting the uptake of educational technology in South African schools: A systematic review. South African Journal of Education, 43(1), 1-16. https://doi.org/10.15700/saje.v43n1a2174

    Hsu, B.-M. (2020). Braille recognition for reducing asymmetric communication between the blind and non-blind. Symmetry, 12(7), 1069-1084. https://doi.org/10.3390/sym12071069

    Human Rights Watch. (2022, May 25). "How dare they peep into my private life?": Children's rights violations by governments that endorsed online learning during the COVID-19 pandemic. https://www.hrw.org/report/2022/05/25/how-dare-they-peep-my-private-life/childrens-rights-violations-governments

    Hwang, G., Xie, H., Wah, B. W., & Gasevic, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1(100001), 1-5. https://doi.org/10.1016/j.caeai.2020.100001

    Kemp, S. (2023, February 13). Digital 2023: South Africa. Data Reportal: Global Digital Insights. https://datareportal.com/reports/digital-2023-south-africa

    Klimova, B., Pikhart, M., & Kacetl, J. (2023). Ethical issues of the use of AI-driven mobile apps for education. Frontiers in Public Health, 10, 1-8. https://doi.org/10.3389/fpubh.2022.1118116

    Lee, V. R., Pope, D., Miles, S., & Zarate, R. C. (2024). Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Computers and Education: Artificial Intelligence, 7, 100253. https://doi.org/10.1016/j.caeai.2024.100253

    Liu, G., & Ma, C. (2023). Measuring EFL learners' use of ChatGPT in informal digital learning of English based on the technology acceptance model. Innovation in Language Learning and Teaching, 18(2), 125-138. https://doi.org/10.1080/17501229.2023.2240316

    Mhlanga, D. (2023a). FinTech and artificial intelligence for sustainable development: The role of smart technologies in achieving development goals. Palgrave Macmillan.

    Mhlanga, D. (2023b). ChatGPT in education: Exploring opportunities for emerging economies to improve education with ChatGPT. SSRN Electronic Journal. http://dx.doi.org/10.2139/ssrn.4355758

    Monama, T. (2022, July 11). 80% of schools are dysfunctional, serve mostly Black and Coloured pupils, says report. News24. https://www.news24.com/news24/southafrica/news/80-of-schools-attended-by-black-coloured-pupils-dysfunctional-says-report-20220711

    Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2022). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 1-14. https://doi.org/10.1080/17439884.2022.2095568

    Ntsobi, M. P., Bongani, J., & Mwale, B. J. (2024). Revolutionising teaching and learning through AI: A case study of South Africa. Asian Journal of Social Science and Management Technology, 6(5), 2313-7410. https://ajssmt.com/wp6-5.html

    Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2(100020), 1-6. https://doi.org/10.1016/j.caeai.2021.100020

    Perrotta, C., & Selwyn, N. (2019). Deep learning goes to school: Toward a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251-269. https://doi.org/10.1080/17439884.2020.1686017

    Republic of South Africa. (2013). Protection of Personal Information Act 4 of 2013. https://popia.co.za/

    Sanusi, I. T., Oyelere, S. S., & Omidiora, J. O. (2022). Exploring teachers' preconceptions of teaching machine learning in high school: A preliminary insight from Africa. Computers and Education Open, 3(100072), 1-9. https://doi.org/10.1016/j.caeo.2021.100072

    Sellar, S., & Hogan, A. (2019). Pearson 2025: Transforming teaching and privatising education data. Education International. https://eprints.qut.edu.au/211624/1/87483656.pdf

    Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57, 620-631. https://doi.org/10.1111/ejed.12532

    Triansyah, F. A., Muhammad, I., Rabuandika, A., Siregar, P., Teapon, N., & Assabana, M. S. (2023). Bibliometric analysis: Artificial intelligence (AI) in high school education. Jurnal Ilmiah Pendidikan dan Pembelajaran (JIPP), 7(1), 112-123. https://doi.org/10.23887/jipp.v7i1.59718

    Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J.-B., Yuan, J., & Li, Y. (2021). A review of artificial intelligence (AI) in education from 2010 to 2020. Complexity, 2021(6), e8812542. https://doi.org/10.1155/2021/8812542

     

     

    Received: 17 June 2024
    Accepted: 14 February 2025