Journal for Transdisciplinary Research in Southern Africa
On-line version ISSN 2415-2005; Print version ISSN 1817-4434
JTDSA vol.21 n.1 Cape Town 2025
https://doi.org/10.4102/td.v21i1.1642
EDITORIAL
Predictive large language models and their implications for transdisciplinary scholarship
Izak van Zyl
Centre for Postgraduate Studies, Cape Peninsula University of Technology, Cape Town, South Africa
The public adoption of predictive large language models (PLLMs) has surged in recent years. Today, these models are easily accessible and used across most sectors of modern society. Colloquially, they are regarded as a form of artificial intelligence (AI) and, more specifically, generative AI. For accuracy, and acknowledging the longstanding scholarship around artificial intelligence, I refer to PLLMs (despite the unwieldy acronym). This is in keeping with the notion that AI is a much larger umbrella 'field' that extends well beyond large language models and their predictive or generative capacity.1
Some of the foremost models include OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude and China-based DeepSeek. Their explosion into public consciousness has triggered widespread debate around, inter alia, their utility, their bias, and their responsible, ethical use. The consensus in the literature seems to be that these 'tools' can enhance productivity, but carry severe implications that, if unchecked, could devastate traditional modes of being, doing and thinking.2
While it is beyond the scope of this editorial to do any justice to these debates, I endeavour to offer some reflections about PLLMs and their implications for transdisciplinary scholarship. Indeed, as with most sectors, PLLMs have upended conventional tasks in scholarship because of their rapid automating and generative capabilities. But beyond their utility for menial work - for example, sorting a reference list as per the Vancouver format - how do these models impact transdisciplinarity as a concept and as a scientific pursuit?
For one, disciplinary boundaries are now more fragmented than arguably ever before. It is no longer useful to frame one's work in a stagnant or rigid 'disciplinary kingdom',3 because such a kingdom has been effectively invaded. Stated differently, PLLMs have shown that single-focused disciplines are limited and require contributions from other fields to offer a meaningful understanding of the vast implications of such models. This is because PLLMs can easily manifest as 'wicked problems', and as I have argued in previous editorials, such problems require diverse scholarly approaches.
For instance, research has shown that PLLMs result in cognitive offloading.4 This presents new and far-reaching implications for neuroscience, and for education as a discipline and practice. Mass cognitive offloading may have broader economic implications, thus requiring inputs from industrial psychology, business and management sciences, and organisational studies. Predictive large language models are increasingly sought as emotional companions, resulting in implications for clinical psychology, psychiatry, sociology, medical anthropology and general health and wellness sciences. All of these disciplines require insights from computer and data scientists, machine learning specialists and informaticians to understand how PLLMs really function. Ignorance of their technical functioning might cascade into ignorance of their socio-economic, psychosocial and cultural functioning.
I am careful here not to argue for mere disciplinary collaboration. That is obviously significant, but I am rather calling for the framing of genAI and PLLMs as a profound and complex societal problem. These types of problems require transdisciplinary thinking and action, which extends beyond inter- or multidisciplinary collaboration. Indeed, complex problems usually require multifaceted solutions, grounded in, for example, the scientific, creative and business enterprise. No longer can we refer to genAI/PLLM in isolation, or with reference to its plain utility. In nearly every field, these tools - or, more accurately, systems - challenge longstanding and traditional human values. Knowledge, information, culture, belief, value, companionship, advice, partnership, truth - these are all mediated by PLLMs.
In conclusion, PLLMs are not mere tools, and we should reject simplistic appeals to their utility. They are vast systems that are actively disrupting (traditional) modes of thinking, knowing and being. As such, they require a series of integrated transdisciplinary 'responses'. Transdisciplinarity here is considered both a method and an ethic. Considering the former, PLLMs merit cross-disciplinary engagements that tap into deep scholarly bodies and histories, while embracing engagement at the civil society and industrial level. In relation to the latter, a transdisciplinary ethic is one that embraces complexity, non-linearity, multiple (pluriversal) perspectives, disruption and change.
A transdisciplinary method and ethic matters because of the implications of a technology gone awry or astray. In many ways, the technology is already beyond our scope of immediate control. Like the birth and rapid evolution of the internet, this form of technology risks outpacing and displacing those at the margins, resulting in widening inequalities in labour, education and healthcare.5 It becomes a transdisciplinary imperative then, to study and address the hyper-rapid advance of PLLMs across different sectors of society. Critically, this imperative is decolonial in nature, because it subverts the hegemonic character that underpins AI 'progress' today, evident in its biased algorithms and in its ominous economic and political applications.6 Time will tell whether transdisciplinary scholarship is up to the task.
References
1. Pasquinelli M. The eye of the master: A social history of artificial intelligence. London: Verso Books; 2023.
2. Díaz-Rodríguez N, Del Ser J, Coeckelbergh M, De Prado ML, Herrera-Viedma E, Herrera F. Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Inform Fusion. 2023;99:101896. https://doi.org/10.1016/j.inffus.2023.101896
3. Van Zyl I. Disciplinary kingdoms: Navigating the politics of research philosophy in the information systems. Electron J Inf Syst Dev Ctries. 2015;70(1):1-7. https://doi.org/10.1002/j.1681-4835.2015.tb00501.x
4. Gerlich M. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies. 2025;15(1):6. https://doi.org/10.3390/soc15010006
5. Farahani M, Ghasemi G. Artificial intelligence and inequality: Challenges and opportunities. Int J Innov Educ. 2024;9:78-99.
6. Crosston M. Cyber colonization: The dangerous fusion of artificial intelligence and authoritarian regimes. Cyber Intell Secur J. 2020;4(1):149-171.
Correspondence:
Izak van Zyl
vanzyliz@cput.ac.za











