South African Computer Journal
On-line version ISSN 2313-7835
Print version ISSN 1015-7999
SACJ vol.36 n.1 Grahamstown Jul. 2024
http://dx.doi.org/10.18489/sacj.v36i1.18823
RESEARCH ARTICLE
Towards Human-AI Symbiosis: Designing an Artificial Intelligence Adoption Framework
Danie Smit; Sunet Eybers; Alta van der Merwe
Department of Informatics, University of Pretoria, Pretoria, South Africa. Email: Danie Smit - d5mit@pm.me (corresponding); Sunet Eybers - eeyberss@unisa.ac.za; Alta van der Merwe - alta.vdm@up.ac.za
ABSTRACT
Organisations need to adopt AI successfully and responsibly. AI's technical capabilities make AI powerful. However, the implementation of AI in organisations is not limited to the technical elements and requires a more holistic approach. An AI implementation within an organisation is a sociotechnical system, with an interplay between social and technical components. Considering the sociotechnical nature of AI in organisations, the following research question arises: From a sociotechnical perspective, how can an organisation increase adoption of AI as part of its quest to become more data-driven? In light of the research question, we propose to create a sociotechnical artificial intelligence adoption framework with a target audience of both academics and practitioners. This study follows a design science research approach, constituting various iterative cycles. The study is conducted at an automotive manufacturer's IT Hub based in South Africa and aims to gain concrete, contextual, in-depth knowledge about a specific real-world organisation. To achieve this, focus groups serve as the primary research method. As the organisation at which the study took place is seen as a global leader in industrial digital transformation, the experience can help researchers and other organisations understand how an organisation can increase the adoption of AI.
Categories · Information systems ~ Information systems applications · Computing methodologies ~ Artificial intelligence
Keywords: Adoption, Organisation, Sociotechnical, Design Science Research, Artificial Intelligence
1 INTRODUCTION
Data-driven organisations are entities that act on observed data rather than merely gut feeling, and do so to achieve financial or non-financial benefits (C. Anderson, 2015). It is commonly understood that the effective use of artificial intelligence (AI) as part of an organisation's analytics portfolio represents the most advanced level of data-drivenness (Berente et al., 2021; Davenport & Harris, 2007; Gupta & George, 2016). Organisations often struggle to reach this higher level of data-drivenness (Krishnamoorthi & Mathew, 2018; Schlegel et al., 2018). Failing to do so causes organisations to lose out on opportunities that enable faster and large-scale evidence-based decision-making (Manyika et al., 2017).
When adopting AI as part of an organisation's analytics portfolio, organisations face multiple challenges, such as addressing skill shortages and understanding how to use and reap AI's benefits (Reis et al., 2020). Even when an organisation realises the benefits of utilising AI to support decision-making and has the required skills, the lack of available technology platforms will hinder adoption success (IBM, 2022). This problem forms part of what is referred to as the knowledge-attitude-practice gap (KAP-gap) (Rogers, 1995). Furthermore, even when organisations adopt AI, they fall short of moving from proofs of concept to implementing AI in production environments (Benbya & Davenport, 2020). Additionally, advanced levels of data-drivenness, with the support of AI, enable organisations to automate decision-making (C. Anderson, 2015; Benbya & Davenport, 2020). When automated decisions have an impact on people, important legal and ethical considerations arise (Crawford, 2021). Researchers and organisations should acknowledge the importance of responsible AI adoption and ensure that the future impact of AI is beneficial (Russell et al., 2015). Therefore, the adoption of AI in organisations should not be limited to social or technical aspects, but should rather follow a sociotechnical approach, focusing on the interplay between the social and technical components of systems within a complex environment (Wihlborg & Soderholm, 2013).
Given the requirements for organisations to solve the KAP-gap and to successfully adopt AI, considering the sociotechnical nature of AI in organisations, the following research question arises: From a sociotechnical perspective, how can an organisation increase adoption of AI as part of its quest to become more data-driven? In this paper, the successful use of AI as part of an organisation's analytics portfolio is called organisational AI adoption. In light of the research question, we propose creating a sociotechnical artificial intelligence adoption framework (AIAF) with a target audience of academics and practitioners.
This study directly extends the socio-specific artificial intelligence adoption framework presented at SAICSIT 2022 (Smit & Eybers, 2022). It follows a design science research (DSR) approach, constituting various iterative cycles. This study adds the technical aspects to the adoption framework (Smit, Eybers & van der Merwe, 2023), making it a holistic sociotechnical AIAF. It is conducted at an automotive manufacturer's IT Hub based in South Africa and aims to gain concrete, contextual, in-depth knowledge about a specific real-world organisation. As the organisation where the study took place is seen as a global leader in industrial digital transformation (ARC Advisory Group, 2022), the experience can help researchers and other organisations understand how an organisation can increase the adoption of AI as part of its quest to become more data-driven.
The remainder of the paper is structured as follows: Section 2 discusses related research, Section 3 explains the research approach, followed by Section 4, which covers the DSR cycle and artifact development and discussion. Section 5 discusses the results and is followed by the conclusion in Section 6.
2 LITERATURE REVIEW
Big data analytics provides organisations with different opportunities to achieve new levels of competitive advantage (H. Chen et al., 2017). When AI is adopted as part of an organisation's big data analytics portfolio, it enables cognitive automation within organisations (M. Lacity et al., 2021). Simulating intelligence is made possible through AI's ability to learn from data and perform certain tasks autonomously (Benbya & Davenport, 2020). By becoming data-driven, organisations better understand their costs, sales potential and emerging opportunities (Johnson et al., 2019). However, organisations struggle to adopt AI successfully (Schlegel et al., 2018), and the use of AI in organisations is still relatively new. To minimise disappointments, early adopters should expect and manage technical challenges (M. C. Lacity & Willcocks, 2021), such as challenges related to data collection, model training and deployment (Luckow et al., 2018). Furthermore, stakeholders should take cognisance of possible adoption barriers, such as limited AI skills, expertise or knowledge, the lack of tools or platforms to develop models, and the complexity of the projects (IBM, 2022). Additionally, considering the potential negative impact of adopting AI is part of the duties of a responsible organisation (Crawford, 2021).
2.1 Technology Adoption Theory
Fortunately, several theoretical frameworks on technology adoption could assist organisations with the adoption challenges of AI. For example, the theory of planned behaviour (TPB) (Taylor & Todd, 1995), the theory of reasoned action (TRA) (Fishbein & Ajzen, 1977), the technology acceptance model (TAM) (Davis et al., 1989), and the technological-organisational-environmental (TOE) framework are theories that assist in the understanding of technology adoption. Whilst the TOE framework focuses on organisational-level technology adoption (Tornatzky & Fleischer, 1990), the TPB, TRA and TAM focus on the individual user's adoption of technologies. As the implementation of AI in organisations will impact humans and possibly the environment (and even though AI's technical capabilities are at the core of what AI offers), it is not limited to the technical elements and requires a more holistic approach (Crawford, 2021). The TOE framework includes technological, organisational and environmental considerations and, as a result, is useful as a holistic theoretical lens to study technology adoption in organisations (Tornatzky & Fleischer, 1990). Although a traditional adoption framework, the TOE framework is appropriate as it adopts a holistic approach from an organisational perspective and allows for individual technology characteristics (Dwivedi et al., 2012).
In the context of the TOE framework, the technology context refers to all the technologies relevant to the organisation. Some technological context factors that significantly predict adoption are: compatibility, complexity (Grover, 1993), perceived barriers (Chau & Tam, 1997), technology integration (Zhu, Kraemer & Gurbaxani, 2006) and trialability (Ramdani et al., 2009). Technological innovations that create incremental change impose the smallest learning requirements. However, technological innovations that produce discontinuous change (such as AI) require substantial learning and therefore have a substantial and dramatic impact on the organisation (Tushman & Nadler, 1986). The organisational context comprises the resources and characteristics of the organisation, including firm size, structures between employees, intra-firm communication processes and the level of resource availability (Dwivedi et al., 2012). The organisational structure and management leadership style also impact innovation adoption processes (Dwivedi et al., 2012). Additionally, within the organisational context, the scope extends beyond the organisational components to encompass the individual (Widyasari et al., 2018). The environmental context is the milieu in which the organisation exists and includes aspects such as the industry's structure, the service providers, the regulatory environment (Dwivedi et al., 2012), and competitor, customer, partner and government pressures (Y. Chen et al., 2019).
Even though the TOE framework is widely used to study aspects that play a role in technology adoption (Dwivedi et al., 2012), it is not specifically tailored to explain how technology adoption spreads within an organisation. The diffusion of innovations (DOI) theory, as postulated by Rogers (1995) is useful to understand how the adoption of technology spreads within an organisation. The definition of diffusion in the context of innovation theory is the process by which the adoption of innovative technology is communicated through certain channels over time among the members of a social system (Rogers, 1995). During the process of adopting innovative technology, individuals typically progress through five stages, namely the knowledge, persuasion, decision, implementation, and confirmation stages (Dwivedi et al., 2012). Furthermore, according to the DOI theory, prospective adopters of innovation assess technologies based on perceived attributes of the technology (Dwivedi et al., 2012), for example its relative advantage, compatibility with other systems, complexity, trialability and observed effects (Rogers, 1995).
Combining innovation diffusion theory with a TOE framework is not new to researchers; the combination has been used successfully in several studies (Nam et al., 2019; Wei et al., 2015; Wright et al., 2017; Zhu, Kraemer & Xu, 2006). For instance, Nam et al. (2019) used the innovation diffusion process to understand the business analytics adoption of organisations and the TOE framework to identify its drivers. As this study is interested in identifying the enabling factors for the organisational adoption of AI, a combination of the DOI and the TOE is also useful (Smit, Eybers, van der Merwe & Wies, 2023).
2.2 AI Adoption
Many AI adoption models have been developed to support organisations. For example, Mohapatra and Kumar (2019) show how the different sociotechnical elements interact, specifically the data collection process, which is the input into machine learning, and machine learning creating insight. The model also shows how human judgement and physical intervention are sometimes required. In contrast, Bettoni et al. (2021) propose an AI adoption model with a more organisational focus, which includes digitisation, data strategy, human resources, organisational structure and organisational culture as the main elements influencing adoption. Furthermore, Demlehner and Laumer (2020) highlight the relevance of environmental aspects, such as legal uncertainty regarding data protection, intellectual property and liabilities, access to external expertise, and competitive pressure. Chatterjee et al.'s (2021) study on understanding AI adoption led to similar results; however, it highlights that leadership support moderates the adoption of AI. Leaders will not be able to support AI initiatives if they lack an understanding of AI and its capabilities (Berente et al., 2021). Organisational leaders should have the required knowledge to determine whether or not they should adopt an innovation, and to which degree (Rogers, 1995). It is, therefore, understandable that Demlehner and Laumer's (2020) study finds a deep need for expertise in AI and that a lack of AI competence is the primary reason for the low adoption rate. Although these models are useful for recognising important aspects of adoption, they do not contain information on how adoption can be achieved, and they have not considered responsible AI adoption. In addition, even though organisations worldwide are ready to invest in AI to address their sustainability goals (IBM, 2022), none of the adoption models includes aspects to assist organisations with their sustainability targets.
From an industry perspective, organisations such as Amazon AWS, Google and Microsoft provide organisations with technology platforms, supported by technical guidelines or frameworks, to host and enable AI applications. For example, Amazon AWS's cloud adoption framework leverages AWS experience and best practices to help organisations digitally transform and accelerate their business outcomes through innovative use of AWS. This framework includes guidance on data curation, process automation, event management (AIOps), fraud detection and data monetisation (Amazon Web Services, 2021). Google Cloud's AI adoption framework covers aspects such as the power of AI, the creation of value and AI maturity (Google, 2022). Although seemingly comprehensive, Google Cloud's framework lacks social considerations, such as trust, ethics and fairness (Crawford, 2021). Google has guidelines on responsible AI practices (Google, 2021); however, its adoption framework does not specifically mention them (Google, 2022). Microsoft does mention responsible and trusted AI as part of its cloud adoption framework (Microsoft, 2023). One aspect highlighted by the majority of the technological frameworks is the importance of ethical AI. Although it is important, not many organisations have actively focused on ensuring that AI is trustworthy (IBM, 2022). Organisations can also learn from their previous technological adoptions, including agile principles, encapsulation of shared code into functions and components, automated testing and continuous integration (Luckow et al., 2018).
What makes AI powerful is its technical capabilities. Nonetheless, the implementation of AI in organisations is not limited to the technical elements. An AI implementation within an organisation is a sociotechnical system, with an interplay between social and technical components (Wihlborg & Soderholm, 2013). When AI makes decisions that impact people, the sociotechnical considerations in AI adoption frameworks are paramount. Organisations are required not only to succeed in the technical aspects of AI implementations but also to manage the social and environmental aspects.
3 RESEARCH APPROACH
This study aimed to create a framework to support responsible AI adoption, bridging the gap between theory and practice (A. Hevner et al., 2010). Given the objective to develop a framework of this nature, which is of both theoretical and practical value, this research followed a process of scientific rigour and is grounded in theory (Dresch et al., 2015). To allow for a systematic research and design approach, this study followed the DSR cycle steps as described by Vaishnavi et al. (2004) to create the AIAF. Peffers et al.'s (2007) process includes similar steps; however, Vaishnavi et al.'s (2004) DSR cycle was selected for this study since it adopts a reduced process model and allows for a simple iterative approach. The TOE framework (Tornatzky & Fleischer, 1990) and the DOI theory (Rogers, 1995) provided the theoretical lens for this study. We employed a case study research method to construct the framework and followed a DSR main cycle. The case study was conducted at an IT Hub. Three sub-cycles supported the DSR main cycle; all formed part of the same case study. The three sub-cycles focused on the socio-enabling factors for AI adoption (Smit, Eybers, de Waal & Wies, 2022; Smit, Eybers & Smith, 2022), the technical-enabling factors (Smit, Eybers & Bierbaum, 2022; Smit, Eybers & de Waal, 2022) and, lastly, a comparative analysis of the differences between adopting AI and adopting traditional data-driven technologies (Smit et al., 2024). To evaluate and develop the framework, further focus group sessions at the organisation's IT Hub were used. The focus groups comprised domain experts in AI technologies. Additionally, the results of a systematic literature review on the critical success factors of AI adoption were used to enrich the findings (Hamm & Klesel, 2021). Figure 1 graphically depicts the research approach that was followed, and the DSR cycle content is described in detail in the artifact development section.
To be of practical relevance, the developed framework addresses the challenges organisations face in adopting AI. Furthermore, the proposed solution not only highlights elements that influence adoption but should also include prescriptive knowledge (Baskerville et al., 2018) on how to enable organisational AI adoption.
4 DSR MAIN-CYCLE: ARTIFACT DEVELOPMENT
DSR is a methodology that enhances human knowledge and supports problem-solving by creating artifacts (A. R. Hevner et al., 2004). Constructs, methods, models and frameworks are all examples of artifacts that can be used to solve organisational problems (Dresch et al., 2015). In this study, the aim is to create a framework to support organisations with their AI adoption initiatives by following the DSR steps of Vaishnavi et al. (2004). The DSR steps are awareness of the problem, the suggestion of a solution, the development of a solution (artifact), the evaluation of the solution and, finally, a conclusion (Vaishnavi et al., 2004).
4.1 Awareness of the Problem
The organisational adoption of technologies is a broadly researched topic (Lai, 2017). However, AI technologies have characteristics that make them unique: for example, they are often anthropomorphised (Salles et al., 2020) and can learn and act autonomously (Berente et al., 2021). Additionally, AI cannot be referred to in a monolithic sense (Agerfalk, 2020). AI can be classified based on intelligence (artificial narrow intelligence, artificial general intelligence and artificial superintelligence), based on technology (for example, machine learning, deep learning and natural language processing) or based on function (conversational, biometric, algorithmic and robotic) (Benbya & Davenport, 2020). Both scholars and industry agree that implementing AI in organisations will not replace humans in the short term but will instead enable augmented analytics within a human-AI symbiosis (a human and machine partnership) (Herschel et al., 2020; Keding, 2021). The goal should be achieving full AI symbiosis, where AI can extend human cognition to address complex organisational decision-making (Jarrahi, 2018). The successful adoption of AI in organisations could lead to complex cybernetic collectives that are far smarter than individuals. Moreover, in the quest for organisations to become more data-driven and adopt AI, organisations should be aware that AI, at a fundamental level, consists of not only technical but also social practices (Asatiani et al., 2021) and also impacts the institutions, infrastructures, politics and culture around it (Crawford, 2021). These complexities lead to several challenges, such as AI's deployment problem, talent issues and social dysfunctions (Benbya & Davenport, 2020). Therefore, there is a need for a better understanding of accepted approaches and techniques for managing organisational transformations into data-driven entities and the responsible adoption of AI.
Furthermore, the literature review shows that the social aspects are not well represented in current AI adoption frameworks. The implications of neglecting AI's social aspects and impacts, for example, using the Earth's rare resources and cheap labour, with severe environmental and human costs, are well described in Crawford's book Atlas of AI (Crawford, 2021).
4.2 Suggested Solution
The suggested solution should contain information on what influences the adoption and how to bring AI successfully into the organisation. Therefore, we propose combining the TOE framework to identify what influences the adoption and the DOI theory to investigate how to enable organisations to adopt AI. Furthermore, although not specific to AI adoption, several studies have successfully adopted the approach of combining DOI and the TOE framework (Wei et al., 2015; Wright et al., 2017; Xu et al., 2017).
Building on this theoretical basis, the proposed solution should address aspects related to both the social and the technical side of AI adoption in organisations. Organisations should also leverage their experience from adopting other traditional data-driven technologies. The solution should therefore encompass information highlighting the similarities and differences between adopting artificial intelligence and these conventional data-driven technologies.
As mentioned in the research approach, in addition to the TOE framework (Tornatzky & Fleischer, 1990), the DOI theory (Rogers, 1995) and the three aspects of AI adoption, a systematic literature review on the critical success factors of AI adoption is used to enrich the findings (Hamm & Klesel, 2021). The literature review used the TOE framework as its basis and identified 12 success factors related to the technological dimension, 13 to the organisational dimension and 11 to the environmental dimension (Hamm & Klesel, 2021).
4.3 Development of the Solution
As part of the development step of the DSR main cycle, this study uses three DSR sub-cycles. The first sub-cycle covers the social aspects of organisational AI adoption (social sub-cycle). The second sub-cycle covers the technical enabling factors related to AI adoption (technical sub-cycle) and the last DSR sub-cycle covers a comparative analysis to determine the similarities and differences between the adoption of artificial intelligence and traditional data-driven technologies (comparative analysis sub-cycle). The results of the three sub-cycles are consolidated into the AIAF and evaluated using industry focus groups. The DSR main-development step with its sub-cycles is graphically depicted in Figure 2.
4.3.1 Sub-cycles
The social sub-cycle included two studies (Smit, Eybers, de Waal & Wies, 2022; Smit, Eybers & Smith, 2022), which focused on 'What are the socio-enabling factors for AI adoption?' and, given that ethical AI is fundamental to socially responsible organisations, 'To what extent do fairness, accountability, transparency (FAT), and explainability impact trust in AI, thereby influencing its adoption?'. The first study used the DOI theory to identify the enabling factors contributing to the successful adoption of AI (Smit, Eybers, de Waal & Wies, 2022). It was based on the five stages of the innovation-decision process, as postulated in the diffusion of innovations theory (Rogers, 1995). The study made clear that organisational AI adoption faces numerous barriers, for example, a lack of trust in AI, a lack of technological understanding, and the costs of hiring highly skilled technical expertise. Increasing knowledge, highlighting benefits and removing impediments emerged as critical social enablers throughout the AI adoption decision stages (Smit, Eybers, de Waal & Wies, 2022). The second study applied the TOE framework (Tornatzky & Fleischer, 1990) and focused on the barriers to adoption, highlighting the extent to which fairness, accountability, transparency and explainability influence trust in AI and, consequently, AI adoption (Smit, Eybers & Smith, 2022). Online questionnaires involving analytics and AI experts were analysed using structural equation modelling (SEM) as the underlying statistical methodology. This study identified trust as one of the main barriers to adopting AI in organisations. Furthermore, it found that organisations that ensure fairness, accountability, transparency and explainability as part of their AI adoption initiatives will experience a higher level of adoption (Smit, Eybers & Smith, 2022).
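To make the path-model idea concrete, the following toy sketch (not the authors' SEM analysis, and not using an SEM package) estimates two regression paths of the kind the second study hypothesised, fairness perception influencing trust, and trust influencing adoption intent, on synthetic Likert-style data; all variable names and coefficients are invented for illustration.

```python
import random

random.seed(1)

def ols_slope(y, xs):
    """Ordinary least squares slope for a single predictor."""
    n = len(y)
    mx, my = sum(xs) / n, sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in zip(xs, y))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic survey data: fairness drives trust, trust drives adoption intent.
fairness = [random.uniform(1, 5) for _ in range(200)]
trust = [0.8 * f + random.gauss(0, 0.3) for f in fairness]
adoption = [0.7 * t + random.gauss(0, 0.3) for t in trust]

b_ft = ols_slope(trust, fairness)   # path: fairness -> trust
b_ta = ols_slope(adoption, trust)   # path: trust -> adoption
indirect = b_ft * b_ta              # indirect effect of fairness on adoption via trust
print(round(b_ft, 2), round(b_ta, 2), round(indirect, 2))
```

A full SEM analysis would additionally model latent constructs and measurement error, which this two-step regression deliberately omits.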
The technical sub-cycle also contained two studies. The first focused on the technical aspects of the KAP-gap and investigated enabling factors to support the technical aspects of organisational AI adoption (Smit, Eybers & de Waal, 2022). It focused on answering the sub-question: 'What are the technical-enabling factors for AI adoption?'. Surveys were used as the research method and were structured around innovation characteristics as postulated in the diffusion of innovations theory (Rogers, 1995). Topic modelling and subjective analysis of the text corpus were applied to organise the responses into 14 technical enabling factors. The second study (Smit, Eybers & Bierbaum, 2022) addressed the concern that the problems organisations will face in the future are uncertain and that the exact requirements of artifacts are complex to predict (Simon, 2019). The sub-research question was: 'How can augmented AI be used to communicate and evaluate the AIAF?' In this study, an augmented AI solution was built to help continuously improve the AIAF. The solution first enables the communication of the AIAF to people in practice (Smit, Eybers & Bierbaum, 2022). It also allows practitioners to evaluate and provide feedback to the AIAF owner. The improvement process is supported by an AI agent called Ailea (Smit, Eybers & Bierbaum, 2022).
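As a rough illustration of organising free-text survey responses into factors, the minimal keyword-frequency grouping below stands in for a full topic model (such as LDA); the responses and factor keyword sets are invented, and do not reflect the study's actual data or its 14 factors.

```python
from collections import Counter

# Hypothetical free-text survey responses about technical AI adoption barriers.
responses = [
    "we need a central ml platform with gpu access",
    "better data quality and a data catalogue would help",
    "automated deployment pipelines for models are missing",
    "platform tooling and gpu capacity are the bottleneck",
]

# Hypothetical factor keywords; a real topic model would infer these from the corpus.
factors = {
    "platform & tooling": {"platform", "gpu", "tooling"},
    "data management": {"data", "catalogue", "quality"},
    "deployment": {"deployment", "pipelines", "models"},
}

# Score each factor by keyword overlap with each response.
counts = Counter()
for text in responses:
    words = set(text.split())
    for factor, keywords in factors.items():
        counts[factor] += len(words & keywords)

for factor, score in counts.most_common():
    print(factor, score)
```

Unlike this keyword sketch, a topic model discovers the groupings unsupervised, which is why the study still needed subjective analysis to label the resulting factors.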
The comparative analysis sub-cycle included a study on the similarities and differences between the adoption of AI and traditional data-driven technologies (Smit et al., 2024). The sub-research question was: 'What are the similarities and differences between the adoption of artificial intelligence and traditional data-driven technologies?' As organisations have gained much experience implementing traditional data-driven technologies, they can lean on this experience; however, they can only leverage it if they understand the differences between adopting traditional data-driven technologies and adopting AI. This understanding allows organisations to focus their efforts where required. To investigate the topic, a case study research approach was followed. The case study used surveys as a data collection method. The surveys targeted a combination of business intelligence experts (Group 1: 142 completed questionnaires) and technical experts in AI (Group 2: 14 completed questionnaires). The case study showed that most technological, organisational and environmental considerations were the same. However, the importance of democratising AI, while considering the autonomous capabilities of AI and the need for a more human-centred AI approach, became evident (Smit et al., 2024).
To develop the solution, the results of the three sub-cycles, together with the critical success factors (Hamm & Klesel, 2021), were combined into an AIAF (see Figure 1). The developed AIAF covers six main areas. The first is an introduction to AI in a data-driven context, and the second is a high-level overview of facilitating the AI adoption decision process (Smit, Eybers, de Waal & Wies, 2022). These are followed by the enabling factors to support the technical aspects of organisational AI adoption (Smit, Eybers & de Waal, 2022), the AI adoption critical success factors based on the TOE framework (Hamm & Klesel, 2021; Smit, Eybers & Smith, 2022) and, lastly, a summary of the differences between AI and traditional data-driven technologies, such as business intelligence (Smit et al., 2024).
4.3.2 AI in a Data-Driven Organisation
As traditional organisations struggle to implement AI as part of their analytics portfolios, the goal is for the AIAF to provide organisations with a high-level guide to assist them in adopting AI and becoming more data-driven. In the context of the AIAF, a data-driven organisation is defined as an organisation that uses analytical tools and abilities, creates a culture that integrates and fosters analytical expertise, and acts on observed data to achieve benefits (Smit, Eybers, de Waal & Wies, 2022). The idea is not that AI replaces humans, but rather that AI can support data-driven organisations within a human-machine partnership (Herschel et al., 2020; Keding, 2021) while supporting or automating some decision-making (Benbya & Davenport, 2020). Furthermore, true data-drivenness should include forward-looking analysis, where organisations not only use data to report on the past but also utilise models to predict the future in a responsible manner (C. Anderson, 2015).
4.3.3 Facilitating the AI adoption decision process
The adoption decision stages are the phases that potential adopters of AI go through when deciding to adopt AI as part of their analytics portfolio. The stages are: increasing knowledge of AI; forming an attitude towards AI (the persuasion stage); making a decision to adopt or reject the use of AI; implementing AI (or not); and, lastly, confirming and evaluating the decision (Smit, Eybers, de Waal & Wies, 2022). In the AIAF, each phase contains the enabling factors related to that phase and can be used by organisations to support their AI adoption initiatives. As AI technologies are ever-evolving, the stages can be repeated in cycles (see Figure 3).
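As a small sketch of the cyclic nature of these stages (an illustrative assumption, not an artifact from the paper), an organisation tracking where an AI initiative sits could represent the repeating innovation-decision process as follows; the stage names follow Rogers' process as used in the framework.

```python
# The five innovation-decision stages, in order, as described in the AIAF.
STAGES = ["knowledge", "persuasion", "decision", "implementation", "confirmation"]

def next_stage(current: str) -> str:
    """Return the stage after `current`, wrapping around to begin a new cycle,
    reflecting that the stages repeat as AI technologies evolve."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]
```

For example, after the confirmation stage the cycle wraps back to the knowledge stage, mirroring the repeated cycles shown in Figure 3.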
The framework shows each decision-making stage and the enabling factors that support adoption (see Figure 4). Increasing the knowledge of AI in organisations is the first stage of the innovation-decision process and occurs by exposing an individual or an organisation to an innovation to increase awareness of it. The abilities, benefits and limitations of adopting the technology should be communicated to the potential adopters and decision-makers of AI, namely employees and management (Smit, Eybers, de Waal & Wies, 2022). This can be achieved via numerous channels, for example, forums, workshops and training (Smit, Eybers, de Waal & Wies, 2022). Training is a key enabler to build more capabilities in AI (Chui, 2017) and should include not only awareness of AI but also how-to and principles knowledge (Rogers, 1995). Training initiatives should include training on AI tools, training on AI platforms, and training covering AI products and AI concepts (Smit, Eybers, de Waal & Wies, 2022). The training should be focused not only on employees but also on management (Rogers, 1995), as knowledge of AI is a precondition for creating strategic value from AI (Keding, 2021). Additionally, communities of practice (COP), pilot or lighthouse projects, outsourcing and analytics competence centres can be used to gain knowledge and communicate (Smit, Eybers, de Waal & Wies, 2022).
The main goal of the next stage is to develop a favourable attitude towards the innovation. Many organisations may know of innovations but have not yet adopted them. This stage includes highlighting the benefits of adopting AI, which can be achieved via the same types of communication channels used during the knowledge stage. It is important to enable the organisation to grasp the importance and benefits of using AI. One method to accomplish this is to use champions within an organisation. These champions can share previous achievements and communicate benefits to other potential adopters (Smit, Eybers, de Waal & Wies, 2022). Showing real-life examples will also boost confidence in AI and can be achieved through workshops, demos and pilots (Smit, Eybers, de Waal & Wies, 2022). Lastly, the importance of top management support should not be underestimated (Dremel et al., 2017). Management should know the benefits and limitations of adopting AI in order to support and encourage its adoption (Smit, Eybers, de Waal & Wies, 2022).
In the 'decision to adopt' stage, the individual weighs the advantages and disadvantages of adopting the innovation and forms an intent to adopt or reject it (Rogers, 1995). It is not only an adoption decision but also a financial investment; therefore, the future benefits and a positive business case are key to the adoption decision process (Chui, 2017). Furthermore, the reduction of risks is also an enabling factor, such as addressing issues of trust, explainability and fairness (Smit, Eybers & Smith, 2022).
There is a difference between deciding to adopt AI and implementing it. During the implementation stage, the organisation puts AI into use (either successfully or unsuccessfully). The specific focus is therefore on increasing the probability of a successful go-live or implementation. This includes aspects such as involving business and obtaining implementation support from external providers if the specific AI knowledge is not available within the organisation (Smit, Eybers, de Waal & Wies, 2022). AI implementations face multiple challenges, such as user resistance, skills shortages and substantial data engineering requirements (Smit, Eybers, de Waal & Wies, 2022).
The 'confirmation stage' of AI adoption deals with the confirmation and continuation of AI adoption, and therefore evaluates business value and goal achievement (Smit, Eybers, de Waal & Wies, 2022). This is important as some people in the organisation might view the business case for adopting AI as unproven and hence be reluctant to take the first step towards adoption (Bughin & Van Zeebroeck, 2018). The measurement of business value, the level of AI adoption, and the level of goal achievement are all enabling factors to confirm whether AI adoption was satisfactory (Smit, Eybers, de Waal & Wies, 2022). The confirmation stage includes integrating the innovation into one's routine and promoting it to others, which could trigger the next cycle and start again with increasing knowledge.
As AI is a moving target at the frontier of computational advancements (Berente et al., 2021), AI adoption should be seen as a continuum. As a result, it is essential to maintain AI adoption momentum by adopting a continuous improvement mindset. This can be supported by an innovative company culture (Chui, 2017; M. C. Lacity & Willcocks, 2021), by ensuring that the value of adopting AI is known (Smit, Eybers, de Waal & Wies, 2022), and by constantly removing barriers that might hamper the adoption process (Chui, 2017).
The technical enabling factors related to the four main areas are summarised in Figure 5. The first is the importance of having a business case for implementing and adopting AI. Such a case can rest on AI technologies that make automated, informed decisions and potentially lead to more efficient decision-making. Secondly, organisations should ensure proper IT governance (Smit, Eybers & de Waal, 2022), via governance bodies (Ienca, 2019) and operational processes such as MLOps (Liu et al., 2020). Enabling aspects include investing in compatibility, implementing standards, and developing and following an architecture strategy. Thirdly, achieving the democratisation of AI in organisations is essential. This can be achieved by providing people access to test systems and allowing for pilot projects (Smit, Eybers & de Waal, 2022). The fourth area relates to the enterprise data platform that supports analytics and AI. This includes an organisation-wide data asset capability, increasing data reliability, and processing power (Davenport & Harris, 2010; Wixom et al., 2021). The proposed technical-specific aspects are summarised in Figure 6. This figure is derived from the technical enabling factors but has a platform focus and more depth (Smit, Eybers & de Waal, 2022).
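To make the idea of an MLOps-style operational process more concrete, the following is a minimal, hypothetical Python sketch of a pre-deployment quality gate of the kind such processes typically include: a candidate model is only promoted to production if it beats the current baseline on held-out data. All function names and thresholds are illustrative assumptions by the editors, not part of the AIAF or of Liu et al.'s (2020) platform.

```python
# Hypothetical sketch of an MLOps-style pre-deployment gate.
# All names, data and thresholds are illustrative assumptions.

def evaluate_candidate(model, inputs, labels):
    """Return the accuracy of a candidate model on a held-out test set."""
    correct = sum(1 for x, y in zip(inputs, labels) if model(x) == y)
    return correct / len(labels)

def promote_if_better(model, inputs, labels, baseline_accuracy, min_gain=0.01):
    """Promote the candidate only if it beats the baseline by a margin."""
    return evaluate_candidate(model, inputs, labels) >= baseline_accuracy + min_gain

if __name__ == "__main__":
    always_one = lambda x: 1          # trivial stand-in model
    inputs, labels = [0, 1, 2, 3], [1, 1, 0, 1]
    # Candidate accuracy is 0.75, baseline is 0.70: the gate allows promotion.
    print(promote_if_better(always_one, inputs, labels, baseline_accuracy=0.70))
```

Embedding such automated checks in the deployment pipeline operationalises the governance principle that models reach production only through a controlled, repeatable process.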
4.3.4 Critical success factors for AI adoption
The critical success factors were derived from the TOE framework (Tornatzky & Fleischer, 1990). The AI TOE considerations are the technological, organisational and environmental elements that organisations should consider, and they relate to the critical success factors when adopting AI (Hamm & Klesel, 2021). The AI adoption success factors are summarised in Figure 7. From a technological point of view, organisations should ensure that the needed IT infrastructure is in place (Hamm & Klesel, 2021). This involves setting up the required data ecosystem and buying or building the appropriate AI tools (Chui, 2017). It should be done in such a way that it can lead to a relative advantage for the organisation (Hamm & Klesel, 2021). Furthermore, the characteristics of the technology should allow for observability, which enables transparency and explainability (Smit, Eybers & Smith, 2022). AI solutions should be developed in a manner that renders the models more understandable to stakeholders and addresses AI interpretability needs (Asatiani et al., 2021). Top management support (Chui, 2017) and access to the required skills, competencies and resources are some of the organisational success factors in adopting AI (Hamm & Klesel, 2021). Additionally, in the context of an organisation's subjective norms, ensuring fairness in AI is another organisational consideration (Smit, Eybers & Smith, 2022). Considerations such as slack (Rahrovani & Pinsonneault, 2012), absorptive capacity (Trantopoulos et al., 2017) and culture (Davenport & Bean, 2018) also play an important role in adoption. A competitive environment is one of the main factors influencing organisations to adopt AI (Hamm & Klesel, 2021). Aspects such as governmental regulations, customer readiness and industry pressure are other examples of critical environmental considerations for organisations striving to adopt AI (Hamm & Klesel, 2021).
Additionally, the regulatory environment demands that the organisation's accountability mechanisms be put in place (Smit, Eybers & Smith, 2022). Lastly, AI also impacts its environment: the energy consumption of running large-scale AI deep learning models should not be underestimated, and its environmental impact cannot be ignored (Crawford, 2021). For socially responsible organisations, managing energy consumption becomes a success factor.
4.3.5 Differences between AI and Traditional Data-driven Technologies
Understanding the similarities and differences between adopting more 'traditional' data-driven technologies and adopting AI can benefit managers, as it allows them to draw on their experience with traditional data-driven technologies while understanding the essential differences. Most TOE considerations related to traditional data-driven technologies and AI are the same; however, some fundamental and impactful differences exist (Smit et al., 2024). Figure 8 shows the differences between AI and traditional data-driven technologies (Smit et al., 2024). None of the TOE factors were ranked as 'not relevant' (Disagree).
Traditional data-driven technologies are easier to understand than AI, which leads to the challenge of building AI knowledge and democratising AI (Alfaro et al., 2019). Furthermore, traditional data-driven technologies are more human-centred than AI (Shneiderman, 2020). For this reason, special care should be taken with the human aspects of AI adoption, for example, ensuring ethical AI and preserving human control over AI. Lastly, AI can learn and act autonomously, which can potentially lead to substantial efficiency gains. However, the impact of AI and automation on humans must be considered, especially given the potentially oppressive nature of AI (Russell, 2019). Figure 9 graphically depicts the critical considerations regarding the differences between traditional data-driven technologies and AI.
4.4 Evaluation of the Solution
The proposed AIAF was evaluated through four exploratory focus groups (Tremblay et al., 2010) with six participants each. The focus group sessions took place in 2022 and were spread over eight months across the sub-cycles. The four sessions were conducted at the same IT Hub previously mentioned (Smit, Eybers & Bierbaum, 2022; Smit, Eybers & de Waal, 2022; Smit, Eybers, de Waal & Wies, 2022; Smit, Eybers & Smith, 2022; Smit et al., 2024). Focus group sessions were used as a method to improve the framework based on the participants' expertise. Using focus groups from industry is of value to this study as it puts the researchers in direct interaction with domain experts and potential users of the framework (Tremblay et al., 2010), with the shared goal of maximising knowledge, wisdom, and creativity (Wickson et al., 2006). The participants were selected based on their domain expertise and, as the study focuses on the 'how', included both technology- and management-orientated experts (Dresch et al., 2015). Specifically, the groups comprised a mixture of site reliability engineers, agile masters, data engineers, business intelligence professionals, data scientists, IT governance experts, technical team leads and management.
As SCRUM is part of the organisation's agile working model, it was decided to conduct the focus group sessions in the form of sprint reviews (Gonçalves, 2018). A sprint review typically includes an evaluation of what has been achieved during a sprint, in this case, the AIAF (Gonçalves, 2018). The concept of using sprints to harden the scientific rigour of DSR was introduced by Conboy et al. (2015); however, using actual sprint reviews to evaluate artifacts is novel as a research method, even though it is commonly applied in practice. Given the transdisciplinary nature of this research, the novel idea of combining focus groups and sprint reviews is appropriate (Wickson et al., 2006).
The AIAF was shared with the focus group participants a few days before the actual sessions. Ailea, an augmented artificial intelligence tool specifically developed to assist in communicating and improving the AIAF (Smit, Eybers & Bierbaum, 2022), was used to communicate the framework and enabled the focus group members to provide preliminary feedback on it (see Figure 10).
For the focus group sessions, the feedback gathered via Ailea was used as input to the discussion, and Conceptboard was used to support the collaboration and document the results for analysis. Conceptboard is an online tool for collaborative engineering design by geographically distributed teams (M. Anderson et al., 2022). Figure 11 is a screenshot of the last focus group session using Conceptboard (M. Anderson et al., 2022). The pink area above the dotted line was used to present the framework to the participants and contains the background, problem statement, session objective, and an overview of the framework. The blue area below the dotted line allowed the participants to provide feedback. The screenshot is intended to give a high-level view of the board; its content is described below.
All the focus groups indicated that AI differs from standard systems. They pointed out that this is because AI continuously 'learns' from data, whereas standard systems are more rule-based. Furthermore, they stated that AI encapsulates a computer-based ecosystem that aids in automation, analytics, and creativity. They additionally highlighted that AI is comprehensive and ever-growing. This benefits data analytics because it is unconstrained, but it also presents its own risks, such as algorithmic bias. The focus group participants recommended that the limitations of AI in the use of analytics be explained. A participant who trained business units and senior management on the potential of AI noted that many managers emphasised the need to clarify the value or benefits of AI. This is in line with the proposed AIAF, which includes the benefits and value of AI in all but one adoption decision stage, especially information on what types of problems AI can solve that traditional methods cannot. The importance of highlighting the benefits of adopting AI is in line with the findings of previous studies (Smit, Eybers, de Waal & Wies, 2022; Tornatzky & Fleischer, 1990). The data scientists emphasised that the limitations of AI must also be made clear. Additionally, the democratisation of AI triggered discussions among data scientists. The concern was that not all people can implement machine learning responsibly. The discussion concluded that, for this group, the democratisation of AI referred to allowing all entities in the organisation access to the value of AI. When it comes to building AI solutions, the required governance and controls should be put in place.
The focus groups further pointed out that the fundamentals of data-drivenness should be in place. This includes the quality and amount of data, which confirms the findings of other research in different industries (Hamm & Klesel, 2021; Pillay & Van der Merwe, 2021). Additionally, one data scientist mentioned that more complex data structures will usually need more data to train a proper model. Over and above this, documentation is highlighted as necessary due to AI's complexity. One focus group participant mentioned: 'I believe that the documentation of AI implementation is crucial for operations, handovers and improvements'. Other fundamental aspects include a scalable infrastructure, and standard continuous integration (CI) and continuous delivery (CD) concepts. CI allows for automatically testing code, and CD supports pushing code into production (Treveil et al., 2020).
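To illustrate the CI idea named above, the following is a minimal, hypothetical sketch of the kind of automated check a CI server could run on every commit, blocking merges when a model no longer meets a quality bar. The stand-in model, test data and 0.8 threshold are illustrative assumptions, not artefacts from the study.

```python
# Minimal sketch of an automated CI check for an AI component.
# The model, data and threshold below are hypothetical illustrations.

def predict(x):
    """Stand-in for the model under test: classifies positive numbers as 1."""
    return 1 if x > 0 else 0

def accuracy(inputs, labels):
    """Fraction of examples the model classifies correctly."""
    hits = sum(1 for x, y in zip(inputs, labels) if predict(x) == y)
    return hits / len(labels)

def test_model_meets_quality_bar():
    """A CI pipeline would run this automatically and fail the build on regression."""
    inputs = [-2, -1, 1, 2, 3]
    labels = [0, 0, 1, 1, 1]
    assert accuracy(inputs, labels) >= 0.8
```

In a pytest-style workflow, the CI server would discover and execute `test_model_meets_quality_bar` on every commit; CD tooling would then push only builds that pass such checks into production.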
The focus group participants agreed with the findings that fairness, accountability, transparency (FAT) and explainability in AI processes lead to trust and a higher rate of AI adoption (Smit, Eybers & Smith, 2022). Additionally, experts in the focus group highlighted that, to ensure AI is implemented responsibly, the FAT factors and explainability should be incorporated into the teams' daily work and not be an afterthought. The group suggested that fairness, accountability, transparency and explainability should be included in the organisation's governance process. This suggestion aligns with the recommendations of Ienca (2019), who advocates that it is the responsibility of technology governance bodies to align the future of cognitive technology with democratic principles, such as fairness, accountability and transparency. Another focus group recommendation is to add an AI ethics board within organisations.
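One way fairness can be incorporated into teams' daily work, rather than left as an afterthought, is by computing simple fairness metrics alongside ordinary model metrics. The sketch below shows one common check, the demographic parity gap (the difference in favourable-outcome rates between two groups); the function names, loan-decision data and any tolerance applied to the gap are illustrative assumptions, not part of the AIAF.

```python
# Hypothetical sketch of a demographic-parity fairness check.
# Outcomes are 0/1 decisions (1 = favourable); data is illustrative.

def positive_rate(outcomes):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

if __name__ == "__main__":
    loans_group_a = [1, 1, 0, 1]  # 75% approved
    loans_group_b = [1, 0, 0, 1]  # 50% approved
    gap = demographic_parity_gap(loans_group_a, loans_group_b)
    print(f"parity gap: {gap:.2f}")  # prints "parity gap: 0.25"
```

A governance process of the kind the focus group suggested could flag models whose gap exceeds an agreed tolerance, making the FAT factors measurable rather than purely aspirational.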
When evaluating the proposed framework, all focus groups agreed that it is useful as a high-level guide to help organisations adopt AI. They did, however, point out that the target group of the adoption framework should be made clear, namely managers of traditional enterprises. One data scientist commented on the framework: 'Regarding the adoption stages, I believe, from a data-driven organisation point of view, the stages provided in your table are wholesome and complete. I believe such an organisation would also require a general framework within the implementation phase so that there are guidelines and standards to which the AI systems need to adhere to. This will be vital to ensure that AI use cases are streamlined according to managed guidelines and standards and prevent entropy, discord, and redundancy amongst and between developers and business units'. One data scientist endorsed the framework but emphasised the importance of possessing the appropriate development, platform, and operational expertise (organisational competency and resources).
4.5 DSR Cycle Conclusion
The focus group sessions, together with Ailea (an augmented AI chatbot), were used to communicate the framework and gather practitioner feedback. The feedback made clear that the framework was understandable and usable by practitioners to assist them in responsibly adopting AI. However, some enhancements were recommended, such as a narrower definition of AI, the inclusion of governance processes (Ienca, 2019), and more focus on industrialisation and machine learning operations (Treveil et al., 2020). Furthermore, it is recommended to make the value-creating steps occur in a tight sequence so that the product or service flows smoothly toward the customer, which can be achieved via CI and CD (Treveil et al., 2020). Additionally, the framework can be enhanced by stating its objective and target group, which aligns with the findings of Pee et al. (2021).
5 SUMMARY OF FINDINGS AND IMPLICATIONS
This study describes an iteration of a DSR cycle that includes three sub-cycles. The research question under investigation was: From a sociotechnical perspective, how can an organisation increase adoption of AI as part of its quest to become more data-driven? On the theoretical basis of DOI and the TOE framework, together with three related studies, an AIAF was created. The AIAF is a high-level guide to support organisations' AI adoption journeys. Using augmented AI and exploratory focus groups, the AIAF was evaluated, and recommendations for improving the framework were provided. It should be noted, however, that a framework alone cannot increase AI adoption; organisations will have to apply the framework successfully to increase AI adoption responsibly.
Additionally, implementing AI in a fair, responsible, ethical and trustworthy environment requires attention. It is essential that these issues are highlighted to potential adopters during the awareness stage of the adoption decision process. The organisation where the study took place has seven principles covering the development and application of AI, namely: 'human agency and oversight', 'technical robustness and safety', 'privacy and data governance', 'transparency', 'diversity, non-discrimination and fairness', 'environmental and societal well-being' and 'accountability'. It was interesting to observe that the seven principles were well represented in the AIAF and in the recommendations from the focus groups.
The study highlighted sociotechnical aspects to consider when adopting AI in organisations. Even though other AI adoption frameworks exist, the socio-specific considerations and impact of adopting AI are not sufficiently addressed by the frameworks mentioned in Section 2.2 (Bettoni et al., 2021; Google, 2021; Mohapatra & Kumar, 2019; Pillay & Van der Merwe, 2021). The expected contribution of the AIAF artifact is two-fold. On the one hand, by highlighting the sociotechnical considerations, the framework can be used by academia and provides a high-level view of the identified social elements essential for enabling the responsible adoption of AI. On the other hand, the framework offers practitioners a high-level guide, assisting managers and change mediators in promoting responsible AI adoption and transitioning traditional organisations to data-driven entities. Unlike the other frameworks mentioned in Section 2.2, this study, through the DSR approach, explains how the framework was developed and evaluated. The study also demonstrated how augmented AI allows a machine-human partnership to communicate, evaluate and improve the AIAF (Smit, Eybers & Bierbaum, 2022). Additionally, the agent Ailea prompted the evaluators of the AIAF to consider the potentially oppressive environment that could result from implementing AI in organisations.
6 CONCLUSION
The study proposes a sociotechnical framework for the organisational adoption of artificial intelligence (AIAF)3. A design science research (DSR) approach, comprising various iterative cycles, was followed to design the framework, with the aim of capturing concrete, contextual, in-depth knowledge about a specific real-world organisation. The theoretical component of the paper was formulated by combining information systems theories (such as the TOE framework and DOI theory) with existing industry concepts (such as SCRUM review sessions), collaboration tools (such as Conceptboard), and the idea of using augmented AI to communicate, evaluate and improve DSR artifacts (Smit, Eybers & Bierbaum, 2022). Focus groups served as the primary research method. In essence, the scope of this study was limited to one IT Hub. However, the creation of the framework is grounded in sound information systems theory, and the organisation in question has very high digital transformation maturity and experience. Therefore, the experience and findings, though limited in their extent, can be applied by other organisations to support the responsible adoption of AI as part of their analytics portfolio.
In conclusion, the implementation of AI can offer significant advantages to organisations. However, given the potential risks it poses to human well-being, AI must be deployed in a fair, responsible, ethical and trustworthy manner. The transformative capabilities of AI cannot be ignored, highlighting the relevance of the evolution towards human-AI symbiosis. The designed framework includes the following core human-AI symbiosis concepts: awareness of benefits and risks, governance processes, the implementation of an AI ethics board, and an ethical organisational culture.
The subject's significance warrants further research, especially in evaluating the framework's applicability across diverse companies and industries. For instance, comparative analysis across sectors would show how the sociotechnical framework performs in varied organisational contexts that support human-AI symbiosis. Furthermore, explorations into augmented AI's role in organisational communication and artifact evaluation also present promising avenues for study.
References
Agerfalk, P. J. (2020). Artificial intelligence as digital agency. European Journal of Information Systems, 29(1), 1-8. https://doi.org/10.1080/0960085X.2020.1721947
Alfaro, E., Bressan, M., Girardin, F., Murillo, J., Someh, I., & Wixom, B. H. (2019). BBVA's data monetization journey. MIS Quarterly Executive, 18(2), 117-128. https://aisel.aisnet.org/misqe/vol18/iss2/4
Amazon Web Services, Inc. (2021). An Overview of the AWS Cloud Adoption Framework. https://aws.amazon.com/professional-services/CAF/
Anderson, C. (2015). Creating a data-driven organisation (1st ed.). O'Reilly.
Anderson, M., Chanthavane, S., Broshkevitch, A., Braden, P., Bassford, C., Kim, M., Fantini, M., Konig, S., Owens, T., & Sorensen, C. (2022). A survey of web-based tools for collaborative engineering design. Journal of Mechanical Design, 144(1), 014001. https://doi.org/10.1115/1.4051768
ARC Advisory Group. (2022). Industrial digital transformation top 25. ARC Special Report. www.arcweb.com
Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2021). Sociotechnical envelopment of artificial intelligence: An approach to organizational deployment of inscrutable artificial intelligence systems. Journal of the Association for Information Systems, 22(2), 325-352. https://doi.org/10.17705/1jais.00664
Baskerville, R., Baiyere, A., Gregor, S., Hevner, A., & Rossi, M. (2018). Design science research contributions: Finding a balance between artifact and theory. Journal of the Association for Information Systems, 19(5), 358-376. https://doi.org/10.17705/1jais.00495
Benbya, H., & Davenport, T. H. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4). https://doi.org/10.2139/ssrn.3741983
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3), 1433-1450. https://doi.org/10.25300/MISQ/2021/16274
Bettoni, A., Matteri, D., Montini, E., Gladysz, B., & Carpanzano, E. (2021). An AI adoption model for SMEs: A conceptual framework. IFAC-PapersOnLine, 54(1), 702-708. https://doi.org/10.1016/j.ifacol.2021.08.082
Bughin, J., & Van Zeebroeck, N. (2018). Artificial intelligence: Why a digital base is critical. McKinsey Quarterly. https://www.mckinsey.com/capabilities/quantumblack/our-insights/artificial-intelligence-why-a-digital-base-is-critical
Chatterjee, S., Rana, N. P., Dwivedi, Y. K., & Baabdullah, A. M. (2021). Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model. Technological Forecasting and Social Change, 170. https://doi.org/10.1016/j.techfore.2021.120880
Chau, B. P. Y. K., & Tam, K. Y. (1997). Factors affecting the adoption of open systems: An exploratory study. MIS Quarterly, 21(1), 1-24. https://doi.org/10.2307/249740
Chen, H., Kazman, R., Schütz, R., & Matthes, F. (2017). How Lufthansa capitalized on big data for business model renovation. MIS Quarterly Executive, 16(1), 19-34. https://aisel.aisnet.org/misqe/vol16/iss1/4
Chen, Y., Yin, Y., Browne, G. J., & Li, D. (2019). Adoption of building information modeling in Chinese construction industry: The technology-organization-environment framework. Engineering, Construction and Architectural Management, 26(9), 1878-1898. https://doi.org/10.1108/ECAM-11-2017-0246
Chui, M. (2017). Artificial intelligence the next digital frontier [Accessed 4 June 2024]. https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx
Conboy, K., Gleasure, R., & Cullina, E. (2015). Agile design science research. In International conference on design science research in information systems (pp. 168-180). Springer. https://doi.org/10.1007/978-3-319-18714-3_11
Crawford, K. (2021). Atlas of AI. Yale University Press.
Davenport, T. H., & Bean, R. (2018). Big companies are embracing analytics, but most still don't have a data-driven culture. Harvard Business Review. https://hbr.org/2018/02/big-companies-are-embracing-analytics-but-most-still-dont-have-a-data-driven-culture
Davenport, T. H., & Harris, J. G. (2007). Competing on analytics: The new science of winning. Harvard Business School Press.
Davenport, T. H., & Harris, J. G. (2010). Analytics at work. Smarter decisions. Better results. Harvard Business School Publishing Corporation.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003. https://doi.org/10.5555/70578.70583
Demlehner, Q., & Laumer, S. (2020). Shall we use it or not? Explaining the adoption of artificial intelligence for car manufacturing purposes. Proceedings of the 28th European Conference on Information Systems (ECIS). https://aisel.aisnet.org/ecis2020_rp/177
Dremel, C., Herterich, M. M., Wulf, J., Waizmann, J. C., & Brenner, W. (2017). How AUDI AG established big data analytics in its digital transformation. MIS Quarterly Executive, 16(2), 81-100. https://aisel.aisnet.org/misqe/vol16/iss2/3
Dresch, A., Lacerda, D., & Antunes, J. A. V. (2015). Design science research: A method for science and technology advancement. Springer International. https://doi.org/10.1007/978-3-319-07374-3_4
Dwivedi, Y. K., Wade, M. R., & Scheberger, S. L. (2012). Information systems theory. Explaining and predicting our digital society (Vol. 1). Springer.
Fishbein, M., & Ajzen, I. (1977). Belief, attitude, intention and behavior: An introduction to theory and research. Philosophy & Rhetoric, 10(2). https://doi.org/10.1287/isre.6.2.144
Gonçalves, L. (2018). Scrum. Controlling & Management Review, 62(4), 40-42. https://doi.org/10.1007/s12176-018-0020-3
Google. (2021). Responsible AI practices. https://ai.google/responsibilities/responsible-ai-practices/
Google. (2022). Google Cloud's AI adoption framework. https://cloud.google.com/resources/cloud-ai-adoption-framework-whitepaper
Grover, V. (1993). An empirically derived model for the adoption of customer-based interorganizational systems. Decision Sciences, 24(3), 603-640. https://doi.org/10.1111/j.1540-5915.1993.tb01295.x
Gupta, M., & George, J. F. (2016). Toward the development of a big data analytics capability. Information and Management, 53(8), 1049-1064. https://doi.org/10.1016/j.im.2016.07.004
Hamm, P., & Klesel, M. (2021). Success factors for the adoption of artificial intelligence in organizations: A literature review. AMCIS 2021 Proceedings, 10. https://aisel.aisnet.org/amcis2021/art_intel_sem_tech_intelligent_systems/art_intel_sem_tech_intelligent_systems/10/
Herschel, G., Kronz, A., Simoni, G. D., Friedman, T., Idoine, C., Ronthal, A., Sapp, C., & Sicular, S. (2020). Predicts 2020: Analytics and business intelligence strategy. https://www.gartner.com/en/documents/3978987
Hevner, A., March, S., & Park, J. (2010). Design science research in information systems. Springer. https://doi.org/10.1007/978-1-4419-5653-8_2
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105. https://doi.org/10.2307/25148625
IBM. (2022). IBM Global AI Adoption Index 2022. https://www.ibm.com/watson/resources/ai-adoption
Ienca, M. (2019). Democratizing cognitive technology: a proactive approach. Ethics and Information Technology, 21(4), 267-280. https://doi.org/10.1007/s10676-018-9453-9 [ Links ]
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586. https://doi.org/10.1016/j.bushor.2018.03.007 [ Links ]
Johnson, D. S., Muzellec, L., Sihi, D., & Zahay, D. (2019). The marketing organization's journey to become data-driven. Journal of Research in Interactive Marketing. https://doi.org/10.1108/JRIM-12-2018-0157
Keding, C. (2021). Understanding the interplay of artificial intelligence and strategic management: four decades of research in review. Management Review Quarterly, 71(1), 91-134. https://doi.org/10.1007/s11301-020-00181-x [ Links ]
Krishnamoorthi, S., & Mathew, S. K. (2018). An empirical investigation into understanding the business value of business analytics. Proceedings of the 2018 Pre-ICIS SIGDSA Symposium. https://aisel.aisnet.org/sigdsa2018/28
Lacity, M., Willcocks, L., & Gozman, D. (2021). Influencing information systems practice: The action principles approach applied to robotic process and cognitive automation. Journal of Information Technology, 36(3), 216-240. https://doi.org/10.1177/0268396221990778
Lacity, M. C., & Willcocks, L. P. (2021). Becoming strategic with intelligent automation. MIS Quarterly Executive, 20(2), 1-14. https://aisel.aisnet.org/misqe/vol20/iss2/7
Lai, P. (2017). The literature review of technology adoption models and theories for the novelty technology. Journal of Information Systems and Technology Management, 14(1), 21-38. https://doi.org/10.4301/S1807-17752017000100002
Liu, Y., Ling, Z., Huo, B., Wang, B., Chen, T., & Mouine, E. (2020). Building a platform for machine learning operations from open source frameworks. IFAC-PapersOnLine, 53(5), 704-709. https://doi.org/10.1016/j.ifacol.2021.04.161
Luckow, A., Kennedy, K., Ziolkowski, M., Djerekarov, E., Cook, M., Duffy, E., Schleiss, M., Vorster, B., Weill, E., Kulshrestha, A., & Smith, M. C. (2018). Artificial intelligence and deep learning applications for automotive manufacturing. IEEE International Conference on Big Data, 3144-3152. https://doi.org/10.1109/BigData.2018.8622357
Manyika, J., Chui, M., Lund, S., & Ramaswamy, S. (2017). What's now and next in analytics, AI and automation. Retrieved September 17, 2023, from https://www.mckinsey.com/featured-insights/digital-disruption/whats-now-and-next-in-analytics-ai-and-automation
Microsoft. (2023). Microsoft Cloud Adoption Framework for Azure. https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/
Mohapatra, S., & Kumar, A. (2019). Developing a framework for adopting artificial intelligence. International Journal of Computer Theory and Engineering, 11(2), 19-22. https://doi.org/10.7763/IJCTE.2019.V11.1234
Nam, D., Lee, J., & Lee, H. (2019). Business analytics adoption process: An innovation diffusion perspective. International Journal of Information Management, 49, 411-423. https://doi.org/10.1016/j.ijinfomgt.2019.07.017
Pee, L. G., Pan, S. L., Wang, J., & Wu, J. (2021). Designing for the future in the age of pandemics: a future-ready design research (FRDR) process. European Journal of Information Systems, 30(2), 157-175. https://doi.org/10.1080/0960085X.2020.1863751
Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45-77. https://doi.org/10.2753/MIS0742-1222240302
Pillay, K., & Van der Merwe, A. (2021). Big Data Driven Decision Making Model: A case of the South African banking sector. South African Computer Journal, 33(2), 55-71. https://doi.org/10.18489/sacj.v33i2.928
Rahrovani, Y., & Pinsonneault, A. (2012). On the business value of information technology: A theory of slack resources. Information Systems Theory, 165-198. https://doi.org/10.1007/978-1-4419-6108-2_9
Ramdani, B., Kawalek, P., & Lorenzo, O. (2009). Predicting SMEs' adoption of enterprise systems. Journal of Enterprise Information Management, 22, 10-24. https://doi.org/10.1108/17410390910922796
Reis, L., Maier, C., Mattke, J., Creutzenberg, M., & Weitzel, T. (2020). Addressing user resistance would have prevented a healthcare AI project failure. MIS Quarterly Executive, 19(4), 279-296. https://doi.org/10.17705/2msqe.00038
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). The Free Press.
Russell, S. (2019). Human compatible: AI and the problem of control. Penguin UK.
Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(2), 105-114. https://doi.org/10.1609/aimag.v36i4.2577
Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88-95. https://doi.org/10.1080/21507740.2020.1740350
Schlegel, K., Herschel, G., Logan, D., Laney, D., Judah, S., & Logan, V. A. (2018). Break through the four barriers blocking your full data and analytics potential - Keynote insights. Retrieved September 17, 2023, from https://www.gartner.com/en/documents/3876464
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495-504. https://doi.org/10.1080/10447318.2020.1741118
Simon, H. A. (2019). The sciences of the artificial (3rd ed.). The MIT Press.
Smit, D., & Eybers, S. (2022). Towards a Socio-specific Artificial Intelligence Adoption Framework. Proceedings of 43rd Conference of the South African Institute of Computer Scientists and Information Technologists, 85, 270-282. https://easychair.org/publications/paper/w4WL
Smit, D., Eybers, S., & Bierbaum, L. (2022). Evaluating design science research artefacts: A case of augmented AI. Presented at the International Conference on Design Science Research in Information Systems and Technology. https://www.usf.edu/business/documents/desrist/paper_106.pdf
Smit, D., Eybers, S., & de Waal, A. (2022). A data analytics organisation's perspective on the technical enabling factors for organisational AI adoption. AMCIS 2022 Proceedings, 11. https://aisel.aisnet.org/amcis2022/sig_dsa/sig_dsa/11
Smit, D., Eybers, S., de Waal, A., & Wies, R. (2022). The quest to become a data-driven entity: Identification of socio-enabling factors of AI adoption. Information Systems and Technologies, 589-599. https://doi.org/10.1007/978-3-031-04826-5_58
Smit, D., Eybers, S., & Smith, J. (2022). A data analytics organisation's perspective on trust and AI adoption. In E. Jembere, A. J. Gerber, S. Viriri & A. Pillay (Eds.), Artificial intelligence research (pp. 47-60, Vol. 1551). Springer International Publishing. https://doi.org/10.1007/978-3-030-95070-5_4
Smit, D., Eybers, S., & van der Merwe, A. (2023). Human-AI Symbiosis: Designing a technical artificial intelligence adoption framework. Presented at the International Conference on Design Science Research in Information Systems and Technology. https://repository.up.ac.za/handle/2263/96307
Smit, D., Eybers, S., & van der Merwe, A. (2024). AI Adoption in the Corporate Social Responsible Era: A Model for Practitioners and Researchers. AMCIS 2024 Proceedings (Forthcoming).
Smit, D., Eybers, S., van der Merwe, A., & Wies, R. (2023). Exploring the suitability of the TOE framework and DOI theory towards understanding AI adoption as part of sociotechnical systems. Annual Conference of South African Institute of Computer Scientists and Information Technologists, 228-240. https://doi.org/10.1007/978-3-031-39652-6_15
Taylor, S., & Todd, P. A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144-176. https://doi.org/10.1287/isre.6.2.144
Tornatzky, L. G., & Fleischer, M. (1990). The processes of technological innovation. Lexington Books.
Trantopoulos, K., von Krogh, G., Wallin, M. W., & Woerter, M. (2017). External knowledge and information technology: Implications for process innovation performance. MIS Quarterly, 41(1), 287-300. https://doi.org/10.25300/MISQ/2017/41.1.15
Tremblay, M. C., Hevner, A. R., & Berndt, D. B. (2010). The use of focus groups in design science research. Communications of the Association for Information Systems, 26. https://doi.org/10.1007/978-1-4419-5653-8_10
Treveil, M., Omont, N., Stenac, C., Lefevre, K., Phan, D., Zentici, J., Lavoillotte, A., Miyazaki, M., & Heidmann, L. (2020). Introducing MLOps. O'Reilly Media, Inc.
Tushman, M., & Nadler, D. (1986). Organizing for innovation. California Management Review, 28(3), 74-92. https://doi.org/10.2307/41165203
Vaishnavi, V., Kuechler, B., & Petter, S. (2004). Design science research in information systems [Accessed 4 June 2024]. http://www.desrist.org/design-research-in-information-systems/
Wei, J., Lowry, P. B., & Seedorf, S. (2015). The assimilation of RFID technology by Chinese companies: A technology diffusion perspective. Information and Management, 52(6), 628-642. https://doi.org/10.1016/j.im.2015.05.001
Wickson, F., Carew, A. L., & Russell, A. W. (2006). Transdisciplinary research: characteristics, quandaries and quality. Futures, 38(9), 1046-1059. https://doi.org/10.1016/j.futures.2006.02.011
Widyasari, Y. D. L., Nugroho, L. E., & Permanasari, A. E. (2018). Technology Web 2.0 as intervention media: Technology organization environment and socio-technical system perspective. Proceedings of 2018 10th International Conference on Information Technology and Electrical Engineering: Smart Technology for Better Society, ICITEE 2018, 124-129. https://doi.org/10.1109/ICITEED.2018.8534744
Wihlborg, E., & Soderholm, K. (2013). Mediators in action: Organizing sociotechnical system change. Technology in Society, 35(4), 267-275. https://doi.org/10.1016/j.techsoc.2013.09.004
Wixom, B. H., Owens, L., & Beath, C. (2021). Data is everybody's business. MIT Sloan Center for Information Systems Research.
Wright, R. T., Roberts, N., & Wilson, D. (2017). The role of context in IT assimilation: A multi-method study of a SaaS platform in the US nonprofit sector. European Journal of Information Systems, 26(5), 509-539. https://doi.org/10.1057/s41303-017-0053-2
Xu, W., Ou, P., & Fan, W. (2017). Antecedents of ERP assimilation and its impact on ERP value: A TOE-based model and empirical test. Information Systems Frontiers, 19(1), 13-30. https://doi.org/10.1007/s10796-015-9583-0
Zhu, K., Kraemer, K. L., & Gurbaxani, V. (2006). Migration to open-standard interorganizational systems: Network effects, switching costs, and path dependency. MIS Quarterly, 30, 515-539. https://doi.org/10.2307/25148771
Zhu, K., Kraemer, K. L., & Xu, S. (2006). The process of innovation assimilation by firms in different countries: A technology diffusion perspective on e-business. Management Science, 52(10), 1557-1576. https://doi.org/10.1287/mnsc.1050.0487
Received: 10 January 2023
Accepted: 17 October 2023
Online: 31 July 2024
1 More detail on the IT Hub can be found on the website: https://www.bmwithub.co.za/.
2 Ailea is accessible on the website: http://www.ailea.co.za/.
3 The detailed framework is accessible on the website: http://www.ailea.co.za/