The three levels of AI adoption

This excerpt is from a work in progress presented at the Theorising AI workshop held at ESSEC Business School, Paris, in May 2024. It was a substantial piece of work that will lead to two further full research publications, but again, for posterity, I'm presenting a summary of the work, which unpacks the full complexities of AI adoption. It's not as simple as we all think it is.

Drivers of the Three Levels of AI Adoption: Acceptance, Collaboration, and Co-creation

Abstract
We investigate the attitudes of professionals towards adopting and working with artificial intelligence. We propose a theoretical model of a three-level hierarchy of AI adoption that distinguishes between acceptance, collaboration, and co-creation. Each level represents increasing sophistication in human involvement and control when working with AI, ranging from passive utilitarian acceptance to active collaboration and co-creation. We surveyed 305 individuals working with high levels of technology in Europe on their behavioural intention to adopt artificial intelligence in their work and found declining support for AI at increasing levels of technological sophistication. The objective of this work is to develop a deeper, more nuanced understanding of AI adoption to enable organisations to design, build and enable human-centric AI systems.


Method
Our online survey presented three scenarios covering AI acceptance (scenario 1), collaboration (scenario 2) and co-creation (scenario 3), each depicting an increasing level of complexity in human interaction with AI. For each scenario, participants were asked on a 5-point Likert scale whether they would be happy to adopt AI under that scenario, followed by an open-ended question asking them to explain their answer. Participants were recruited via the prolific.com platform, and a total of 305 eligible responses were analysed using SPSS and Atlas.ti software.
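
Purely as an illustration (the quantitative analysis in the study itself was done in SPSS), here is a minimal Python sketch of how positive-intention percentages for each scenario could be tabulated from the 5-point Likert responses. The column names and sample answers are hypothetical and are not drawn from the study data.

```python
import pandas as pd

# Illustrative only: column names and responses are hypothetical.
# Each row is one respondent; each scenario column holds a 5-point Likert answer.
responses = pd.DataFrame({
    "scenario_1_acceptance":    ["Strongly Agree", "Agree", "Neutral", "Agree"],
    "scenario_2_collaboration": ["Agree", "Neutral", "Disagree", "Agree"],
    "scenario_3_cocreation":    ["Neutral", "Agree", "Disagree", "Strongly Disagree"],
})

# "Positive intention" counts Strongly Agree and Agree responses only.
POSITIVE = {"Strongly Agree", "Agree"}

for scenario in responses.columns:
    share = responses[scenario].isin(POSITIVE).mean() * 100
    print(f"{scenario}: {share:.0f}% positive intention")
```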


Key results
We found declining support for AI at increasing levels of technological sophistication. Positive intentions to adopt AI (respondents who selected Strongly Agree or Agree) started at 86% for scenario 1 (acceptance of basic AI), dropped to 63% for scenario 2 (collaborating with AI), and fell further to 59% for scenario 3 (co-creating AI).

Our analysis found the following thematic clusters:
Positive intentions cluster: Five categories were derived from 109 thematic codes. These respondents wrote only positive responses, with no negative qualifiers mentioned.
Neutral intentions cluster: Seven categories were derived from 65 codes. These respondents mentioned the positive themes recorded in the positive cluster, but also added a negative qualifier.
Negative intentions cluster: Seven categories were derived from 49 codes. These respondents tended to write only negative statements.
While there was some overlap between the neutral and negative clusters (e.g. job losses), the responses tended to be distinct between the clusters. For example, many neutral respondents focussed on implementation challenges whereas negative respondents did not. Negative respondents mentioned active and passive resistance to organisational adoption of AI, which was not raised in the other clusters.

Conclusions
Our conceptual model of the three levels of AI technology adoption is supported by the definitive ratings and nuanced explanations given by our survey respondents.
The range of views in each cluster indicates distinct challenges for developers and organisations implementing sophisticated AI technologies.
The implications are significant for the current discourse on human-AI collaboration and for emerging research in co-creation and AI alignment.

Funding Statement

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101023024 for Augmented-Humans. 
