
I’m thrilled to announce that this extensive piece of research has been accepted as a book chapter in the upcoming book, Leveraging AI for Business Innovation, to be published by World Scientific Publishing and edited by Professor Jay Liebowitz. The book is due for release at the end of 2025. In the meantime, here are the abstract and the conclusions of my research into the different attitudes and behaviours people have towards AI in the workplace.
There are three key categories of human mindsets or cognitive frames, each shaped by unique and complex psychosocial factors that challenge traditional narratives and theories of AI technology acceptance. Essentially, my research shows that AI acceptance comes down to threat avoidance or opportunity chasing. I feel this is one of the most important papers in my research, as the implications for technology design and enablement are considerable (especially if you consider the detailed and nuanced factors that have emerged from the Augmented Humans project to date). I have ongoing research papers in this field that will provide more in-depth analysis of the issues and of what they mean for AI developers and for organisations implementing AI technologies. More on this as the research is ready for publication.
Abstract
Despite the potential of artificial intelligence to drive business performance improvement, implementation success has been limited. Individual perceptions of AI are a key factor influencing effective implementations of AI in the workplace; however, established technology acceptance theories cannot fully explain technology threat avoidance behavior driven by complex psychosocial factors. We present a novel three-level hierarchy of AI adoption that distinguishes between passive utilitarian acceptance, active collaboration, and co-creation. We surveyed 305 people working with technology in Europe and found declining support for AI at increasing levels of technological sophistication. Three distinct categories of respondents indicate different motivations and cognitive frames for the different levels of AI adoption. We draw on Technology Threat Avoidance Theory and the Coping Model of User Adaptation to provide insight into the factors that either pull individuals towards, or repel them from, adopting increasingly sophisticated AI technologies in the workplace.
Conclusions
The findings of our research show declining support for AI adoption at increasing levels of technological sophistication. We identified a complex heterogeneity of issues reported by our respondents regarding the adoption of increasingly sophisticated AI, which raises practical concerns both for organisations investing in AI and for AI developers. Technology threat avoidance theory can only partially explain user reticence to adopt more sophisticated technologies, since it remains unclear whether AI threat avoidance is attributable to genuine concern about the limitations inherent in the AI technology, the quality of organisational workplace implementations, or AI's ability to provide secure and meaningful work. The implications of our research are that if industry challenges regarding limited returns on investment from AI technologies are prevalent today, they may be exacerbated by the introduction of more sophisticated AI technologies in the future. Individual cognitive and behavioural responses such as threat avoidance and opportunity chasing are shaped by perceptions of personal power and resources. An understanding of these drivers will enable organisations and developers to recalibrate their processes to produce improved and effective socio-technical systems and practices, and better outcomes for society as a whole.
Funding statement
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101023024 for Augmented-Humans.
