OrgLab studies the evolving role of Artificial Intelligence in reshaping the workforce and organizational processes. Through interdisciplinary studies, from qualitative fieldwork to quantitative modeling, we investigate how AI use transforms job profiles, skill requirements, and organizational structures. By critically examining both the benefits and the unintended consequences of AI, our research provides a nuanced foundation for evidence-based strategies in the rapidly changing landscape of work.
Our Research on Artificial Intelligence & the Future of Work
Abstract. Generative AI's diffusion raises concerns about its impact on skill development, with the potential for both deskilling and upskilling. This paper proposes a conceptual model, using computer programming as a reference domain, to delineate the conditions for skill substitution (deskilling) versus augmentation (upskilling) through AI. It suggests that AI substitutes for tasks within its capabilities, potentially leading to the atrophy of novices' skills, while augmenting experts on complex tasks beyond AI's reach. The model also considers the "leveling effect" (AI boosting novice performance while potentially impeding foundational learning) and "AI capability loss" arising from ineffective interaction with the tool. This research aims to inform strategies for skill development and retention in an AI-influenced environment.
Reference. Varone, Crowston and Bolici (2025). Generative AI and the evolution of skills: A conceptual model for skill development and retention. To be presented at the EGOS Colloquium 2025.
Abstract. We investigate the effects of AI tool use on skill development, retention, and loss, examining whether AI use reduces or increases the opportunity and need for skill development. Programming serves as the example domain because of the growing impact of AI tools on this kind of work. We identify new skills needed to use AI tools effectively, namely prompting and output evaluation. We then develop a model of the interplay among task complexity, AI capabilities, and individual expertise, predicting the conditions under which skill development occurs. Finally, we derive a set of hypotheses about the impacts of AI tool use on novice and expert programmers and propose research methods to test them.
Reference. Crowston and Bolici (2025). Deskilling and upskilling with generative AI systems. In Proceedings of the iConference 2025. Bloomington, Indiana, USA.
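To make the model's predictions concrete, the sketch below encodes the conditions described in these two abstracts as a small Python function. The two boolean predicates, the outcome labels, and the mapping of "AI capability loss" to the novice-on-hard-task case are our own illustrative assumptions, not the papers' formalization.

```python
# Illustrative sketch of the deskilling/upskilling conditions described above.
# Predicates and outcome labels are assumptions for exposition, not the papers' model.

def predicted_skill_effect(task_within_ai_capability: bool, user_is_expert: bool) -> str:
    """Predict the skill-development outcome for one task under the conceptual model."""
    if task_within_ai_capability:
        if user_is_expert:
            # The expert delegates routine work; skills are already formed but get less practice.
            return "substitution: little new learning; retention depends on continued practice"
        # The "leveling effect": novices produce AI-level output but may skip foundational learning.
        return "substitution: deskilling risk; foundational skills may atrophy"
    if user_is_expert:
        # Complex tasks beyond AI's reach: AI assists while the expert stays in charge.
        return "augmentation: upskilling on complex tasks"
    # Novice facing a task beyond both their own and the AI's capability (our assumed
    # mapping of the papers' "AI capability loss" from ineffective interaction).
    return "ineffective interaction: AI capability loss; little benefit or learning"

for ai_capable in (True, False):
    for expert in (True, False):
        print(f"AI-capable task={ai_capable}, expert={expert}: "
              f"{predicted_skill_effect(ai_capable, expert)}")
```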
Abstract. Integrating Artificial Intelligence (AI) into organizations requires more than simply installing new technologies. This paper introduces a decision-support framework to help leaders evaluate whether and how to implement AI, drawing on each organization’s unique structure, tasks, and goals. Linking organizational design theory with practical adoption strategies, we show how different task types (routine, engineering/craft, and non-routine) pair with three potential impacts of AI on organizational dynamics: replace, reinforce, or reveal. Rather than prescribing a universal solution, our model emphasizes that effective AI implementation depends on adopting the right approach for each context. In this way, it guides decision makers in choosing the most suitable AI solution and in anticipating the corresponding changes in roles, processes, and culture. This clear, structured method helps organizations avoid misalignment, reduce costly experimentation, and unlock AI’s potential as a genuine driver of efficiency, innovation, and strategic value.
Reference. Bolici, Varone and Diana (2025). AI Adoption Playbook: A Decision Model for Aligning AI with Organizational Models and Task Complexity. In Theorizing AI and Data Workshop. Amsterdam, Netherlands.
Abstract. AI-powered systems are expected to profoundly reshape the healthcare sector. However, the technology has yet to consolidate, and guidelines promoting its responsible use remain scattered, leaving considerable confusion about how to evaluate the appropriateness of its applications. This literature review maps the key opportunities and risks associated with introducing the AI-powered tool ChatGPT into healthcare research, education, and clinical practice, in order to determine which uses are considered appropriate, which are controversial, and which should be avoided. Our findings suggest that ChatGPT is a valuable tool for healthcare research: while its use as a co-author is considered unethical, it remains a legitimate aid for editing and proofreading. In healthcare education, LLMs like ChatGPT are expected to become increasingly influential, which calls for a reassessment of student roles and a redesign of educational strategies to align with current technological affordances. ChatGPT is not a medical tool, and its application to clinical practice should be avoided. Appropriate regulatory frameworks are needed to exploit the transformative power of AI while preserving ethical and clinical standards.
Reference. Bolici, Varone and Diana (2024). ChatGPT-generated Tensions in Healthcare: A Literature Review Map for Responsible Use. In Proceedings of the OBHC Conference 2024. Oslo, Norway.
Abstract. The rapid diffusion of disruptive technologies is having a revolutionary and tangible impact on individuals, organizations, and society. However, this rapid pace of development is not matched by up-to-date regulations, which makes the relationship between institutional policies and technological advancements complex and controversial. Taking generative AI as a reference, this work studies how individuals respond to public interventions banning disruptive technologies, exploring the arguments and sentiment they express toward them. By analysing approximately 15,000 Twitter contributions on the suspension of ChatGPT in Italy, our work provides evidence that banning disruptive technologies is likely ineffective and unpopular. This is highlighted by the strong prevalence of individuals expressing a negative perception of the ban, by the presence of users actively and collaboratively searching for ways to bypass it, and by a perceived institutional backwardness in terms of technology development.
Reference. Bolici, Varone and Diana (2024). Unpopular Policies, Ineffective Bans: Lesson Learned From ChatGPT Prohibition in Italy. In Proceedings of ECIS 2024. Paphos, Cyprus.
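To give a flavor of the sentiment-classification step such an analysis involves, here is a minimal Python sketch using the off-the-shelf VADER lexicon. The library choice, the thresholds, and the example tweets are our illustrative assumptions, not the method reported in the paper, which analysed largely Italian-language tweets and may have used different tooling.

```python
# Minimal sentiment-classification sketch; illustrative only.
# VADER, the +/-0.05 cutoffs, and the example texts are our assumptions,
# not the pipeline reported in the paper.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def classify_sentiment(texts):
    """Label each text negative/neutral/positive from VADER's compound score."""
    analyzer = SentimentIntensityAnalyzer()
    labels = []
    for text in texts:
        compound = analyzer.polarity_scores(text)["compound"]  # score in [-1, 1]
        if compound <= -0.05:
            labels.append("negative")
        elif compound >= 0.05:
            labels.append("positive")
        else:
            labels.append("neutral")
    return labels

# Hypothetical stand-ins for the ~15,000 collected contributions.
tweets = [
    "Banning ChatGPT makes us look stuck in the past.",
    "Easy fix: just use a VPN and keep working.",
]
print(classify_sentiment(tweets))
```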
Abstract. The increasing pervasiveness of automation technologies makes it urgent to address how work is changing in response. Focusing on applications of machine learning (ML) that automate information tasks, we present a simple framework for identifying the impacts of an automated system on a task. From an analysis of popular-press articles about ML, we develop three patterns for the use of ML (decision support, blended decision making, and complete automation) with implications for the kinds of tasks and systems involved. We further consider how the automation of one task might affect other interdependent tasks. Our main conclusion is that designers have a range of options for systems and that automation of tasks is not the same as automation of work.
Reference. Crowston and Bolici (2019). Impacts of Machine Learning on Work. In Proceedings of the 52nd Hawaii International Conference on System Sciences, 2019.
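To illustrate the blended decision-making pattern named in this abstract, the sketch below routes cases between the model and a person based on prediction confidence. The threshold value, function names, and example cases are our assumptions for exposition, not the paper's specification.

```python
# Illustrative sketch of the "blended decision making" pattern described above:
# the system acts on high-confidence predictions and refers the rest to a person.
# The 0.9 threshold and the names used here are assumptions for exposition.

def route_decision(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    """Return who handles this case under a blended human/ML arrangement."""
    if confidence >= threshold:
        return f"automated: apply '{prediction}'"
    return "deferred: refer case to a human decision maker"

# Hypothetical model outputs for three cases.
cases = [("approve", 0.97), ("deny", 0.72), ("approve", 0.91)]
for prediction, confidence in cases:
    print(route_decision(prediction, confidence))
```

Requiring human sign-off on every case recovers the decision-support pattern, while removing the referral path yields complete automation, which is one way to see the three patterns as points on a single design continuum.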