Introduction

With the emergence of AI, researchers have argued for more collaborations across disciplines to better understand AI in a social context (Theodorou and Dignum, 2020; Tomašev et al., 2020; Jobin et al., 2019; Perc et al., 2019; Sloane and Moss, 2019; Courtland, 2018). To bring about such collaborations, it will be of great importance to address the current gap between technological and social analyses of AI.

In the scientific community, research on AI is commonly divided into technological concerns (connected to natural sciences and engineering) and social concerns (connected to social sciences and humanities). These two strands have been largely disconnected from each other in research. Even when the social impact of AI is recognised, there is typically a sequential separation: AI is treated first as a technical object and only later, after it has been implemented, as something that may have social consequences.

This disconnection is contradictory and creates practical and analytical problems for the simple reason that technology is always already social (Latour and Woolgar, 1979). For example, if one attempted to dissect an AI system, it would be difficult to distinguish human material from nonhuman material. For the same reason, it is too simplistic to say that humans cooperate with material objects when they encounter AI. Technology can therefore not be approached as a neutral object, separated from things referred to as social. To better understand AI technology in the context in which it operates, the inseparability of these two concerns needs to be reflected in AI research.

This commentary paper discusses how some of the challenges of AI research relate to the gap between technological and social analyses, and it proposes steps ahead for future AI research to practically achieve prosperous collaborations.

Oversimplifying the task to be automated

A critical step in any AI development project is the identification of a task to be automated. This entails a clear understanding of both the technical and the social capabilities a system requires. These capabilities are often not developed in sync, which can lead to malpractice. For example, in Poland, an AI system was designed to improve the efficiency of Public Employment Services (PES) through algorithmic decision-making (Sztandar-Sztanderska and Zielenska, 2018). The purpose of the system was to profile unemployed individuals in order to determine which programmes they were eligible for, but the case counsellors did not inform the unemployed about what data was collected or how it was used (Sztandar-Sztanderska and Zielenska, 2020). The profiling system gathered information about the individual characteristics of the unemployed and then applied an algorithm to place each person into one of three categories, without those affected knowing which category they had been placed in or why (Niklas et al., 2015). The system essentially categorised the unemployed as good or bad investments, leading the Human Rights Commissioner to deem the algorithm’s decisions unjust, and in the end the system was banned (Kuziemski and Misuraca, 2020). A central issue here was that there was no clear plan for how human judgement and algorithmic decision-making could be joined and enacted in a public service context (Zejnilovic et al., 2020).

To understand why the system failed, scholars in the social sciences and humanities would not only turn to the system itself but also analyse the social context in which the system was set to operate (Akrich, 1992). The exclusion of such analysis from the development of the system reveals that the assignment (the specification of the system’s needs and purpose) was oversimplified. In this case, the social implications of the system were underestimated. It is reasonable to suggest that some of the problems that led to the system’s failure could have been detected prior to implementation if the developers had incorporated a social analysis conducted by researchers with expertise in ethics, social science, and law. Applying a social analysis of the planned technology during the idea and innovation stage would give developers a better chance of forecasting and managing potential social challenges. This would mean widening the assignment of AI development to incorporate a thorough analysis of the broader context in which the system is expected to operate.

The example from the PES in Poland highlights that the social impact of an AI system cannot be reduced to a separate issue to be dealt with after the technical development. In this regard, engineers need to recognise that, in designing AI models, they are involved in social practices that shape society. To improve the chances of producing acceptable and trusted AI systems, a lesson for future projects may therefore be to incorporate social analyses of the technical design from the outset.

Reducing system acceptance to a technical question

Recent studies in radiology have shown how AI can decrease bias (Miller, 2018) and even outperform human radiologists in medical-image analysis (McKinney et al., 2020; Ardila et al., 2019; Topol, 2019). Although this technical progress is promising, developers struggle to incorporate such innovations into practice. For example, it is not uncommon for AI systems to be so complicated that even their developers cannot explain precisely how their creation reached a specific result (Riley, 2019). It is therefore not surprising that many systems are still black-boxed to their intended users (Castelvecchi, 2016). A common user request is the ability to assess an AI system’s analysis (Ostherr, 2020; He et al., 2019; Rahwan et al., 2019; Vinuesa et al., 2020). In response to such demands, data scientists and developers of AI technologies have been asked to prioritise the explainability and transparency of AI systems (Watson et al., 2019). Here it is important not to limit the challenge of producing acceptable systems to improving explainability or transparency. To create truly acceptable systems, it is equally important that human–AI interaction be given social attention. For example, no AI system operates in complete isolation from humans, and its functionality therefore depends, to some extent, on being tolerated by humans. Building systems that users can accept involves multifaceted issues of coordinating human–AI interaction.
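To make concrete what such explainability work can involve in practice, the sketch below shows one common post-hoc technique, permutation feature importance, which estimates how heavily a trained model relies on each input feature. The dataset, model, and feature names are illustrative assumptions for this commentary and are not drawn from the radiology systems cited above.

```python
# A minimal sketch of post-hoc explainability via permutation feature importance.
# The dataset and model are illustrative stand-ins, not the systems discussed in the text.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load an openly available tabular dataset as a placeholder for real clinical data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# large drops indicate features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features as a simple, human-readable explanation.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Even a complete ranking of this kind, however, says little about whether intended users can trust or comfortably work with a system, which is precisely the point the Probot example below illustrates.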

An illuminating example is the Probot, a surgical robot capable of autonomously carrying out tasks during prostate resection (Harris et al., 1997a; Cosio and Davies, 1999). First, surgeons place the system in the correct starting position, and the system then autonomously carries out tasks such as removing conical segments of tissue, with the surgeon’s role reduced to controlling an emergency stop button (Mei et al., 1996; Harris et al., 1997b). The Probot had been tried on patients with satisfactory results (Mei et al., 1999), and surgeons initially regarded such automation features as desirable (Rodriguez y Baena and Davies, 2009). However, problems arose when it came to the implementation of the Probot. The surgeons felt sidelined, as they were largely reduced to observers, and therefore expressed unease with the system (Yip and Das, 2017). The system was ultimately rejected because surgeons perceived a greater need than anticipated for continuous interaction with their patients during procedures (Rodriguez y Baena and Davies, 2009).

This example illustrates how the acceptance of a system entails an understanding not only of its technical capabilities but also of the multifaceted social concerns that may arise from it. While it is indeed important to improve explainability and transparency in AI systems so that their intended users can fully understand them (especially in clinical practice), the Probot example shows that such understanding in no way guarantees system acceptance. Other factors, such as whether the people working with the system feel comfortable, played a significant role in the Probot’s failure. Conversely, users may accept and use other AI systems without fully understanding their technical design. The scope of analysis regarding what makes an AI system acceptable therefore needs to be widened beyond transparency and explainability. As this example demonstrates, gaining users’ acceptance of an AI system in their practice is an achievement that reaches well beyond the technical capabilities of the system. To address the multifaceted issue of system acceptance, future development projects could engage in more thorough social analyses of trials carried out in the environment where the AI system is going to operate. Incorporating a social analysis of users’ interaction with the technology, in real-life settings, could generate important insights into what is required for system acceptance.

Assuming that practices are stable

An AI system often changes the premises upon which a practice is based. It is therefore somewhat misleading to think of AI systems as merely tools. Take stock trading as an example. The introduction of AI systems has entirely reshaped the conditions of the stock market, and current stock-trading practices would not exist without AI systems (Callon and Muniesa, 2005). The fact that human traders now work alongside automated trading systems has significantly changed how trading is carried out (Rundle, 2019; Brynjolfsson and McAfee, 2018). The role of financial analysts has also changed; in addition to analysing how different human actors may respond to different events, they now need to anticipate how the automated trading systems will act (Lenglet, 2011). Since such systems have become central, fully fledged actors on the stock market, they cannot be viewed simply as passive tools; trading algorithms are actively involved in shaping the market (Callon and Muniesa, 2005). The stock market example illustrates how people may adjust their behaviour to AI systems, and it illuminates how AI technology may shift the socio-technical relations in a practice.

Such changes to socio-technical relations are also a concern in the training of AI systems. Models trained on data about how people behave in a practice without AI do not reflect how people may behave once a new AI system has been implemented. That is, the underlying reality for an AI system is subject to change as soon as the system is introduced into the practice. An important lesson here is that since AI systems intervene in the underlying conditions of a practice, strategies are needed to cope with the fact that the validity of training data is undermined the moment a system is implemented in a real-life setting. In other words, people may change their behaviour and reasoning in response to the implemented AI system, which challenges the relevance of the training data.
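As a minimal illustration of one such coping strategy, assuming synthetic data rather than any of the systems discussed here, the sketch below monitors whether a feature observed after deployment still follows the distribution seen in the training data, using a two-sample Kolmogorov–Smirnov test; the drift threshold and the simulated behaviour change are hypothetical choices, not a validated procedure.

```python
# A minimal sketch of post-deployment drift monitoring on synthetic data.
# The simulated "behaviour change" and the significance threshold are illustrative assumptions.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values observed before deployment (the training data)...
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)

# ...and after deployment, where people have adapted their behaviour to the system.
live_feature = rng.normal(loc=0.6, scale=1.2, size=5000)

# A two-sample Kolmogorov-Smirnov test flags whether the two distributions differ.
statistic, p_value = ks_2samp(training_feature, live_feature)

if p_value < 0.01:  # illustrative threshold
    print(f"Drift detected (KS statistic = {statistic:.3f}); "
          "training data may no longer reflect the practice.")
else:
    print("No significant drift detected.")
```

Such monitoring can flag that retraining or renewed analysis is needed, but it cannot by itself explain why people changed their behaviour, which is where the continuous social analysis discussed below comes in.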

Because practices are not stable but subject to change, AI development projects should incorporate a continuous analysis of changes to the practice. This means that trials need to be carried out in real-life settings, since it is in the meeting between the imagined user and the actual user that one can explore how the AI comes into play. A social analysis during the long-term implementation of an AI system can attend to changes in socio-technical relations. By recognising that AI systems may change the practices for which they are built, future projects stand a better chance of ensuring that AI systems bring about desirable changes.

Directions forward

The three examples draw attention to some of the problems that arise when technological and social analyses are disconnected: (1) oversimplifying the task to be automated, (2) reducing system acceptance to a technical question, and (3) assuming that practices are stable. To overcome such issues in future research and practice, this commentary paper suggests that systematic and substantial social analyses should be integral to future projects that develop AI systems, from early innovation, through technical design, to long-term implementation. The paper thus highlights the need for projects developing AI systems to continuously synchronise their attention to social and technological concerns, investigating how their technology is built and how it could be built differently, with different social consequences. Exploring the connections between an AI system’s technical design and its social implications will be key to ensuring feasible and sustainable AI systems that benefit society and that people want to use.

Regarding directions forward, in which human and AI work are to be joined in practice, the disciplines involved need to be better prepared and collaborations need to reach across disciplinary boundaries. Funders and universities alike play an important role in supporting and facilitating such efforts. What is needed includes university initiatives that are truly multi-disciplinary and span the boundaries between the natural and social sciences. Such initiatives are important for creating environments through which researchers from different fields can connect, initiate collaborations, and work together. To meet such future needs, engineering, the natural sciences, and the social sciences will be required to work together in new ways. The great potential here lies in coordinating different types of expertise, such as in-depth theoretical knowledge in the social sciences with a wide range of natural sciences and engineering knowledge. It will simply not be enough for engineers, for example, to learn basic skills in the social sciences, or for social scientists to learn the basics of algorithms. Researchers’ leading-edge expertise from a wide range of knowledge areas should be woven together to combine and cross-fertilise ideas and insights from disciplines across the sciences (such as law, engineering, social sciences, political science, philosophy, psychology, and anthropology). It is in such environments that new theories and methods can be developed. Projects that find new ways to connect technological and social analyses will be better equipped to understand and influence how AI changes society.

This commentary paper has focused on drawing attention to an analytical gap in AI development and a related scarcity of multi-disciplinary research. The identified issues call for more research and further discussion on how multi-disciplinary research collaborations could be coordinated to approach AI development more comprehensively.