Artificial Intelligence (AI) systems are being built, integrated, and deployed by organizations across the globe. The key to ensuring these systems fulfill their potential without causing undue harm lies in recognizing and engaging with the broader contexts in which they operate. Policymakers and public stakeholders increasingly expect a sociotechnical approach to AI system construction and governance that safeguards people's rights and safety.
The value of social science expertise in developing safe, effective products has been demonstrated across various technical domains since the dawn of the technology industry. Sociotechnical experts—individuals trained in fields such as sociology, anthropology, political science, law, economics, psychology, public health, geography, social work, and history—have played a pivotal role in shaping technology design to meet human needs and behaviors.
Crucially, sociotechnical harms are not separate from safety considerations. Social science and humanities experts can offer valuable insights into effective governance of social systems; they can help engage leadership, internal product teams, users, and affected communities in decision-making more effectively; and they can promote the adoption of evaluation and measurement methods that incorporate a deeper understanding of human behavior. Ignoring this perspective could lead to unrealized technological promises or even significant harm.
Despite frequent references to sociotechnical considerations in discussions of AI systems, actionable explanations for practitioners are rare. This guide aims to clarify what constitutes a sociotechnical approach through examples of how such methods can be used within existing AI design, development, and deployment processes. Based on these examples, we provide ten actionable recommendations for AI teams and organizations seeking to better integrate expertise at the intersection of technology and societal dynamics into their design, development, deployment, and governance.
AI systems are not merely technical artifacts—they are embedded within social structures, organizations, and societies. Applying a sociotechnical lens to AI governance means understanding how AI-powered systems might interact with one another, with people, or with other processes within their deployment context in unexpected ways. Sociotechnical approaches consider the human and institutional dimensions that affect how AI is used and what impact it has. These approaches provide invaluable insights to teams developing AI-powered systems, helping technologists understand how users interact with products, how technology affects social groups and economies, and how technology impacts can emerge over time as AI systems and people co-evolve.
Incorporating the input of experts who can bring a more comprehensive, context-aware lens to the development and deployment of AI systems increases the likelihood that AI-powered applications will be suited to their intended deployment context and tuned to the needs of the intended user community. For more general-purpose AI systems, organizations creating and using these technologies will be better equipped to anticipate emerging opportunities and issues and to spot evolving dynamics as tools are integrated into existing contexts—critical factors informing AI governance.
Sociotechnical approaches draw on a varied toolkit of research methodologies, including qualitative interview analysis and ethnographic research, among other qualitative techniques that complement the quantitative methods familiar to most data scientists and AI engineers. In this way, sociotechnical experts are well positioned to act as a bridge between AI development teams and communities disproportionately affected by AI systems, translating community insights into actionable plans.
In the context of developing and deploying AI systems, sociotechnical approaches can be applied across three phases of work: ideation and design; building and implementation; and deployment and integration. The recommendations for AI developers and deployers are to:

1. Integrate team members with sociotechnical expertise into product teams.
2. Incorporate user experience and social science expertise in algorithm design.
3. Empower user researchers to engage in strategic areas of inquiry beyond product usability.
4. Involve team members with sociotechnical expertise in discussions around product metrics.
5. Apply mixed-methods approaches by leaning on sociotechnical experts.
6. Allocate sufficient resources for thorough qualitative investigation.
7. Mobilize contextual experts to gather public input.
8. Enable team members with social science expertise to share lessons learned.
9. Recognize that sociotechnical expertise can serve as constructive input as well as critique when assumptions are faulty.
10. Engage contextual experts in monitoring the impact of deployed AI systems.
AI is deeply intertwined with social systems, organizations, institutions, and culture. Sociotechnical approaches to AI system development and deployment are crucial for contending with the socially embedded nature of AI and for ensuring that these systems are safe and effective and that their risks have been appropriately managed. People with expertise in sociology, anthropology, political science, law, economics, and psychology already work in a wide range of technical and non-technical roles in AI companies but tend to be underused in AI system development efforts. Instead, they are often relegated to siloed roles in AI ethics or governance, compliance, or pre-deployment user interface testing, where they have limited input into early design and prototyping and limited authority to substantively modify product roadmaps.
By following these recommendations, companies can more meaningfully engage existing sociotechnical expertise throughout the design, development, evaluation, and deployment process. This will deepen their capacity to integrate sociotechnical considerations into AI governance efforts more broadly. Embracing these approaches will help practitioners both produce less harmful technologies and realize more of their systems' benefits for the social good. As policymakers, regulators, and the public increasingly expect those developing AI systems to foresee the impact of the technologies they are building, deeper integration of these sorts of sociotechnical approaches into the core efforts of AI development must become the default. Only through holistic and context-sensitive efforts can practitioners effectively protect people’s safety and rights in the face of all-too-rapid deployment of AI-powered technologies.