A Sociotechnical Approach to AI Policy
A new paper by Brian J. Chen and Jacob Metcalf, published by Data & Society, explains what it might mean to adopt a ‘sociotechnical’ perspective on AI policymaking.
A sociotechnical perspective is certainly broader than a technical perspective, but it is broader in a specific way: the “social” is not appended to the “technical” as an afterthought, but is integral to any explanation of the who, what, why, how, and when of a system’s outcomes. A sociotechnical perspective is most interested in what is “in the middle” between the social and the technical, as an explanatory mechanism for a system’s success or failure.
… Most importantly for policy: by seeing the “social” and “technical” together, a sociotechnical perspective expands the scope of responsibility for the consequences of technologies. It undermines the notion that developers are only responsible for their products up to the point of deployment and no further. It calls into question the belief that lawmakers’ role is to “protect innovation” from regulatory burdens, which effectively disclaims the responsibility to govern technologies. Treating technology and society as one coherent unit means that developers, deployers, and regulators are accountable not just for the mechanical workings of technical machines, but for how those machines integrate with, reshape, and sometimes harm social systems.