Co-regulation or Capitulation? Addressing Conflicts Arising from AI and Standardization
While an enormous number of business models and opportunities based on artificial intelligence (AI) make it an essential technology for competitiveness in the digital age, risks arise as well, as recognized globally in a vast number of policy statements. Adequate regulation that reconciles high-level ethics, dynamic technological progress and enforceable rules calls for cooperation, which can be realized through legally referenceable technical standards. Such co-regulation reduces friction between static rules and dynamic technology and allows for a flexible legal framework for AI. But standard-setting is subject to strong competition and is not without conflict. The implications of this competition for AI standards, and of differing ethics and values for AI standardization, are not yet clear. Competition driven by diverging ethical approaches and ambitions means that standardization is more than a merely technical issue. While this aspect is reflected in part by the AI standards presented in this paper, important specifications and guidance for foreseeable collisions and conflicts are missing. Emerging regulation of AI has to account for this, and further concretization of the structure, competencies and boundaries of co-regulation is necessary. This paper pursues these issues with a focus on conflict and convergence in the regulatory framework of AI applications across jurisdictional boundaries. It provides insight into emerging AI standards and the obstacles to cooperation posed by national approaches to AI, thereby offering a starting point for further research on regulatory frameworks that incorporate AI standards as an instrument of co-regulation. The paper shows that standards already form an important instrument in AI regulation and outlines three approaches to advancing this development, indicating that the challenges of co-regulating AI can most likely be mastered.