BDVA response to the public consultation on ethical and legal requirements for AI

BDVA welcomes the opportunity to provide feedback on the Inception Impact Assessment concerning a Proposal for a legal act of the European Parliament and the Council laying down requirements for Artificial Intelligence. As already highlighted in BDVA's response to the AI Whitepaper, BDVA strongly supports the development of a solid European approach to AI based on European values. 

BDVA's response to the AI Whitepaper already addressed one of the key themes of the present Inception Impact Assessment, namely that PRCS (Policy, Regulation, Certification, and Standards) issues are pivotal for building an AI ecosystem based on trust and are likely to become a primary area of activity for the new AI, Data and Robotics Partnership. Building on these considerations, BDVA wishes to underline a few important elements concerning both the challenges identified in the Inception Impact Assessment and the possible policy options.

Comments on the issues identified

  • European businesses see Industrial AI as more of an opportunity than a threat. However, the business, economic and societal context in which AI is applied needs to be considered: decisions are not made in a vacuum but within the socio-economic context of a society of humans which, in the European case, requires AI applications to be trustworthy. 
  • Businesses are aware that AI systems may be used in value chains, and see the possibility that liability issues may emerge, for example when an AI system bases its outputs on data created by another AI system from a value-chain partner. 
  • Requirements on AI algorithms may have to be scoped carefully. Usually, an algorithm is trained before it can be used operationally (it is then called a model), and in that case the training data also determines the behaviour of the AI system, as illustrated in the sketch after this list. Thus, requirements for AI systems may have to be extended to the training data as well. It may even be considered that the specific business process in which the AI system operates is part of the algorithm, or that the design criteria (including team composition and stated business goals) are in scope. This quickly becomes complex, so careful scoping is needed.

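The following minimal sketch (an illustrative addition, not part of the original BDVA response; the toy classifier and function name are hypothetical) shows why requirements scoped only to the algorithm would miss the training data: the same learning algorithm, trained on different data, produces models that behave differently on identical inputs.

    def train_threshold_classifier(samples):
        """A toy 'algorithm': classify a value as positive if it exceeds
        the mean of the training samples; the learned threshold is the 'model'."""
        threshold = sum(samples) / len(samples)
        return lambda x: x > threshold

    # Two value-chain partners train the *same* algorithm on different data.
    model_a = train_threshold_classifier([1.0, 2.0, 3.0])     # threshold = 2.0
    model_b = train_threshold_classifier([10.0, 20.0, 30.0])  # threshold = 20.0

    # Identical input, identical algorithm, different behaviour:
    print(model_a(5.0))  # True
    print(model_b(5.0))  # False

The point of the example is only that a model's behaviour cannot be assessed from the algorithm alone; any certification or regulatory requirement would also have to consider the data on which the model was trained.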
Comments on the policy options

  • Many stakeholders see certification of AI systems as a critical trust-building mechanism for the adoption of AI solutions. A methodological approach to certification could map best practice from other sectors onto AI, in tandem with the Standardization Landscape approach. Standards provide the foundational documentation for certification, regulation, legislation, compliance and, ultimately, enforcement.
  • Awareness of potential issues also needs to be raised. Voluntary certification and labelling schemes can have several benefits, both for purchasers of the certified AI system and for its producer. Such certification increases users' confidence in AI systems, as it indicates the producer's commitment to higher safety and quality standards. At the same time, however, voluntary certification should be approached carefully: without proper verification mechanisms it can result in a meaningless label and even encourage non-compliant behaviour. Voluntary labelling may make end users more aware, just as Nutriscore intends to make consumers more aware of the features of the food they are buying. When a voluntary labelling scheme is adopted, producers of AI will also become aware that end users may assess their products or services in a specific way, which creates opportunities for producers who want to be transparent about their products and services.
  • It is already acknowledged in the AI Whitepaper that regulatory intervention should be targeted and proportionate. Such an approach will reduce the risk of overregulation and, hence, of slowing down technological innovation. In the Whitepaper, the European Commission appeared to intend to regulate not all AI systems but only high-risk AI systems; systems that are not considered high-risk should only be covered by the more general legislation mentioned above. Such a risk-based approach should be maintained. 
  • Regulatory sandboxes may provide an excellent way to enable exploratory research while still being able to effectively reduce potential risks when AI is ‘released into the wild’.

The full BDVA response to this public consultation can be found here.