Recital 141 Conditions for testing high-risk AI systems


In order to accelerate the process of development and the placing on the market of the high-risk AI systems listed in an annex to this Regulation, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real world conditions, without participating in an AI regulatory sandbox.

However, in such cases, taking into account the possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient guarantees and conditions are introduced by this Regulation for providers or prospective providers. Such guarantees should include, inter alia, requesting informed consent of natural persons to participate in testing in real world conditions, with the exception of law enforcement where the seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in such testing under this Regulation is distinct from, and without prejudice to, consent of data subjects for the processing of their personal data under the relevant data protection law.

It is also important to minimise the risks and enable oversight by competent authorities and therefore require prospective providers to have a real-world testing plan submitted to the competent market surveillance authority, register the testing in dedicated sections in the EU database subject to some limited exceptions, set limitations on the period for which the testing can be done and require additional safeguards for persons belonging to certain vulnerable groups, as well as a written agreement defining the roles and responsibilities of prospective providers and deployers and effective oversight by competent personnel involved in the real world testing. Furthermore, it is appropriate to envisage additional safeguards to ensure that the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded and that personal data is protected and is deleted when the subjects have withdrawn their consent to participate in the testing, without prejudice to their rights as data subjects under Union data protection law.

As regards transfer of data, it is also appropriate to envisage that data collected and processed for the purpose of testing in real-world conditions should be transferred to third countries only where appropriate and applicable safeguards under Union law are implemented, in particular in accordance with bases for transfer of personal data under Union law on data protection, while for non-personal data appropriate safeguards are put in place in accordance with Union law, such as Regulations (EU) 2022/868(42) and (EU) 2023/2854(43) of the European Parliament and of the Council.

(42) Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act) (OJ L 152, 3.6.2022, p. 1).

(43) Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act) (OJ L, 2023/2854, 22.12.2023, ELI: http://data.europa.eu/eli/reg/2023/2854/oj).
