Context:
Recently, members of the European Parliament reached a preliminary deal on a new draft of the European Union’s ambitious Artificial Intelligence Act.
Aims of the AI Act:
- To bring transparency, trust, and accountability to AI.
- To create a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU.
- To address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.
- To strike a balance between promoting the uptake of AI while mitigating or preventing harms associated with certain uses of the technology.
What does the draft document entail?
- Definition of AI: AI is software developed with one or more of the techniques listed in the Act that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
- Classification of AI technologies: on the basis of the level of risk they pose to the “health and safety or fundamental rights” of a person.
There are four risk categories in the Act: unacceptable, high, limited, and minimal.
- Unacceptable risk category: The Act prohibits the use of technologies in this category.
These include:
- the use of real-time facial and biometric identification systems in public spaces;
- systems of social scoring of citizens by governments leading to “unjustified and disproportionate detrimental treatment”;
- subliminal techniques to distort a person’s behaviour;
- technologies which can exploit the vulnerabilities of the young, the elderly, or persons with disabilities.
- High-risk category: The Act prescribes a number of pre- and post-market requirements for developers and users of such systems.
Some systems falling under this category:
- biometric identification and categorisation of natural persons,
- AI used in healthcare, education, employment (recruitment), law enforcement, and justice delivery systems,
- tools that provide access to essential private and public services.
- Database of high-risk AI systems: The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies, or those under development, can be included if they meet the high-risk criteria.
- Conformity assessments: Before high-risk AI systems can reach the market, they will be subject to strict reviews, known in the Act as ‘conformity assessments’: algorithmic impact assessments that analyse the data sets fed to AI tools, their biases, and how users interact with the system.
- Limited and minimal risk category: AI systems in the limited and minimal risk categories, such as spam filters or video games, are allowed to be used subject to a few requirements, such as transparency obligations.
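The four-tier classification above can be pictured as a simple lookup from a use case to a risk tier and its corresponding obligation. The sketch below is purely illustrative: the tier names and examples are taken from the Act as summarised above, but the `classify()` helper and the mapping of examples are hypothetical, not anything defined by the legislation.

```python
# Illustrative sketch of the AI Act's four-tier risk taxonomy.
# The tiers and examples mirror the summary above; the mapping
# and classify() helper are assumptions for illustration only.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # pre- and post-market requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


# Examples drawn from the lists in this summary (illustrative mapping).
EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "AI used in recruitment": RiskTier.HIGH,
    "AI in justice delivery systems": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known example; default to MINIMAL."""
    return EXAMPLES.get(use_case, RiskTier.MINIMAL)
```

For instance, `classify("AI used in recruitment")` returns the high-risk tier, which under the Act would trigger pre- and post-market requirements such as conformity assessments.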
News Source: The Hindu