A European approach to Artificial Intelligence

Balancing Innovation and Ethics

Europe’s AI strategy balances innovation and ethics: with a focus on transparency, privacy, and fairness, European initiatives aim to harness AI’s potential while upholding human values. This approach emphasizes collaboration between governments, industry, and academia to ensure responsible AI development for societal benefit.


Sources:

  • https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
  • All the content presented below was taken from official European Union sources, with the exception of an AI-generated definition.

The European AI Strategy aims to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy. This objective translates into the European approach to excellence and trust, implemented through concrete rules and actions.

In April 2021, the Commission presented its AI package, including:

  • a Communication on fostering a European approach to AI;
  • a review of the Coordinated Plan on Artificial Intelligence (with EU Member States);
  • its proposal for a Regulation laying down harmonised rules on AI (the AI Act).


A European approach to excellence in AI

Four main principles are described here to ensure excellence in AI and strengthen Europe’s potential to compete globally:

  1. enabling the development and uptake of AI in the EU;
  2. making the EU the place where AI thrives from the lab to the market;
  3. ensuring that AI works for people and is a force for good in society;
  4. building strategic leadership in high-impact sectors.

The EU will invest heavily in AI through the Horizon Europe and Digital Europe programmes. The Commission plans to invest €1 billion per year in AI and to mobilise additional investments from the private sector and the Member States, in order to reach an annual investment volume of €20 billion over the course of the digital decade. The Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

[!ai]+ AI

The Recovery and Resilience Facility (RRF) is a financial instrument developed by the European Union (EU) to support member states in recovering from the economic and social impacts of the COVID-19 pandemic. It is part of the larger EU Recovery Plan, which aims to build a greener, more digital, and resilient Europe. The RRF provides significant financial assistance through grants and loans to help countries invest in various areas such as healthcare systems, education, research and innovation, climate change mitigation, digitalization, and job creation. The facility seeks to not only address immediate recovery needs but also promote long-term economic transformation and resilience across the EU.

Access to high-quality data is an essential factor in building high-performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act, the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.


A European approach to trust in AI

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  1. a European legal framework for AI that upholds fundamental rights and addresses the safety risks specific to AI systems;
  2. a civil liability framework, adapting liability rules to the digital age and AI;
  3. a revision of sectoral safety legislation (e.g. the Machinery Regulation and the General Product Safety Directive).

The legal framework for AI, or AI Act, takes a clear, easy-to-understand approach based on four different levels of risk: minimal risk, high risk, unacceptable risk, and specific transparency risk. It also introduces dedicated rules for general-purpose AI models.
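
To make this four-tier structure concrete, the sketch below models the risk levels and their obligations in Python. The tier names come from the AI Act itself, but the example systems, the mapping, and the helper function are hypothetical simplifications, loosely based on the obligations described later in this document.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers named in the AI Act."""
    UNACCEPTABLE = "unacceptable risk"                    # prohibited practices
    HIGH = "high risk"                                    # strict requirements
    SPECIFIC_TRANSPARENCY = "specific transparency risk"  # disclosure duties
    MINIMAL = "minimal risk"                              # no mandatory obligations

# Hypothetical examples, drawn from categories mentioned in this document.
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskLevel.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.SPECIFIC_TRANSPARENCY,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations(level: RiskLevel) -> str:
    """A rough, non-authoritative summary of what each tier implies."""
    return {
        RiskLevel.UNACCEPTABLE: "banned from the EU market",
        RiskLevel.HIGH: "risk management, data quality, logging, human oversight",
        RiskLevel.SPECIFIC_TRANSPARENCY: "users must be told they face an AI system",
        RiskLevel.MINIMAL: "no mandatory obligations; voluntary codes of conduct",
    }[level]

for system, level in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {level.value} -> {obligations(level)}")
```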


Important milestones


Milestones in depth

Each milestone is examined below, with the main points that resulted from it summarised. Milestones appear in chronological order to facilitate understanding and to draw up a roadmap. Events of lesser relevance are omitted.

Press release: AI expert group and European AI alliance (March 2018)

  • The EU recognises the need for guidelines on AI ethics.
    • From better healthcare to safer transport and more sustainable farming, artificial intelligence (AI) can bring major benefits to our society and economy. And yet, questions are raised about the impact of AI on the future of work and on existing legislation. This calls for a wide, open and inclusive discussion on how to use and develop artificial intelligence in a way that is both successful and ethically sound.
  • To study and understand this in more detail, the European Commission opened applications to join an expert group on artificial intelligence. This group will:
    • Advise the Commission on how to build a broad and diverse community of stakeholders in a “European AI Alliance”
    • Support the implementation of the upcoming European initiative on artificial intelligence
    • Come forward by the end of the year with draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights.

European Commission: Coordinated plan on AI (December 2018)

  • This plan proposes joint actions for closer and more efficient cooperation between Member States, Norway, Switzerland and the Commission in four key areas: increasing investment, making more data available, fostering talent and ensuring trust.
  • European Commission Vice-President Andrus Ansip said: “I am pleased to see that European countries have made good progress. We agreed to work together to pool data – the raw material for AI – in sectors such as healthcare to improve cancer diagnosis and treatment. We will coordinate investments: our aim is to reach at least €20 billion of private and public investments by the end of 2020. This is essential for growth and jobs. AI is not a nice-to-have, it is about our future.”
  • Commissioner for Digital Economy and Society Mariya Gabriel added: “Like electricity in the past, AI is transforming the world. Together with Member States we will increase investments for rolling out AI into all sectors of the economy, support advanced skills and maximise the availability of data. The coordinated action plan will ensure that Europe reaps the benefits of AI for citizens and businesses and competes globally, while safeguarding trust and respecting ethical values.”
  • Representatives of Member States, Norway, Switzerland and the Commission have agreed to:
    • Maximise investments through partnerships: at least €20 billion of public and private investments in research and innovation in AI from now until the end of 2020, and more than €20 billion per year of public and private investments over the following decade. Joint actions to achieve these investment objectives include national AI strategies, a new European AI public-private partnership, a new scale-up fund, and developing and connecting world-leading centres for AI.
    • Create European data spaces: Large, secure and robust datasets need to be available for AI technology to be developed. Together with European countries, the Commission will create common European data spaces to make data sharing across borders seamless, while ensuring full compliance with the General Data Protection Regulation. Omega-x is a good example of this.
    • Nurture talent, skills and life-long learning: the Commission, together with European countries, will support advanced degrees in AI through, for example, dedicated scholarships. The Commission will also continue to support digital skills and lifelong learning for the whole of society, especially for the workers most affected by AI, as detailed in its AI strategy.
    • Develop ethical and trustworthy AI: AI raises new ethical questions, for example potentially biased decision-making. To create trust, which is necessary for societies to accept and use AI, the coordinated plan aims to develop a technology which respects fundamental rights and ethical rules. A European group of experts, representing academia, business, and civil society, is working on ethics guidelines for the development and use of AI.

Ethics guidelines for trustworthy AI (April 2019)

  • According to the Guidelines, trustworthy AI should be:
    1. Lawful - respecting all applicable laws and regulations
    2. Ethical - respecting ethical principles and values
    3. Robust - both from a technical and social perspective
  • The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy:
    1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
    2. Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible.
    3. Privacy and data governance: the user is always the data owner. Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
    4. Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned.
    5. Diversity, non-discrimination and fairness: unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
    6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly.
    7. Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications.

The first European AI Alliance Assembly (June 2019)

  • The event marked the first anniversary of the creation of the European AI Alliance platform. It brought together stakeholders, including citizens and policymakers, to discuss the latest achievements in AI policy as well as future perspectives of the European Strategy on Artificial Intelligence, including its impact on the economy and society.
  • The European AI Alliance is a forum that engages more than 3000 European citizens and stakeholders in a dialogue on the future of AI in Europe. After launching the European AI Strategy in April 2018, the Commission formed the High-Level Expert Group on AI (AI HLEG) to draft Ethics Guidelines for Artificial Intelligence and AI Policy and Investment Recommendations. In parallel, the European AI Alliance, a multi-stakeholder forum, was established to provide diverse societal input to the AI HLEG and EU policy-making. Together, they have played crucial roles in shaping the European approach to artificial intelligence.
  • The event can be seen here: https://www.youtube.com/watch?v=jkXwAxhoXX8

Pilot the Assessment List of the Ethics Guidelines for Trustworthy AI (December 2019)

  • The Assessment List for Trustworthy AI is the operational tool of the Ethics Guidelines for Trustworthy AI, aiming to ensure that users benefit from Artificial Intelligence (AI) without being exposed to unnecessary risks. This list, first presented with the Ethics Guidelines in June 2019, was revised following a piloting process that involved more than 350 stakeholders.
  • Feedback on the assessment list was given in three ways:
    1. An online survey filled in by participants registered to the process;
    2. The sharing of best practices on how to achieve trustworthy AI through the European AI Alliance;
    3. A series of in-depth interviews with selected organisations that had expressed interest.
  • This feedback helped better understand how the assessment list can be implemented within an organisation. It also indicated where specific tailoring of the assessment list was still needed, given AI’s context-specificity. The piloting phase took place from 26 June until 1 December 2019.
  • Based on the feedback received, the High-Level Expert Group on AI proposed a revised version of the assessment list, accompanied by a prototype web-tool, to support AI developers and deployers in developing Trustworthy AI.

White Paper on Artificial Intelligence: Public consultation towards a European approach for excellence and trust (February 2020)

  • Aiming to promote the uptake of artificial intelligence (AI) while at the same time addressing the risks associated with its use, the European Commission proposed a White Paper with policy and regulatory options “towards an ecosystem for excellence and trust”. This document was published on 19 February 2020, along with an online survey, focusing on three distinct topics:
    1. Specific actions for the support, development and uptake of AI across the EU economy and public administration;
    2. Options for a future regulatory framework on AI;
    3. Safety and liability aspects of AI (as outlined in the relevant report).

Final Assessment List for Trustworthy Artificial Intelligence (July 2020)

  • Following a piloting process in which over 350 stakeholders participated, an earlier prototype of the list was revised and translated into a tool to support AI developers and deployers in developing Trustworthy AI.
  • The tool supports the actionability of the key requirements outlined by the Ethics Guidelines for Trustworthy Artificial Intelligence (AI), presented to the European Commission by the High-Level Expert Group on AI (AI HLEG) in April 2019. The Ethics Guidelines introduced the concept of Trustworthy AI, based on seven key requirements:
    1. human agency and oversight
    2. technical robustness and safety
    3. privacy and data governance
    4. transparency
    5. diversity, non-discrimination and fairness
    6. environmental and societal well-being and
    7. accountability
  • Through the Assessment List for Trustworthy AI (ALTAI), AI principles are translated into an accessible and dynamic checklist that guides developers and deployers of AI in implementing such principles in practice.
  • The assessment list for Trustworthy Artificial Intelligence (ALTAI) can be seen here: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=68342. The ALTAI web-based tool is available here: https://futurium.ec.europa.eu/en/european-ai-alliance/pages/welcome-altai-portal
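
As an illustration only, the sketch below represents the seven requirements as a simple self-assessment checklist in Python. The requirement names come from the Guidelines, but the data structure and the paraphrased questions are hypothetical; the real ALTAI list is far more detailed and context-dependent.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One of the seven Trustworthy AI requirements, with yes/no questions."""
    name: str
    questions: list[str]
    answers: dict[str, bool] = field(default_factory=dict)

    def completed(self) -> bool:
        # A requirement is covered once every question has been answered.
        return all(q in self.answers for q in self.questions)

# Paraphrased, hypothetical questions; not the official ALTAI wording.
checklist = [
    Requirement("Human agency and oversight",
                ["Is there a human-in-the-loop or human-on-the-loop mechanism?"]),
    Requirement("Technical robustness and safety",
                ["Is there a fallback plan if the system fails?"]),
    Requirement("Privacy and data governance",
                ["Is access to the data legitimised and its integrity ensured?"]),
    Requirement("Transparency",
                ["Can the system's decisions be traced and explained?"]),
    Requirement("Diversity, non-discrimination and fairness",
                ["Has the system been tested for unfair bias?"]),
    Requirement("Societal and environmental well-being",
                ["Has the environmental impact been assessed?"]),
    Requirement("Accountability",
                ["Can the algorithms, data and design processes be audited?"]),
]

# Example: answer one question, then list the requirements still open.
checklist[0].answers["Is there a human-in-the-loop or human-on-the-loop mechanism?"] = True
print([r.name for r in checklist if not r.completed()])
```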

Second European AI Alliance Assembly (October 2020)

  • This edition had a particular focus on the European initiative to build an Ecosystem of Excellence and Trust in Artificial Intelligence (AI).
  • It hosted an online, high-level and multi-participatory forum to discuss:
    • The results of the consultation on the AI White Paper, launched by the European Commission from 19 February 2020 to 14 June 2020, and the next policy and legislative steps;
    • The finalised deliverables of the High-Level Expert Group on AI (AI HLEG);
    • The future of the European AI Alliance as a multi-stakeholder forum that reflects the wider societal, economic and technical aspects of AI in the European policymaking process.
  • Watch the plenaries:
    • 1st plenary: https://www.youtube.com/watch?v=zfeQcM6SOAU
    • 2nd plenary: https://www.youtube.com/watch?v=NIsVUI1lovU

Proposal for a Regulation laying down harmonised rules on artificial intelligence (April 2021)

  • The Commission has proposed the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.
  • It aims to address risks of specific uses of AI, categorising them into 4 different levels: unacceptable risk, high risk, limited risk, and minimal risk.
  • The AI Regulation will make sure that Europeans can trust the AI they are using. The Regulation is also key to building an ecosystem of excellence in AI and strengthening the EU’s ability to compete globally.

Coordinated Plan on Artificial Intelligence (April 2021)

  • The 2021 Coordinated Plan on Artificial Intelligence is the next step in creating EU global leadership in trustworthy AI.
  • It sets out the strategy to:
    • accelerate investments in AI technologies to drive resilient economic and social recovery aided by the uptake of new digital solutions;
    • act on AI strategies and programmes by implementing them fully and in a timely manner, ensuring that the EU reaps the full benefits of first-mover advantages;
    • align AI policy to remove fragmentation and address global challenges.
  • It will do so by:
    • setting enabling conditions for AI development and uptake in the EU;
    • making the EU the place where excellence thrives from the lab to market;
    • ensuring that AI works for people and is a force for good in society;
    • building strategic leadership in high-impact sectors.
  • The whole plan can be seen in more detail here: https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review

High-Level Conference on AI: From Ambition to Action (November 2021)

  • The event followed up on the Proposal for a Regulation laying down harmonised rules on AI and the updated Coordinated Plan on AI, published by the European Commission in April 2021, as well as the previous editions of the European AI Alliance Assembly.
  • It featured experts and policy makers from EU Member States, third countries, international organisations, academia, civil society as well as business representatives.
  • Due to the COVID-19 pandemic, this was a fully remote event.
  • Watch all the sessions:
    • Day 1 - on Governance: https://youtu.be/wNTa9Hk9N60
    • Day 1 - on Standardization: https://youtu.be/-sFjVUoPGug
    • Day 1 - on Liability: https://www.youtube.com/watch?v=79xfrc0uXus
    • Day 2 - on Green AI: https://youtu.be/YAEaTcY2OCY
    • Day 2 - on Financing AI innovation: https://youtu.be/8VL6YElnkdY

Launch event for the Spanish Regulatory Sandbox on Artificial Intelligence (June 2022)

  • ‘Bringing the AI Regulation Forward’ was an online event to launch Spain’s pilot for a Regulatory Sandbox on Artificial Intelligence (AI).
  • In April 2021, the European Commission presented a Proposal for a Regulation laying down harmonised rules on AI (https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai). While the legal text is being negotiated by the European Parliament and the Council of the EU, the Government of Spain will deploy an AI Sandbox to put the proposed requirements for high-risk AI systems into practice.
  • The Spanish AI Sandbox will provide practical experience through applying the various features of the proposal to specific AI projects (e.g. requirements, conformity assessments and certain post-market activities) and making guidelines, toolkits and good-practice materials accessible to all. Such actions are expected to be useful for the development of harmonised European standards and the other preparatory work at national and EU level.
  • The session can be seen here: https://youtu.be/mWYGac3DJUw

Liability Rules for Artificial Intelligence (September 2022)

  • In its White Paper on Artificial Intelligence, the Commission undertook to promote the uptake of artificial intelligence and to address the risks associated with certain of its uses.
  • In the Report on Artificial Intelligence Liability, the Commission identified the specific challenges posed by artificial intelligence to existing liability rules.
  • The purpose of the AI Liability Directive proposal is to improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.

Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights (December 2022)

  • Its aim is to ensure that artificial intelligence (AI) systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values.
  • Definition of an AI system
    • To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems, the Council’s text narrows down the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches.
  • Prohibited AI practices
    • The text extends to private actors the prohibition on using AI for social scoring. Furthermore, the provision prohibiting the use of AI systems that exploit the vulnerabilities of a specific group of persons now also covers persons who are vulnerable due to their social or economic situation.
    • As regards the prohibition of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, the text clarifies the objectives where such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore be exceptionally allowed to use such systems.
  • Classification of AI systems as high-risk
    • Regarding the classification of AI systems as high-risk, the text adds a horizontal layer on top of the high-risk classification, to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured.
  • General purpose AI systems
    • New provisions have been added to account for situations where AI systems can be used for many different purposes (general purpose AI), and where general purpose AI technology is subsequently integrated into another high-risk system.
    • High-risk AI system rules may also extend to general-purpose AI systems. Instead of applying these rules directly, an implementing act would outline how they should be applied to general-purpose AI systems. This would be determined through consultation, impact assessment, and consideration of system characteristics, value chain, feasibility, and market trends.
  • Transparency and other provisions in favour of the affected persons
    • A newly added provision places an obligation on users of an emotion recognition system to inform natural persons when they are being exposed to such a system.
    • The text also makes it clear that a natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the AI Act and may expect that such a complaint will be handled in line with the dedicated procedures of that authority.
  • Measures in support of innovation
    • Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems, should also allow for testing of innovative AI systems in real world conditions.
    • New provisions have been added allowing unsupervised real-world testing of AI systems, under specific conditions and safeguards. In order to alleviate the administrative burden for smaller companies, the text includes a list of actions to be undertaken to support such operators, and it provides for some limited and clearly specified derogations.

European Parliament’s negotiating position on AI Act (June 2023)

  • The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.
  • MEPs (members of the European Parliament) expanded the list of prohibited practices to include bans on intrusive and discriminatory uses of AI, such as:
    • “Real-time” remote biometric identification systems in publicly accessible spaces;
    • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
    • biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
    • predictive policing systems (based on profiling, location or past criminal behaviour);
    • emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
    • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
  • MEPs ensured the classification of high-risk applications will now include AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections and in recommender systems used by social media platforms (with over 45 million users) were added to the high-risk list.
  • Generative AI systems based on foundation models, like ChatGPT, would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.
  • MEPs added exemptions for research activities and AI components provided under open-source licenses. The new law promotes so-called regulatory sandboxes, or real-life environments, established by public authorities to test AI before it is deployed.
  • Finally, MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
  • A press conference on this was held on 14 June and can be seen here: https://multimedia.europarl.europa.eu/en/webstreaming/press-conference-by-roberta-metsola-ep-president-brando-benifei-and-dragos-tudorache-rapporteurs-on_20230614-1400-SPECIAL-PRESSER

Commission welcomes political agreement on Artificial Intelligence Act (December 2023)

  • The new rules will be applied directly in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach:
    • Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal risk applications such as AI-enabled recommender systems or spam filters will benefit from a free-pass and absence of obligations, as these systems present only minimal or no risk for citizens’ rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.
    • High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Examples of such high-risk AI systems include certain critical infrastructures for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk.
    • Specific transparency risk: when employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI-generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal sketch of such marking follows this list).
  • General purpose AI:
    • The AI Act introduces specific rules for general-purpose AI models to ensure transparency. Powerful models with potential risks will have extra obligations, including risk management, incident monitoring, model evaluation, and adversarial testing. Industry, scientists, civil society, and others will develop codes of practices to implement these obligations.
    • National authorities will oversee rule implementation at the country level, and a new European AI Office in the European Commission will coordinate at the European level. This AI Office will enforce rules on general-purpose AI models and serve as a global reference point, making it the first body globally to enforce binding AI rules. An independent expert panel will play a key role in issuing alerts on risks and contributing to model classification and testing.
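
To illustrate the machine-readable marking obligation mentioned under “specific transparency risk” above, here is a hypothetical Python sketch that attaches a provenance record to generated content. The field names and format are invented for illustration; they do not correspond to any official standard or to the technical specifications the Act may eventually rely on.

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, generator: str) -> dict:
    """Wrap generated content in an invented, machine-readable provenance record."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,    # the disclosure itself
            "generator": generator,  # which system produced the content
            "created_at": datetime.now(timezone.utc).isoformat(),
            # The hash lets a consumer check the content was not altered after marking.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }

record = mark_as_ai_generated("An AI-written summary of the AI Act.", "example-model-v1")
print(json.dumps(record, indent=2))

# A downstream consumer can detect AI-generated content mechanically:
assert record["provenance"]["ai_generated"] is True
```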