October 7, 2024

Patent Drafting with AI: An EU AI Act Perspective

Artificial intelligence (AI) is already having a substantial impact on the practice of Intellectual Property (IP) law, with platforms such as Solve Intelligence's Patent Copilot assisting attorneys in drafting and prosecuting patent applications. These AI platforms can help patent attorneys realise efficiency gains and produce high-quality patents.

Until earlier this year, the use of AI was largely unregulated across the world. The picture has now changed, with different countries adopting different strategies for regulating AI, aiming to promote safety while remaining competitive. Earlier this year, the Artificial Intelligence Act entered into force in the EU, becoming the world's first comprehensive AI regulation. In this article, we look at the obligations the EU AI Act places on AI technology providers, such as providers of AI patent drafting and prosecution tools.

The EU AI Act establishes a regulatory framework for AI systems within the European Union, ensuring that AI technology is used safely and responsibly. The Act classifies AI systems based on the potential risks they pose to safety, fundamental rights, and public interests, creating four risk levels: prohibited, high-risk, limited-risk, and minimal-risk. This tiered approach is designed to vary the degree of regulation based on an AI system's potential impact.

1. Prohibited AI Systems

These systems are considered by the AI Act to pose an unacceptable risk and are thus prohibited. Examples include AI systems that perform untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, and social scoring systems, which use AI to rank or assess people based on their behaviour. Similarly, AI systems used for emotion recognition in the workplace and in education are prohibited. Furthermore, prohibited AI systems include those designed to manipulate human behaviour or exploit vulnerabilities in ways that could cause harm.

2. High-Risk AI Systems

High-risk systems are those used in sensitive areas where errors or misuse could lead to significant harm. These include AI applications in critical infrastructure (e.g., energy or transport systems), law enforcement, healthcare (e.g., medical diagnostics), and employment (e.g., AI used to evaluate job candidates). Other examples of high-risk AI systems include those used in democratic processes and in education, such as tools for assessing students.

Providers of High-Risk AI Systems must comply with a range of strict requirements set out by the AI Act. These requirements include:

  • Technical Documentation: Providers must create detailed technical documentation for the system, essentially a comprehensive "manual" that includes specific, mandatory information about how the AI operates.

  • Transparency: High-risk systems must be accompanied by detailed instructions for use, to ensure users fully understand their functions.

  • Human Oversight: High-risk systems must allow for human oversight, including allowing human operators to intervene and stop the system when necessary.

  • Risk Management: Providers must implement a process throughout the system's lifecycle that identifies and mitigates any associated risks.

  • Data Measures: Training and testing of high-risk systems must adhere to strict data governance protocols, ensuring that the data used is of high quality and free from bias.

  • Robustness, Accuracy, and Cybersecurity: High-risk AI systems must be resilient, demonstrate accuracy, and be robust against cyberattacks.

  • Quality Management: Providers must implement a comprehensive quality management process to ensure the consistent reliability of the high-risk AI system.

  • Record-Keeping: The AI system must be designed to automatically log certain events, such as usage periods and input data. Providers are required to retain these logs for specific durations as defined by the regulatory framework.

  • Monitoring: Providers must have a system in place to collect and analyse performance data throughout the AI system's lifecycle, based on user feedback and real-world performance.

3. General Purpose AI (GPAI)

General Purpose AI (GPAI) refers to AI models that exhibit significant generality and can competently perform a wide array of distinct tasks. These models, often trained on large datasets using self-supervised learning techniques, are versatile and can be integrated into various downstream systems or applications. However, it's important to note that this definition excludes AI models intended for research, development, or prototyping activities prior to their market release.

Given their adaptability, GPAI systems can sometimes be used with high-risk AI systems or be integrated into them, necessitating collaboration between GPAI system providers and those offering high-risk systems to ensure compliance with relevant regulations.

The providers of GPAI have the following obligations under the EU AI Act:

  • Providers must draw up technical documentation, including details of the training and testing process and evaluation results.

  • Providers must draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI systems, so that those providers understand the model's capabilities and limitations and can meet their own compliance obligations.

  • Providers must establish a policy to comply with EU copyright law, in particular the Copyright Directive.

  • Providers must publish a sufficiently detailed summary about the content used for training the GPAI model.

4. Limited-Risk and Minimal-Risk AI Systems

The primary requirement for all other AI systems is transparency. Providers of these systems must ensure that AI systems intended to interact with individuals are designed and developed so that users are aware they are engaging with an AI system. Another general obligation is to ensure that the personnel responsible for operating and using AI systems possess adequate AI literacy, appropriate to the specific context in which the AI systems are employed.

Conclusion

The EU AI Act represents a comprehensive approach to regulating AI technologies, ensuring they are used safely and responsibly while promoting innovation. It is important for providers of AI-related services, tools, and models to adhere to the requirements set out by the EU AI Act and, more generally, to the evolving regulatory environment concerning AI around the world.

Here at Solve, security is our number one priority, and it will remain so throughout the development of our platform. If you have any questions regarding our policies in this regard, or with respect to the EU AI Act, please feel free to reach out.