The emergence of artificial intelligence (AI) has raised complex questions about traditional patent law, particularly regarding who can be recognised as an inventor. While patent systems have historically granted exclusive rights to human inventors in exchange for public disclosure of their creations, the increasing contribution of AI systems to groundbreaking innovations has led to a critical question: Can an AI system be considered an "inventor" within the existing patent framework?

Unlike copyright law, which hinges on human authorship, patent law assesses inventions objectively, focusing on novelty, utility, and inventiveness. The challenge, however, lies in inventorship and ownership, as patents are typically granted to human inventors or to legal entities deriving rights from them. The Artificial Inventor Project filed test patent applications naming Dr. Stephen Thaler's AI system, DABUS, as the inventor, but courts and patent offices in most jurisdictions rejected them, holding that an inventor must be a natural person.

The UK Intellectual Property Office's 2022 consultation explored reform options, including expanding the definition of 'inventor' to encompass AI. Despite these considerations, a majority of respondents favoured maintaining the status quo, citing AI's current inability to invent without human intervention. The debate extends beyond inventorship to the hypothetical person skilled in the art (PSA): if the PSA is taken to have access to AI systems, the threshold for inventiveness may need to be recalibrated. Challenges of sufficiency or enablement also arise when an AI system creates an invention, since the absence of human understanding of how the invention works can make adequate disclosure difficult.

In the realm of copyright law, the rise of generative AI programs such as DALL-E, ChatGPT, and Stable Diffusion has raised questions about the copyrightability of AI outputs. Copyright law has traditionally recognised protection only in works "created by a human being", leaving AI-generated outputs in a legal grey area. While companies attempt to clarify ownership through contractual terms, the absence of clear rules or judicial decisions complicates matters. The legal disputes surrounding AI-generated works underscore the need for legislative amendments addressing the ownership and authorship of these creations.

The pressing need for effective AI regulation is a global concern, with distinct approaches taken by major players like the European Union (EU), the United States, and China. The EU proposes a risk-based model through the Artificial Intelligence Act, categorising AI tools based on potential risk and imposing stringent requirements for high-risk applications. In contrast, the US lacks comprehensive federal AI laws, preferring self-regulation. China, taking a proactive stance, has introduced laws focusing on transparency and unbiased use of personal data.

Other countries, such as Canada and the UK, are also contemplating AI regulations, with different legislative approaches. International organisations like the Council of Europe and the United Nations are considering agreements and new bodies to govern AI. Challenges in AI regulation include enforcing transparency, defining high risk, and addressing the global nature of AI. The opaque nature of many machine-learning systems poses difficulties in enforcing auditing and transparency regulations, emphasising the need for international collaboration and effective enforcement mechanisms.

The UK's AI sector has significantly contributed to the economy, with over 3,000 companies generating £10.6 billion in revenue and employing over 50,000 people. Private investment, amounting to £18.8 billion since 2016, reached a record £5 billion in 2021. The sector, diverse in size and revenue generation, faces challenges related to transparency, bias, privacy concerns, and ethical considerations. The impact on employment is notable, with 7% of jobs facing high automation risk in the next five years.

In response to these challenges, the UK government published its National AI Strategy, outlining a ten-year plan with three main objectives: investing in the AI ecosystem, supporting the transition to an AI-enabled economy, and ensuring proper governance. The strategy emphasises a "pro-innovation" approach to regulation, highlighting safety, security, transparency, fairness, accountability, and redress as guiding principles. The government aims for proportionate regulation, especially for smaller businesses and startups in the AI sector, and envisions collaboration with regulators during the initial implementation. As the legal and regulatory landscape of AI continues to evolve, the UK stands at the forefront of addressing the complex interplay between technological advancements and the need for effective governance. Navigating the challenges of inventorship, copyright protection, and AI regulation requires a delicate balance between fostering innovation and mitigating potential risks.

The field of artificial intelligence is witnessing patenting activity at a remarkable pace. Since the inception of AI in the 1950s, inventors and researchers have published over 1.6 million scientific articles and filed around 340,000 patent applications for AI-related inventions. More than half of these inventions have been published since 2013, indicating a fast-growing market for AI-related patents. Among the top 30 AI patent applicants, 26 are businesses and four are universities or public research institutions. IBM leads with the largest portfolio of 8,290 AI patent applications, followed by Microsoft with 5,930, Toshiba with 5,223, Samsung with 5,102, and NEC with 4,406. Computer vision, which includes image recognition and is crucial to the development of self-driving cars, is the most commonly cited application, mentioned in 49% of all AI-related patents. The transportation industry, including autonomous vehicles, shows one of the fastest growth rates in AI-related filings: 8,764 filings in 2016, up from 3,738 in 2013, a 134% increase overall, or roughly 33% average annual growth. Meanwhile, the life and medical sciences industry, where AI may be used for robotic surgery and medication personalisation, recorded a 12% average annual growth rate, with 4,112 filings in 2016, up 40% from 2,942 in 2013.
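To show how the filing counts quoted above translate into the growth figures, here is a minimal worked sketch of the compound annual growth rate (CAGR) calculation. The figures are those cited in the text; the helper function and rounding are illustrative, not drawn from any source report.

```python
# Illustrative only: average annual growth implied by the 2013 and 2016
# filing counts quoted above (a three-year span).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Transportation: 3,738 filings (2013) -> 8,764 filings (2016)
transport = cagr(3738, 8764, 3)       # ~0.33, i.e. ~33% per year (134% overall)

# Life and medical sciences: 2,942 filings (2013) -> 4,112 filings (2016)
life_sciences = cagr(2942, 4112, 3)   # ~0.12, i.e. ~12% per year (40% overall)

print(f"Transportation CAGR: {transport:.0%}")
print(f"Life sciences CAGR: {life_sciences:.0%}")
```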

The development of AI has prompted debate about how society should value human creativity and invention, and what changes to current intellectual property regimes are necessary. The issue is complicated by the fact that AI lacks legal personality, meaning it cannot be credited as the inventor of a work or claim ownership of intellectual property rights. This raises questions about who should be credited with a work of art or an invention made with AI assistance, and to whom ownership of such a creation should be assigned, since the law currently allows credit for innovations or creations to be given only to human beings. It can also be difficult for rights holders to pinpoint a person or business that can be held legally accountable for AI conduct. At the same time, AI technology may be employed by government agencies to assist in issuing intellectual property rights, and by rights holders to better monitor for infringements and defend their rights. As AI technology advances and its use grows more widespread, intellectual property law will need to adapt to address the issues it raises.

Written by Shivani
Important Announcement: Thank you for your interest in IP Wave. We're happy to announce that from now on, the ReadIPWave newsletter will be published twice a week, on Wednesdays and Saturdays. We look forward to providing you with more targeted and impactful insights. In our Saturday publications we will explore IP and innovation through the lens of behavioural science. More articles on behavioural science and its applications will soon be available on our website. Keep reading the latest from ReadIPWave.