Securing Intellectual Property
The emergence of Artificial Intelligence (AI) is profoundly transforming our society, providing groundbreaking solutions to complex data analysis and content generation challenges. At the same time, securing intellectual property has become a paramount priority for businesses.
However, AI's application raises complex issues concerning data privacy and Intellectual Property (IP) rights. Effective strategies are necessary to navigate the intricate landscape of AI, data privacy, and IP rights, and to ensure a safe, beneficial environment for all stakeholders.
The Risks to Data Privacy and IP in the AI Context
AI, as a transformative technology, is fundamentally data-dependent. With an extensive reservoir of training data, AI strives to model human behaviour and creativity and generate the desired output. To make an AI model as precise as possible, ever larger and more accurately curated data sets become the primary need. However, as AI systems analyze vast amounts of data to learn and make predictions, this appetite for curated data opens up risks of privacy and copyright infringement. Ringfencing approved data sources presents many challenges, and the “black box” nature of many AI architectures means it can be tricky to prove whether IP infringement is even occurring, as it is difficult to establish which data sets shaped the model’s outputs.
AI’s hunger for data puts immense pressure on data privacy compliance. This issue is particularly acute with Machine Learning (ML), Large Language Models (LLMs) and Deep Learning (DL) systems that train on vast datasets. Data collection and usage could violate privacy laws and norms, risking reputational damage and legal actions.
Simultaneously, AI presents significant risks to IP rights. As AI becomes more capable of creating content, the question of who owns the resulting IP becomes more challenging, and IP infringement becomes a significant concern. AI algorithms could replicate protected content or create similar works without permission. Content can become public so rapidly that securing intellectual property is only considered after the fact.
For example, a group of visual artists is currently pursuing a class action lawsuit against the creators of Stability AI, Midjourney, and DeviantArt for just such alleged infringements. The results of this suit will likely create significant legal precedents. The case may hinge on the definition and application of “fair use,” a legal doctrine that permits the repurposing of an IP owner’s work by another creator to generate new work. Filed in early 2023, the suit remains in progress at the time of writing. It also highlights the difficulty of proving precisely which original works were used to produce products based on or derived from them.
AI and Securing Intellectual Property
AI also poses risks to trade secrets, an essential aspect of IP.
IP owners risk losing their exclusive rights because AI models trained on proprietary data could inadvertently reveal this protected information.
It is in the interest of every company developing AI systems to protect their property against such accidental data breaches.
Solutions and Strategies for Protecting Data Privacy and IP in the AI Era
Addressing the challenges associated with AI, data privacy, and IP requires a multi-pronged strategy. Given the novelty of the current situation, IP lawyers struggle to keep up.
One recent joint project of the University of Zurich’s Center for Intellectual Property and Competition Law and the Swiss Intellectual Property Institute may offer some hope of clarity.
Their initial recommendations included the following:
- The legal classification should recognize AI systems as “inventors,” a term used to denote the originator of a piece of technology. This situation would include creating novel proteins or fragments of code, which could then receive protection under patent law.
- Human authorship should prevail as the primary criterion for being recognized as the “owner” of a piece of IP.
In other words, AI-generated content without significant human creative contributions would not receive protection under copyright law.
The researchers, however, add, “Granting copyright protection for content collectively created by an AI system and a natural person is possible, provided that the human contribution is sufficiently creative.”
- Companies should be able to acquire ownership of AI-generated and patentable IP (with the AI designated as an inventor, as described above).
- There is no need to create new IP rights for AI output. If the volume of AI-generated intellectual property with little or no human authorship increases, stakeholders may revisit this position.
- Research requires permissive protection “carve-outs” for AIs drawing upon existing IP sources. In other words, establishing an environment where AI owners could face lawsuits for unintentional copyright or patent law breaches would not be beneficial.
- There should be a new legal framework for using personal and non-personal data, which minimizes harm and privacy breaches.
Putting Theory into Practice: Implementation
How could the Swiss project’s last recommendation be enacted in practice?
1. First, AI developers should prioritize privacy-by-design approaches that integrate data privacy considerations into the initial design of AI systems.
2. Organizations can employ anonymization and pseudonymization strategies to protect personal data for AI training. Such methods help ensure data privacy compliance while reducing the risk of infringing private data rights.
3. Furthermore, organizations should adopt IP strategies that address potential infringements by AI. Licensing agreements can specify the proper usage of AI-generated content, and models can be trained solely on clearly defined datasets, minimizing the inclusion of personal data.
4. Finally, AI companies should collectively adopt best-practice guidelines that consider both privacy and copyright aspects.
Employing such arrangements would help protect the rights of original IP holders and govern the use of their content by AI algorithms.
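The pseudonymization approach described above can be sketched in a few lines. This is a minimal, hypothetical example (the `SECRET_KEY` value and the record fields are illustrative assumptions, not a production recipe): a keyed hash replaces direct identifiers with consistent tokens, so records can still be joined for training without exposing the underlying personal data.

```python
import hashlib
import hmac

# Hypothetical key for illustration only; in practice, store it in a secrets manager.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    Using HMAC rather than a plain hash prevents dictionary attacks by
    anyone who does not hold the key, while keeping the mapping
    consistent so the same person maps to the same token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase": "laptop"}
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "purchase": record["purchase"],  # non-personal field kept as-is
}
```

Note that pseudonymized data may still count as personal data under regulations such as the GDPR, because the key holder can re-link tokens to individuals; full anonymization requires stronger measures.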
Another crucial strategy is the use of AI explainability and transparency mechanisms. By making AI processes and decisions as transparent as possible, stakeholders can monitor their output to ensure that AI does not infringe on IP rights or violate data privacy.
However, the inherent unknowability of some generative actions of cutting-edge AIs inevitably limits the effectiveness of this strategy. In short, even their creators can’t say for sure how these systems generate the content they do.
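Even where a model’s internal reasoning is opaque, its inputs need not be. One practical transparency measure, consistent with the ringfencing and clearly-defined-dataset points above, is a training-data provenance manifest. The sketch below is illustrative (the class names, fields, and sample datasets are assumptions): each dataset is recorded with its licence and a PII flag, and only vetted sources are approved for training.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    license: str        # e.g. "CC-BY-4.0", "proprietary", or "unknown"
    contains_pii: bool  # flagged so a privacy review can gate training

@dataclass
class TrainingManifest:
    model_name: str
    datasets: list = field(default_factory=list)

    def approved_only(self) -> list:
        # Ringfencing rule: only datasets with a known licence and no
        # personal data are cleared for use in training.
        return [d for d in self.datasets
                if d.license != "unknown" and not d.contains_pii]

manifest = TrainingManifest("demo-model")
manifest.datasets.append(DatasetRecord("public-corpus", "CC-BY-4.0", False))
manifest.datasets.append(DatasetRecord("scraped-art", "unknown", False))
manifest.datasets.append(DatasetRecord("crm-export", "proprietary", True))
```

Here `approved_only()` would clear only "public-corpus": the scraped set fails the licence check and the CRM export fails the PII check. A manifest like this also gives auditors a record of what the model could have learned from, partially offsetting the black-box problem.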
As the Swiss research project highlighted, policymakers should strive to keep IP laws updated with AI advances. Additionally, regulations should be reviewed more frequently to keep pace with AI development.
Current IP laws frequently fail to consider the capabilities of AI, thereby creating grey areas that opportunists could exploit. Policymakers must update laws to explicitly include AI, defining ownership and liability rules for AI-generated content.
The White House Weighs In: The 2023 National Cybersecurity Strategy
In March 2023, the US government’s National Institute of Standards and Technology (NIST) included the following suggestion in their AI Risk Management Framework: “AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts.”
The NIST report echoed concerns voiced in the government’s 2023 White House National Cybersecurity Strategy, which further stated: “The widespread introduction of artificial intelligence systems—which can act in ways unexpected to even their creators—is heightening the complexity and risk associated with many of our most important technological systems.”
Given the strategic priority of cybersecurity, both domestically and with respect to foreign actors, it behoves companies to get ahead of the curve in designing and implementing AI systems.
As a University of Pennsylvania Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS) report concluded: “While there is no one-size-fits-all approach, practices institutions might consider adopting to mitigate AI risk include oversight and monitoring, enhancing explainability and interpretability, as well as exploring the use of evolving risk-mitigating techniques like differential privacy, and watermarking….”
These are laudable aims, of course, but they may prove challenging to implement.
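One of the risk-mitigating techniques the AIRS report names, differential privacy, can be illustrated concretely. The sketch below is a minimal, assumed example (the function name and the counting-query scenario are illustrative): the Laplace mechanism adds calibrated noise to a query answer so that the presence or absence of any single individual in the training data has a provably bounded effect on the output.

```python
import math
import random

def private_count(true_count: int, epsilon: float) -> float:
    """Return a differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise is drawn from a
    Laplace distribution with scale 1/epsilon."""
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon means stronger privacy but a noisier answer.
noisy = private_count(1000, epsilon=0.5)
```

The trade-off is explicit in the `epsilon` parameter: lowering it strengthens the privacy guarantee at the cost of accuracy, which is exactly the data-utility-versus-privacy balance discussed in the next section.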
Challenges and Future Implications
Despite the abovementioned strategies and suggestions for how to mitigate risk, several challenges persist.
One of the most significant is the difficulty of balancing AI systems’ need for data against the imperative to respect data privacy. This balance becomes particularly precarious when dealing with sensitive personal data.
Moreover, IP laws across different jurisdictions may conflict, complicating IP protection in the AI context. Indeed, jurisdictional differences, and the ease with which companies can relocate to evade regulation, must be addressed at a global scale for any measures to have real effect. That said, complicated does not mean impossible; it is merely time-consuming and costly.
Future implications of AI on data privacy and IP are vast and somewhat unpredictable. As AI becomes more integrated into society, the scale and complexity of data privacy and IP issues will likely increase.
By adequately addressing these challenges, we can safeguard the continued, widespread adoption of AI.
On the other hand, the convergence of AI, data privacy, and IP rights could lead to innovative solutions, creating a symbiotic relationship between AI advancement, data privacy, and securing intellectual property.
While AI brings considerable data privacy and IP risks, preventative measures can mitigate such risks.
Regarding AI’s future, external constraints and internal caution will likely combine to protect every organization’s most vital resource: its IP.