Privacy, GDPR, and EU AI Act

Several questions arise around privacy and AI. This post is a high-level overview of topics I will discuss in more depth in future posts.

  1. What is the difference between data protection and data privacy?
  2. What is the relationship between the EU AI Act and the GDPR?
  3. What responsibilities and obligations do enterprises have if they implement AI?
  4. What can enterprises do to meet their obligations to protect privacy when they implement AI?

Difference between Data Protection and Data Privacy

Data protection is built into the GDPR, but it refers to the physical, technical, and procedural measures an organization puts in place to protect the data it collects and stores. Physical locks on doors, ACLs restricting access to PII, processes that limit the number of employees with access to data, and encryption are all data protection mechanisms that are part of the GDPR.
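
To make one of those mechanisms concrete, here is a minimal sketch of field-level encryption of PII. It assumes the third-party cryptography package (any equivalent library would do), and the key handling is deliberately simplified; in practice the key would live in a secrets manager.

```python
# Minimal sketch: encrypt a PII field before it is written to storage.
# Assumes `pip install cryptography`; key management simplified for brevity.
from cryptography.fernet import Fernet

# In production the key would come from a secrets manager, not be
# generated inline like this.
key = Fernet.generate_key()
fernet = Fernet(key)

email = "data.subject@example.com"
token = fernet.encrypt(email.encode("utf-8"))  # store the ciphertext

# Only code holding the key (an access control in itself) can decrypt.
assert fernet.decrypt(token).decode("utf-8") == email
```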

Data privacy refers to the laws that enshrine the rights of data subjects – the people whose data an organization collects and protects. The GDPR, in effect since 2018, is the progenitor of many country- and state-level privacy laws. Sixteen US states will have laws that mirror the GDPR by 2026, and laws in Brazil, Israel, the UK, and many other countries around the world do the same.

The GDPR defines controllers and processors of personal data and their obligations; delineates data subject rights; requires data leakage risk assessments; and mandates that enterprises use processes or procedures to secure user data, especially PII, health, and biometric data.
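
Two of those data subject rights, access and erasure, are concrete enough to sketch in code. The example below is purely hypothetical: a toy in-memory store with illustrative names, not a compliance implementation.

```python
# Hypothetical sketch of two data subject rights against a toy store.
# Structure and names are illustrative, not a GDPR-mandated schema.
from dataclasses import dataclass, field

@dataclass
class RecordStore:
    records: dict[str, dict] = field(default_factory=dict)  # subject_id -> PII

    def right_of_access(self, subject_id: str) -> dict:
        # Art. 15: the subject may obtain a copy of their personal data.
        return dict(self.records.get(subject_id, {}))

    def right_to_erasure(self, subject_id: str) -> bool:
        # Art. 17: the subject may have their personal data deleted.
        return self.records.pop(subject_id, None) is not None

store = RecordStore({"u-42": {"email": "subject@example.com"}})
print(store.right_of_access("u-42"))   # copy of the subject's data
print(store.right_to_erasure("u-42"))  # True -- data removed
```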

Relationship between EU AI Act and GDPR

The EU AI Act provides processes for companies (providers and deployers of AI) to classify the risk level of an AI system (prohibited, high, or low), and it identifies enterprise obligations to track experiments, deployments, and performance of AI (particularly high-risk systems). The largest onus is on the providers of AI, though deployers have obligations too, especially for high-risk deployments. The data used for improving or customizing LLMs or ML models may be restricted PII or personal data that falls under the GDPR.
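
As a rough mental model of those tiers, here is a hypothetical sketch that maps a risk tier to the kinds of deployer duties discussed in this post. The tier names and duty lists are simplified summaries, not the Act's legal text.

```python
# Hypothetical risk-tier lookup; duties are simplified summaries only.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LOW = "low"

# Rough deployer duties per tier, per this post's discussion.
DEPLOYER_DUTIES = {
    RiskTier.PROHIBITED: ["do not deploy"],
    RiskTier.HIGH: ["log use", "track performance", "document incidents"],
    RiskTier.LOW: ["basic transparency"],
}

def duties_for(tier: RiskTier) -> list[str]:
    return DEPLOYER_DUTIES[tier]

print(duties_for(RiskTier.HIGH))
# ['log use', 'track performance', 'document incidents']
```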

As described above, the GDPR has four major components:

  1. Data controller and processor requirements.
  2. Data protection processes and documentation.
  3. Data subject rights.
  4. Data privacy risk assessment of leaked data.

Enterprise Responsibilities and Obligations for AI implementation

The providers of these systems will have the most obligations. However, deployers of AI must document their use of, performance of, and problems with high-risk AI categories. This documentation includes how the system is being used, how it was trained, and how it is maintained, among other things. Teams should expect to maintain a database covering use, experimentation, and deployment.
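
As a sketch of what one row of that database might capture, here is a hypothetical record type. The field names are illustrative, not mandated by the Act.

```python
# Hypothetical record for deployer documentation of a high-risk system.
# Field names are illustrative assumptions, not a mandated format.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    system_name: str
    intended_use: str           # how the system is being used
    training_data_summary: str  # how it was trained
    last_maintenance: date      # how it is maintained
    known_issues: list[str]     # problems observed in use

record = AISystemRecord(
    system_name="resume-screener",
    intended_use="rank inbound job applications",
    training_data_summary="historical hiring decisions, 2019-2023",
    last_maintenance=date(2025, 6, 1),
    known_issues=["drift on non-English resumes"],
)
```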

Meeting AI use and deployment obligations

New products on the market like Credo.ai aim to ease this burden for enterprises the same way OneTrust addresses privacy obligations under the GDPR, but since enforcement does not begin until August 2026, expect the required processes and documentation to change. A simple Google Doc or SQL database may be enough to manage the risk and regulatory obligations.
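
For teams that choose the database route, here is a minimal sketch using Python's built-in sqlite3 module. The schema and values are illustrative assumptions, not a required format.

```python
# Minimal sketch of the "simple SQL database" approach with sqlite3.
# Schema and sample values are illustrative only.
import sqlite3

conn = sqlite3.connect("ai_obligations.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS deployments (
        system_name TEXT,
        risk_tier   TEXT,
        event       TEXT,   -- experiment, deployment, incident
        notes       TEXT,
        logged_at   TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO deployments (system_name, risk_tier, event, notes) "
    "VALUES (?, ?, ?, ?)",
    ("resume-screener", "high", "deployment", "v2 rollout to EU region"),
)
conn.commit()
print(conn.execute("SELECT system_name, event FROM deployments").fetchall())
conn.close()
```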
