There’s no denying that Artificial Intelligence (AI) is now an integral part of modern business – making most of our lives easier and offering transformative capabilities across many sectors. Yet, as AI relies on personal data to function, it also poses significant legal challenges.

In the UK, data protection laws are stringent, with the Data Protection Act 2018 and the UK GDPR at the forefront of safeguarding people’s rights. That’s why it’s crucial that any organisation wishing to make the most of AI – and all the possibilities it has to offer – considers the repercussions of leveraging personal data.

In this blog, we explore the legal considerations that companies must keep in mind when using personal data within AI, and offer essential tips for responsible data management.

Remaining on the Right Side of the Law

Like every other technology that’s made its way into our lives, AI carries a very real risk to data privacy if it’s not used responsibly. Generative AI may be novel, but the regulations surrounding data protection are the same – and so are people’s feelings about how their personal information is used.

Just last year, facial recognition company Clearview AI was fined over £7.5m by the UK’s privacy watchdog for gathering images from the internet to create a global facial recognition database.

The Information Commissioner’s Office (ICO) said that the practice breached UK data protection laws, and ordered the firm to stop obtaining and using the personal data of UK residents.

Fortunately, it is possible to innovate while still respecting people’s privacy; you just have to understand the basics of data protection. Without that understanding, your company’s reputation could quickly go up in smoke – at considerable cost.

1 Compliance with Data Protection Laws

Data protection laws in the UK, particularly the Data Protection Act 2018 and the UK GDPR, are applicable to any organisation that processes personal data – and AI systems are no exception. Anyone responsible for using personal data must make sure the information is:

  • Used fairly, lawfully and transparently
  • Used for specified, explicit purposes
  • Used in a way that’s adequate, relevant and limited to only what’s necessary
  • Accurate and, where necessary, kept up to date
  • Kept for no longer than is necessary
  • Handled in a way that ensures appropriate security, including protection against unlawful or unauthorised processing, access, loss, destruction or damage

2 Data Minimisation

When integrating AI into your operations, remember the principle of data minimisation: avoid collecting excessive or unnecessary personal data. The difficulty here lies in the fact that AI development often involves processing huge swathes of data – particularly as the application or function of AI systems can alter during their development. Because the data minimisation principle requires you to determine what the algorithm actually needs, those needs ought to be defined as clearly as possible beforehand. Outlining objectives for the application and use of AI is crucial for compliance.
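In practice, data minimisation can start with something as simple as an allow-list of the fields a model genuinely needs, applied before any record reaches your AI pipeline. The sketch below illustrates the idea; the field names and example record are hypothetical.

```python
# A minimal sketch of data minimisation: strip each record down to a
# pre-agreed allow-list of fields before it reaches an AI pipeline.
# Field names and the example record are illustrative only.

ALLOWED_FIELDS = {"age_band", "region", "product_category"}  # defined up front

def minimise(record: dict) -> dict:
    """Return only the fields the stated purpose actually requires."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Jane Doe",           # direct identifier, not needed by the model
    "email": "jane@example.com",  # direct identifier, not needed by the model
    "age_band": "35-44",
    "region": "North West",
    "product_category": "insurance",
}

print(minimise(raw))
```

Defining the allow-list up front forces the conversation about what the algorithm needs to happen before development starts, which is exactly what the principle asks for.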

3 Lawful Basis for Data Processing

Establish a lawful basis for processing personal data within AI systems. The most common bases are consent, contract performance, compliance with legal obligations, and legitimate interests pursued by the organisation or a third party.

  • Consent may be appropriate when you have a direct relationship with the person whose data you want to process, but you must ensure that consent is freely given.
  • Contract performance can be used as a lawful basis if processing using AI is ‘objectively necessary’ to carry out a contractual service – such as an AI chatbot using someone’s name when responding to customer requests.
  • Legitimate interests may be the most flexible lawful basis for processing, but it’s not always appropriate – for example, if the way you intend to use data is unexpected or could cause unnecessary harm or distress to the individual.

4 Data Accuracy and Transparency

AI algorithms learn and make decisions based on the data they receive, so it’s crucial to ensure the accuracy of the data used to train AI models. Additionally, organisations should be transparent with people about how their data is being used, both to build trust and to comply with transparency requirements under data protection legislation.

It should be noted that even when the data used to train AI systems is accurate, the output can be subject to unanticipated effects from the algorithm – sometimes leading to inaccurate information. To avoid AI-generated personal data being misconstrued as fact, your records should make clear that such outputs are merely ‘statistically informed guesses’.

5 Individual Rights

Respect individuals’ rights regarding their personal data. Data protection legislation grants data subjects the right to access their data, correct inaccuracies, request deletion, and object to processing. Organisations must have mechanisms in place to address these rights when using AI systems – without them, you could find yourself in hot water. As AI systems sometimes make automated decisions that can have legal or similarly significant effects on individuals, data subjects must be given the right to challenge such decisions and request human intervention.


6 Data Security Measures

AI systems often process large volumes of sensitive data, making them potential targets for cyberattacks. You should implement robust data security measures to safeguard personal data from unauthorised access, disclosure, or alteration. You should also carry out regular security audits and train your employees on data protection best practices.

7 Data Protection Impact Assessments (DPIAs)

A DPIA must be carried out whenever processing – including the introduction of new technology – is likely to result in a high risk to individuals, which will almost always be the case when implementing an AI system. A DPIA helps to identify and mitigate potential privacy risks, demonstrating your commitment to responsible data management.

8 Third-Party Agreements

If you’re using third-party AI services or collaborating with other organisations, we’d recommend drafting comprehensive data processing agreements that outline the responsibilities of all parties involved. Where staff use general-purpose or open-source AI tools that come with no data processing agreement, put strong policies in place governing their use in the business, communicate those policies effectively to staff, and establish mechanisms to check compliance.

9 Retention and Disposal Policies

Establish data retention and disposal policies aligned with legal requirements. Personal data should not be kept longer than necessary for the purposes for which it was collected, and when data is no longer required, it should be disposed of securely. Bear in mind, too, that many open-source tools state that any information entered can be reused for training purposes, so organisations should refrain from entering any personal data that could be used inappropriately.
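A retention policy is easier to enforce when it is expressed in code as well as in a document. The hedged sketch below flags records that have exceeded a retention period and are due for secure disposal; the two-year period and the record structure are illustrative assumptions, not legal advice on how long any particular data may be kept.

```python
# A sketch of an automated retention check: flag records older than a
# defined retention period for secure disposal. The 2-year period and
# the record structure are illustrative assumptions.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=730)  # e.g. a 2-year retention period

def due_for_disposal(records, now=None):
    """Return the records whose retention period has expired."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] > RETENTION]

records = [
    {"id": 1, "collected_at": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "collected_at": datetime.now(timezone.utc)},
]

print([r["id"] for r in due_for_disposal(records)])  # record 1 is overdue
```

Running a check like this on a schedule turns “kept for no longer than is necessary” from a policy statement into a routine operational task.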

10 Privacy by Design

While the large amounts of data required by AI and machine learning systems seem to be the antithesis of data privacy, embedding the best possible data privacy settings and technical means – such as data pseudonymisation and anonymisation – into AI processes can help to ensure ethical usage.
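One common technical means of privacy by design is pseudonymisation: replacing a direct identifier with a keyed hash so records can still be linked for analysis without exposing the identifier itself. The sketch below uses an HMAC for this; the key is a placeholder and key management is assumed. Note that keyed hashing is pseudonymisation, not anonymisation, because whoever holds the key can re-link the data.

```python
# An illustrative pseudonymisation sketch: replace a direct identifier
# with a keyed hash (HMAC-SHA256). The key below is a placeholder only;
# in practice it must be generated and stored securely, separately from
# the pseudonymised data.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-secret"  # placeholder

def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible token for the given identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("jane@example.com")
print(len(token))  # a 64-character hex token, stable for the same input
```

Because the same input always yields the same token, pseudonymised records remain linkable across datasets, which is often all an AI pipeline actually needs.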

Balancing Innovation and Ethics

The marriage of AI and personal data offers remarkable possibilities for organisations all over the world. However, this alliance comes with great responsibility. By adhering to data protection laws, practising data minimisation, and implementing robust security measures, organisations can leverage AI ethically and responsibly.

That’s why we created PRISM: a powerful tool that enables organisations to analyse and manage risks – including the risks associated with AI systems – so that organisations can take a process-driven approach to data protection and information security.

Remember that protecting personal data is not just a legal obligation, but also a commitment to building trust with your customers and stakeholders. Responsible AI usage is a crucial step towards a more ethical and privacy-conscious future.

As technology continues to evolve, so will data protection laws and best practices. Staying informed about changes in the legal landscape and continuously improving data management processes will ensure that your AI-powered organisation remains compliant and trustworthy in the eyes of consumers.

Need help managing the safe use of personal data?

Contact our team for a free consultation.

Read more

Blog – AI and Personal Data: Harnessing Personal Data for Progress

Blog – The Crucial Role of Auditing in Data Protection Management Systems