• EDPB Annual Report 2024

    23 April 2025

    Protecting personal data in a changing landscape

  • Alfa Laval Group

    Type of BCR: Controller
    Year: 2025
    Lead SA: SE SA
    Categories of data subjects:
    Employees; Contractors; Clients, customers; Suppliers, service providers; Other third parties as part of the Group’s respective regular business activities
    Approval decision (EN), 100.7 KB
    Pre-GDPR: No
  • AI Privacy Risks & Mitigations Large Language Models (LLMs)

    10 April 2025

    The AI Privacy Risks & Mitigations Large Language Models (LLMs) report puts forward a comprehensive risk management methodology for LLM systems, together with a number of practical mitigation measures for common privacy risks.
    In addition, the report provides use case examples showing how the risk management framework applies in real-world scenarios:

    • first use case: a virtual assistant (chatbot) for customer queries,
    • second use case: LLM system for monitoring and supporting student progress, and
    • third use case: AI assistant for travel and schedule management.

    Large Language Models (LLMs) represent a transformative advancement in artificial intelligence. They are deep learning models, trained on extensive datasets, designed to process and generate human-like language. Their applications are diverse, ranging from text generation and summarisation to coding assistance, sentiment analysis, and more.

    The EDPB launched this project in the context of the Support Pool of Experts programme at the request of the Croatian Data Protection Authority (DPA). 

    The project was completed by the external expert Isabel Barbera in April 2025.

    Objective

    The AI Privacy Risks & Mitigations Large Language Models (LLMs) report puts forward a comprehensive risk management methodology to systematically identify, assess, and mitigate privacy and data protection risks in LLM systems.

    The report gives Data Protection Authorities (DPAs) a comprehensive understanding of how LLM systems function, along with state-of-the-art information on the risks associated with them.
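
    The systematic identify–assess–mitigate loop described above is commonly implemented as a likelihood-by-severity risk register. The sketch below illustrates that general pattern only; the class name, the 1–4 scales, and the level thresholds are illustrative assumptions, not values taken from the report.

    ```python
    # Minimal sketch of likelihood x severity risk scoring, in the spirit of a
    # systematic identify/assess/mitigate methodology for LLM privacy risks.
    # Scales and thresholds below are assumed for illustration.
    from dataclasses import dataclass


    @dataclass
    class PrivacyRisk:
        name: str
        likelihood: int  # assumed scale: 1 (rare) .. 4 (very likely)
        severity: int    # assumed scale: 1 (minor) .. 4 (critical)

        def score(self) -> int:
            # Combined risk score: likelihood multiplied by severity.
            return self.likelihood * self.severity

        def level(self) -> str:
            # Map the score onto coarse levels (thresholds are assumptions).
            s = self.score()
            if s >= 12:
                return "high"
            if s >= 6:
                return "medium"
            return "low"


    # Hypothetical example risks for an LLM-based system.
    risks = [
        PrivacyRisk("training-data memorisation", likelihood=2, severity=4),
        PrivacyRisk("prompt data retained in logs", likelihood=3, severity=3),
    ]

    # Rank risks so mitigation effort goes to the highest scores first.
    for r in sorted(risks, key=PrivacyRisk.score, reverse=True):
        print(f"{r.name}: score={r.score()} level={r.level()}")
    ```

    Ranking by score is what makes the loop systematic: mitigation measures are applied to the highest-scoring risks first, and the register is re-scored after each mitigation.
    
    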