Category: Prompt Engineering

Prompt-Based Exfiltration

Prompt-based exfiltration is a technique where someone uses prompts to extract sensitive or restricted information from an AI model. This often involves crafting specific questions or statements that trick the model into revealing data it should not share. It is a concern for organisations using AI systems that may hold confidential or proprietary information.

Secure Chat History Practices

Secure chat history practices are methods and rules used to keep records of chat conversations private and protected from unauthorised access. These practices involve encrypting messages, limiting who can view or save chat logs, and regularly deleting old or unnecessary messages. The goal is to prevent sensitive information from being exposed or misused, especially when…

Embedding Sanitisation Techniques

Embedding sanitisation techniques are methods used to clean and filter data before it is converted into vector or numerical embeddings for machine learning models. These techniques help remove unwanted content, such as sensitive information, irrelevant text, or harmful language, ensuring that only suitable and useful data is processed. Proper sanitisation improves the quality and safety…

Confidential Prompt Engineering

Confidential prompt engineering involves creating and managing prompts for AI systems in a way that protects sensitive or private information. This process ensures that confidential data, such as personal details or proprietary business information, is not exposed or mishandled during interactions with AI models. It includes techniques like redacting sensitive content, using secure data handling…

Digital Rights Platform

A digital rights platform is an online system or service that helps creators, rights holders, and organisations manage, protect, and distribute their digital content. It tracks who owns what content, handles permissions, and automates licensing or payments. These platforms are used for music, videos, images, books, and other digital media to ensure creators are paid…

Context Leakage

Context leakage occurs when information from one part of a system or conversation unintentionally influences another, often leading to confusion, privacy issues, or errors. This typically happens when data meant to remain confidential or isolated is mistakenly shared or accessed in situations where it should not be. In computing and artificial intelligence, context leakage can…

Decentralized Trust Frameworks

Decentralised trust frameworks are systems that allow people, organisations or devices to trust each other and share information without needing a single central authority to verify or control the process. These frameworks use technologies like cryptography and distributed ledgers to make sure that trust is built up through a network of participants, rather than relying…