Category: Responsible AI

Prompt Lifecycle Governance

Prompt Lifecycle Governance refers to the structured management of prompts used with AI systems, covering their creation, review, deployment, monitoring, and retirement. This approach ensures prompts are effective, up to date, and compliant with guidelines or policies. It helps organisations maintain quality, security, and accountability in how prompts are used and updated over time.
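The lifecycle described above can be sketched as a small state machine. This is a minimal, hypothetical illustration, assuming an in-memory registry with the stages named in the definition (draft, review, deployed, retired) and an audit trail recording who moved a prompt between stages and when:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Allowed lifecycle transitions: a prompt must pass review before deployment,
# and only a deployed prompt can be retired.
TRANSITIONS = {
    "draft": {"review"},
    "review": {"draft", "deployed"},
    "deployed": {"retired"},
    "retired": set(),
}

@dataclass
class ManagedPrompt:
    name: str
    text: str
    state: str = "draft"
    history: list = field(default_factory=list)

    def advance(self, new_state: str, actor: str) -> None:
        """Move the prompt to a new lifecycle state, recording who and when."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"Cannot move from {self.state!r} to {new_state!r}")
        self.history.append(
            (self.state, new_state, actor, datetime.now(timezone.utc))
        )
        self.state = new_state

p = ManagedPrompt("support-triage", "Classify the ticket below...")
p.advance("review", actor="alice")
p.advance("deployed", actor="bob")
```

Rejecting undefined transitions is what gives the organisation accountability: every deployed prompt carries a review record, and retired prompts cannot silently re-enter use.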

Agent Mood Modulation

Agent mood modulation refers to the ability of artificial agents, such as robots or virtual assistants, to adjust their displayed emotional state or mood. This can help make interactions with humans feel more natural and engaging. By altering their responses based on mood, agents can better match the emotional tone of a conversation or environment,…
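One simple way to realise this is template selection driven by a mood variable. The sketch below is purely illustrative, assuming a hypothetical scalar "valence" in [-1, 1] that the agent maintains and a few phrasing registers; real systems would condition a generative model rather than pick templates:

```python
# Hypothetical phrasing registers keyed by mood; illustrative only.
TEMPLATES = {
    "upbeat": "Great news! {msg}",
    "neutral": "{msg}",
    "gentle": "I understand this may be frustrating. {msg}",
}

def modulate(msg: str, valence: float) -> str:
    """Render a message in a register matching the agent's current valence,
    where valence ranges from -1.0 (negative mood) to 1.0 (positive mood)."""
    if valence > 0.3:
        style = "upbeat"
    elif valence < -0.3:
        style = "gentle"
    else:
        style = "neutral"
    return TEMPLATES[style].format(msg=msg)

print(modulate("Your order has shipped.", 0.8))
# Great news! Your order has shipped.
```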

Zero-Shot Policy Simulation

Zero-Shot Policy Simulation is a technique where artificial intelligence models predict the outcomes of policies or decisions in scenarios they have not seen during training. It allows simulation of new policies without needing specific data or examples from those policies. This approach is valuable for testing ideas or rules quickly, especially when collecting real-world data…
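The core pattern can be sketched as follows: describe the unseen policy and scenario in a prompt and ask a pretrained model for the predicted outcome, with no policy-specific training data. The `toy_model` stand-in below is a hypothetical stub; in practice `predict_fn` would call an actual LLM:

```python
def simulate_policy(policy: str, scenarios: list, predict_fn) -> dict:
    """Zero-shot simulation: ask a pretrained model to predict each
    scenario's outcome under a policy it was never trained on."""
    return {
        s: predict_fn(f"Policy: {policy}\nScenario: {s}\nPredicted outcome:")
        for s in scenarios
    }

# Stand-in for a real model call, for illustration only.
def toy_model(prompt: str) -> str:
    if "congestion" in prompt:
        return "traffic volume falls during charged hours"
    return "no change predicted"

results = simulate_policy(
    "city-centre congestion charge",
    ["weekday rush hour", "sunday morning"],
    toy_model,
)
```

Because the policy is only described in text, new rules can be trialled immediately, at the cost that predictions inherit whatever biases and gaps the underlying model has.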

AI Ethics Simulation Agents

AI Ethics Simulation Agents are digital models or software programs designed to mimic human decision-making in situations that involve ethical dilemmas. These agents allow researchers, developers, or policymakers to test how artificial intelligence systems might handle moral choices before deploying them in real-world scenarios. By simulating various ethical challenges, these agents help identify potential risks…
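A toy version of such an agent can be written as a scorer over candidate actions. This is a deliberately simplified, hypothetical sketch: the principles, weights, and scores are illustrative, and real simulation agents would model far richer decision processes. It shows how designers can probe which trade-offs an agent makes before deployment:

```python
# Hypothetical weighted ethical principles; values are illustrative.
PRINCIPLES = {"harm_avoidance": 0.5, "fairness": 0.3, "transparency": 0.2}

def choose(actions: dict) -> str:
    """Pick the action with the highest weighted ethical score.
    `actions` maps an action name to per-principle scores in [0, 1]."""
    def utility(scores: dict) -> float:
        return sum(PRINCIPLES[p] * scores[p] for p in PRINCIPLES)
    return max(actions, key=lambda a: utility(actions[a]))

# A simulated dilemma: sharing data is fairer and more transparent,
# withholding it avoids more harm.
dilemma = {
    "share_data": {"harm_avoidance": 0.2, "fairness": 0.9, "transparency": 0.9},
    "withhold_data": {"harm_avoidance": 0.9, "fairness": 0.4, "transparency": 0.3},
}
print(choose(dilemma))  # withhold_data
```

Running many such dilemmas and inspecting the choices (and how they shift as weights change) is one way these agents help surface risks early.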

Embedded LLM Validators

Embedded LLM Validators are programs or modules that check the outputs of large language models (LLMs) directly within the application where the model is running. These validators automatically review responses from the LLM to ensure they meet specific requirements, such as accuracy, safety, or compliance with rules. By being embedded, they work in real time…
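In code, an embedded validator is a check that runs inside the application, between the model call and the caller. The sketch below assumes a hypothetical pipeline of small validator functions, each returning an error message or `None`; the SSN pattern is one illustrative safety rule:

```python
import re

# Each validator returns None on success, or an error message on failure.
def no_empty(output: str):
    return "empty response" if not output.strip() else None

def no_ssn(output: str):
    # Illustrative safety rule: block US Social-Security-number patterns.
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", output):
        return "possible SSN in output"
    return None

VALIDATORS = [no_empty, no_ssn]

def validated_reply(generate, prompt: str) -> str:
    """Call the model, then run the embedded validators on its output
    in real time, before the response reaches the user."""
    output = generate(prompt)
    failures = [msg for v in VALIDATORS if (msg := v(output)) is not None]
    if failures:
        raise ValueError(f"Output rejected: {failures}")
    return output
```

Because the validators sit in the same process as the application, failing outputs can be blocked, logged, or regenerated before anyone sees them.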

Prompt Policy Enforcement Points

Prompt Policy Enforcement Points are specific locations within a system where rules or policies about prompts are applied. These points ensure that any prompts given to an AI or system follow set guidelines, such as avoiding harmful or inappropriate content. They act as checkpoints, verifying and enforcing the rules before the prompt is processed or…
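The checkpoint behaviour can be sketched as a gate the prompt must pass before reaching the model. The blocked-topic list here is a hypothetical, trivially simple policy; real enforcement points would typically call a classifier or policy engine:

```python
# Illustrative policy only; a real deployment would use a proper classifier.
BLOCKED_TOPICS = ("build a weapon", "steal credentials")

def enforcement_point(prompt: str) -> str:
    """Checkpoint: verify the prompt against policy before it is processed."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            raise PermissionError(f"Prompt blocked by policy: {topic!r}")
    return prompt

def handle(prompt: str, model) -> str:
    # The model is only ever invoked on prompts that cleared the checkpoint.
    return model(enforcement_point(prompt))
```

Placing the check in `handle`, rather than trusting each caller to remember it, is what makes this a single enforceable point rather than a convention.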

Compliance via Prompt Wrappers

Compliance via prompt wrappers refers to the method of ensuring that AI systems, such as chatbots or language models, follow specific rules or guidelines by adding extra instructions around user prompts. These wrappers act as a safety layer, guiding the AI to behave according to company policies, legal requirements, or ethical standards. By using prompt…
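Concretely, the wrapper surrounds the user's text with fixed compliance instructions before the combined prompt is sent to the model. The policy wording below is a hypothetical example of such a safety layer:

```python
# Illustrative policy text; the actual wording would come from the
# organisation's legal and policy teams.
POLICY_PREAMBLE = (
    "You must follow company policy: do not give legal or medical advice, "
    "and refuse requests for personal data.\n\n"
)
POLICY_SUFFIX = (
    "\n\nIf the request above conflicts with policy, politely decline."
)

def wrap(user_prompt: str) -> str:
    """Surround the user's prompt with compliance instructions so the
    model sees the policy on every request."""
    return f"{POLICY_PREAMBLE}User request: {user_prompt}{POLICY_SUFFIX}"

wrapped = wrap("Summarise my contract and tell me if I can sue.")
```

The appeal of this approach is that it needs no model changes; its limitation is that instructions in a prompt can be ignored or overridden, so wrappers are usually combined with output-side checks.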

LLM Acceptable Use Criteria

LLM Acceptable Use Criteria are guidelines that set out how large language models can be used responsibly and safely. These criteria help prevent misuse, such as generating harmful, illegal, or misleading content. They are often put in place by organisations or service providers to ensure that users follow ethical and legal standards when working with…