Category: Responsible AI

Name Injection

Name injection is a type of security vulnerability in which an attacker manipulates input fields to inject unexpected or malicious names into a system. This can happen when software uses user-supplied data to generate or reference variables, files, or database fields without proper validation. If not handled correctly, name injection can lead to unauthorised access, data corruption, or other security failures.
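One common defence is to validate user-supplied names against a strict allowlist pattern before using them as identifiers. The sketch below (in Python; the pattern and length limit are illustrative choices, not a standard) rejects anything that is not a short alphanumeric/underscore token:

```python
import re

def safe_field_name(user_input: str) -> str:
    """Validate a user-supplied name before using it as an identifier.

    Only a leading letter or underscore followed by up to 63 word
    characters is accepted, which blocks spaces, path separators,
    SQL metacharacters and similar injection payloads.
    """
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,63}", user_input):
        raise ValueError(f"invalid name: {user_input!r}")
    return user_input

safe_field_name("user_age")                    # accepted unchanged
# safe_field_name("age; DROP TABLE users")     # raises ValueError
```

Rejecting invalid names outright, rather than trying to sanitise them, keeps the rule simple to audit and avoids surprising transformations of user input.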

Human Rating

Human rating is the process of evaluating or scoring something using human judgement instead of automated systems. This often involves people assessing the quality, accuracy, or usefulness of content, products, or services. Human rating is valuable when tasks require understanding, context, or subjective opinions that computers may not accurately capture.

Annotator Scores

Annotator scores are numerical ratings or evaluations given by people who label or review data, such as texts, images or videos. These scores reflect the quality, relevance or accuracy of the information being labelled. Collecting annotator scores helps measure agreement between different annotators and improves the reliability of data used in research or machine learning.
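Agreement between annotators is often summarised with a chance-corrected statistic such as Cohen's kappa. A minimal sketch for two annotators (the label values here are just example data):

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two annotators.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement two raters would reach by chance
    given their individual label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        counts_a[label] * counts_b[label]
        for label in set(rater_a) | set(rater_b)
    ) / n ** 2
    return (observed - expected) / (1 - expected)

# Two annotators labelling five items as relevant (1) or not (0):
kappa = cohens_kappa([1, 1, 0, 1, 0], [1, 0, 0, 1, 0])  # ≈ 0.615
```

Values near 1 indicate strong agreement beyond chance, values near 0 indicate agreement no better than chance; low scores are a signal to clarify the labelling guidelines.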

Bias Control

Bias control refers to the methods and processes used to reduce or manage bias in data, research, or decision-making. Bias can cause unfair or inaccurate outcomes, so controlling it helps ensure results are more reliable and objective. Techniques for bias control include careful data collection, using diverse datasets, and applying statistical methods to minimise unwanted influence on the results.
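One simple statistical technique is reweighting: records from under-represented groups are given larger weights so that each group contributes equally in aggregate. A minimal sketch (the group labels are illustrative):

```python
from collections import Counter

def balancing_weights(groups: list) -> list:
    """Weight each record inversely to its group's frequency.

    With k groups and n records, a record in a group of size c gets
    weight n / (k * c), so every group's total weight is n / k.
    """
    counts = Counter(groups)
    k, n = len(counts), len(groups)
    return [n / (k * counts[g]) for g in groups]

# A dataset where group "a" outnumbers group "b" three to one:
weights = balancing_weights(["a", "a", "a", "b"])
# → [0.667, 0.667, 0.667, 2.0]; both groups now sum to 2.0
```

Reweighting does not remove bias from the underlying data, but it prevents a majority group from dominating averages or training objectives.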

Persona Control

Persona control is the ability to guide or manage how an artificial intelligence system presents itself when interacting with users. This means setting specific characteristics, behaviours or tones for the AI, so it matches the intended audience or task. By adjusting these traits, businesses and developers can ensure the AI’s responses feel more consistent and appropriate for the context in which they appear.
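In chat-based systems, persona control is often implemented by prepending a system instruction that fixes the assistant's role and tone. A minimal sketch below builds such a message list; the role/content schema mirrors common chat-completion APIs but is illustrative rather than tied to any specific provider:

```python
def build_messages(persona: str, user_message: str) -> list:
    """Prepend a persona-setting system instruction to a conversation.

    The system message constrains tone and behaviour for every
    subsequent reply; the user message carries the actual request.
    """
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_message},
    ]

messages = build_messages(
    "You are a patient, formal support assistant for a banking app. "
    "Avoid slang and never speculate about account details.",
    "Why was my card declined?",
)
```

Keeping the persona in one system message, rather than scattered through prompts, makes it easy to audit and to swap for a different audience or task.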