Category: Responsible AI

Ethical AI Layer

An Ethical AI Layer is a set of rules, processes, or technologies added to artificial intelligence systems to ensure their decisions and actions align with human values and ethical standards. This layer works to prevent bias, discrimination, or harmful outcomes from AI behaviour. It can include guidelines, monitoring tools, or automated checks that guide AI systems towards acceptable behaviour.
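
As an illustration, here is a minimal sketch of one kind of automated check such a layer might apply: a hypothetical post-processing step that screens a model's output against simple rules before it is released. The rule list, function name, and threshold are assumptions made for this example, not a standard implementation.

```python
# Minimal sketch of an ethical AI layer as a post-processing check.
# The blocked terms and the review threshold are illustrative assumptions.
BLOCKED_TERMS = {"blocked_phrase_1", "blocked_phrase_2"}  # placeholder rule list
REVIEW_THRESHOLD = 0.8  # hypothetical confidence below which a human reviews

def ethical_layer(model_output: str, confidence: float) -> dict:
    """Apply simple rule-based checks before an AI output is released."""
    if any(term in model_output.lower() for term in BLOCKED_TERMS):
        return {"action": "block", "reason": "matched a blocked term"}
    if confidence < REVIEW_THRESHOLD:
        return {"action": "escalate", "reason": "low confidence, send to human review"}
    return {"action": "allow", "reason": "passed automated checks"}

print(ethical_layer("a harmless answer", confidence=0.95))
```

In practice such a layer usually combines several checks (rules, classifiers, human review queues); the single rule-based function above only shows where the check sits relative to the model's output.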

Carbon Tracker

A carbon tracker is a tool or system used to measure, monitor, and report the amount of carbon dioxide and other greenhouse gases produced by activities, organisations, or products. It helps individuals and companies understand their environmental impact by tracking emissions over time. Carbon trackers are often used to support efforts to reduce carbon footprints and meet sustainability goals.
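
For instance, a minimal sketch of how a carbon tracker might convert activity data into an emissions estimate, using the common activity-amount times emission-factor calculation. The factor values below are placeholders for illustration, not authoritative figures.

```python
# Minimal carbon tracker sketch: emissions = activity amount * emission factor.
# The emission factors below are placeholder values for illustration only.
EMISSION_FACTORS_KG_CO2E = {
    "electricity_kwh": 0.25,  # kg CO2e per kWh (illustrative)
    "car_km": 0.18,           # kg CO2e per km driven (illustrative)
    "flight_km": 0.15,        # kg CO2e per passenger-km (illustrative)
}

def track_emissions(activities: dict) -> float:
    """Return total kg CO2e for a dict of {activity: amount}."""
    total = 0.0
    for activity, amount in activities.items():
        factor = EMISSION_FACTORS_KG_CO2E.get(activity)
        if factor is None:
            raise ValueError(f"No emission factor for activity: {activity}")
        total += amount * factor
    return total

monthly = {"electricity_kwh": 300, "car_km": 450}
print(f"Estimated monthly footprint: {track_emissions(monthly):.1f} kg CO2e")
```

Real trackers draw their emission factors from published datasets and record results over time; the sketch only shows the core calculation.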

Policy Intelligence

Policy intelligence refers to the process of gathering, analysing, and interpreting information about public policies, regulations, and political developments. It helps organisations, businesses, and governments understand how current or upcoming policies might impact their operations or goals. By using data and expert insights, policy intelligence supports better decision making and strategic planning.

Name Injection

Name injection is a type of security vulnerability where an attacker manipulates input fields to inject unexpected or malicious names into a system. This can happen when software uses user-supplied data to generate or reference variables, files, or database fields without proper validation. If not handled correctly, name injection can lead to unauthorised access, data loss, or other security problems.
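
As a sketch of the validation mentioned above, the example below checks a user-supplied column name against an allow-list before placing it into a query, which is one common way to block injected identifiers. The table and column names are hypothetical.

```python
# Sketch of guarding against name injection: a user-supplied identifier is
# checked against an allow-list before being used to build a query.
# The table and column names here are hypothetical examples.
ALLOWED_SORT_COLUMNS = {"created_at", "username", "score"}

def build_query(sort_column: str) -> str:
    """Build a query only if the user-supplied column name is allow-listed."""
    if sort_column not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"Rejected untrusted column name: {sort_column!r}")
    # Safe: the identifier now comes from our own fixed set, not raw user input.
    return f"SELECT * FROM users ORDER BY {sort_column}"

print(build_query("score"))                  # accepted
# build_query("score; DROP TABLE users")     # would raise ValueError
```

Allow-listing works here because identifiers (unlike data values) usually cannot be passed as bound query parameters, so restricting them to known names is the simplest defence.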

Human Rating

Human rating is the process of evaluating or scoring something using human judgement instead of automated systems. This often involves people assessing the quality, accuracy, or usefulness of content, products, or services. Human rating is valuable when tasks require understanding, context, or subjective opinions that computers may not accurately capture.

Annotator Scores

Annotator scores are numerical ratings or evaluations given by people who label or review data, such as texts, images or videos. These scores reflect the quality, relevance or accuracy of the information being labelled. Collecting annotator scores helps measure agreement between different annotators and improves the reliability of data used in research or machine learning.
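
As a sketch of how annotator scores can be used to measure agreement, the example below computes raw percentage agreement and Cohen's kappa (agreement corrected for chance) for two hypothetical annotators; the labels are made up for illustration.

```python
from collections import Counter

def percent_agreement(a: list, b: list) -> float:
    """Fraction of items on which two annotators gave the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    n = len(a)
    observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Expected agreement if both annotators labelled at random while keeping
    # their own label frequencies.
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical quality labels from two annotators on the same ten items.
annotator_1 = ["good", "good", "bad", "good", "bad", "good", "bad", "bad", "good", "good"]
annotator_2 = ["good", "bad", "bad", "good", "bad", "good", "good", "bad", "good", "good"]
print(f"Agreement: {percent_agreement(annotator_1, annotator_2):.2f}")   # 0.80
print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")    # ~0.57
```

Chance-corrected measures like kappa are preferred over raw agreement when labels are imbalanced, since two annotators who mostly pick the majority label will agree often by chance alone.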

Bias Control

Bias control refers to the methods and processes used to reduce or manage bias in data, research, or decision-making. Bias can cause unfair or inaccurate outcomes, so controlling it helps ensure results are more reliable and objective. Techniques for bias control include careful data collection, using diverse datasets, and applying statistical methods to minimise unwanted bias in results.
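
One common statistical technique is reweighting, sketched below: records from under-represented groups receive higher weights so that each group contributes equally to an analysis or a model's training. The group attribute and records are illustrative assumptions.

```python
from collections import Counter

def group_reweight(records: list, group_key: str) -> list:
    """Return one weight per record so that every group contributes equally.

    Weight = total / (number_of_groups * group_size), a standard balancing scheme.
    """
    counts = Counter(r[group_key] for r in records)
    total, n_groups = len(records), len(counts)
    return [total / (n_groups * counts[r[group_key]]) for r in records]

# Illustrative records with an imbalanced "group" attribute (8 vs 2).
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
weights = group_reweight(data, "group")
print(weights[0], weights[-1])  # A records get weight 0.625, B records get 2.5
```

With these weights, each group's total weight is equal (5.0 and 5.0 in the example), so downstream statistics or training are no longer dominated by the larger group.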