Category: AI Ethics & Bias

Feedback Tags

Feedback tags are short labels or keywords used to categorise, summarise, or highlight specific points within feedback. They help organise responses and make it easier to identify common themes, such as communication, teamwork, or punctuality. By using feedback tags, individuals and organisations can quickly sort and analyse feedback for trends or actionable insights.
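
As a minimal sketch (the tags and feedback text here are made up), tagged feedback can be stored and counted in a few lines of Python to surface common themes:

    from collections import Counter

    # Each piece of feedback carries free text plus a set of tags.
    feedback = [
        {"text": "Always replies quickly to emails.", "tags": ["communication"]},
        {"text": "Arrived late to two stand-ups.", "tags": ["punctuality"]},
        {"text": "Great at unblocking teammates.", "tags": ["teamwork", "communication"]},
    ]

    # Count how often each tag appears to surface recurring themes.
    tag_counts = Counter(tag for item in feedback for tag in item["tags"])
    print(tag_counts.most_common())
    # [('communication', 2), ('punctuality', 1), ('teamwork', 1)]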

Response Divergence

Response divergence refers to the situation where different systems, people or models provide varying answers or reactions to the same input or question. This can happen due to differences in experience, training data, interpretation or even random chance. Understanding response divergence is important for evaluating reliability and consistency in systems like artificial intelligence and surveys.
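
One simple way to quantify divergence, sketched below with hypothetical answers, is the fraction of response pairs that disagree:

    from itertools import combinations

    def divergence(responses):
        """Fraction of response pairs that disagree (0 = full agreement, 1 = all differ)."""
        pairs = list(combinations(responses, 2))
        if not pairs:
            return 0.0
        return sum(a != b for a, b in pairs) / len(pairs)

    # Three hypothetical systems answering the same question.
    answers = ["Paris", "Paris", "Lyon"]
    print(divergence(answers))  # 0.666... : two of the three pairs disagree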

Human Rating

Human rating is the process of evaluating or scoring something using human judgement instead of automated systems. This often involves people assessing the quality, accuracy, or usefulness of content, products, or services. Human rating is valuable when tasks require understanding, context, or subjective opinions that computers may not accurately capture.
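
For instance, a handful of hypothetical 1-to-5 ratings can be summarised by their average and spread, where a large spread signals that the raters disagree:

    from statistics import mean, stdev

    # Hypothetical 1-5 quality ratings from several human raters for one item.
    ratings = [4, 5, 3, 4, 4]

    print(f"average: {mean(ratings):.1f}")   # central score: 4.0
    print(f"spread:  {stdev(ratings):.2f}")  # 0.71; a high spread means raters disagree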

Annotator Scores

Annotator scores are numerical ratings or evaluations given by people who label or review data, such as texts, images or videos. These scores reflect the quality, relevance or accuracy of the information being labelled. Collecting annotator scores helps measure agreement between different annotators and improves the reliability of data used in research or machine learning.
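
A common agreement measure is Cohen's kappa, which corrects raw agreement for chance. Below is a small self-contained sketch using hypothetical labels:

    from collections import Counter

    def cohen_kappa(a, b):
        """Agreement between two annotators, corrected for chance (Cohen's kappa)."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        freq_a, freq_b = Counter(a), Counter(b)
        # Chance agreement: probability both annotators pick the same label at random.
        expected = sum(freq_a[l] * freq_b[l] for l in set(a) | set(b)) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two annotators scoring the same five items.
    ann1 = ["good", "good", "bad", "good", "bad"]
    ann2 = ["good", "bad", "bad", "good", "bad"]
    print(round(cohen_kappa(ann1, ann2), 3))  # 0.615

A kappa of 1 indicates perfect agreement, while 0 means the annotators agree no more often than chance would predict.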

Label Errors

Label errors occur when the information assigned to data, such as categories or values, is incorrect or misleading. This often happens during data annotation, where mistakes can result from human error, misunderstanding, or unclear guidelines. Such errors can negatively impact the performance and reliability of machine learning models trained on the data.
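
One simple heuristic for surfacing candidate label errors, sketched below with hypothetical model probabilities and an assumed threshold, is to flag items where a trained model assigns low probability to the given label:

    # Flag items where the model's predicted probability for the assigned label
    # falls below a threshold, suggesting a possible label error.
    dataset = [
        {"id": 1, "label": "cat", "probs": {"cat": 0.95, "dog": 0.05}},
        {"id": 2, "label": "dog", "probs": {"cat": 0.90, "dog": 0.10}},  # suspicious
        {"id": 3, "label": "cat", "probs": {"cat": 0.60, "dog": 0.40}},
    ]

    THRESHOLD = 0.3  # assumed cut-off; tune per dataset

    suspects = [item["id"] for item in dataset
                if item["probs"][item["label"]] < THRESHOLD]
    print(suspects)  # [2] -> item 2 is a candidate for re-annotation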

Bias Control

Bias control refers to the methods and processes used to reduce or manage bias in data, research, or decision-making. Bias can cause unfair or inaccurate outcomes, so controlling it helps ensure results are more reliable and objective. Techniques for bias control include careful data collection, using diverse datasets, and applying statistical methods to minimise unwanted effects.
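
One such statistical technique is inverse-frequency reweighting, sketched here on hypothetical records so that an under-represented group contributes equally in total:

    from collections import Counter

    # Hypothetical records where one group is under-represented.
    groups = ["A"] * 8 + ["B"] * 2

    counts = Counter(groups)
    n = len(groups)

    # Inverse-frequency weights so each group contributes equally in total.
    weights = {g: n / (len(counts) * c) for g, c in counts.items()}
    print(weights)  # {'A': 0.625, 'B': 2.5} -> the minority group B is up-weighted

With these weights, each group's total weight is 5.0, so neither dominates a weighted analysis or training run.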

Label Calibration

Label calibration is the process of adjusting the confidence scores produced by a machine learning model so they better reflect the true likelihood of an outcome. This helps ensure that, for example, if a model predicts something with 80 percent confidence, it will be correct about 80 percent of the time. Calibrating labels can improve the reliability of a model's predictions.
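
A standard way to measure miscalibration is the expected calibration error, which compares stated confidence with observed accuracy in bins. The sketch below uses hypothetical predictions:

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=5):
        """Average gap between stated confidence and observed accuracy per bin."""
        confidences = np.asarray(confidences)
        correct = np.asarray(correct, dtype=float)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(confidences[mask].mean() - correct[mask].mean())
                ece += mask.mean() * gap  # weight by the fraction of samples in the bin
        return ece

    # Hypothetical model outputs: stated confidence vs. whether it was right.
    conf = [0.9, 0.8, 0.85, 0.7, 0.95]
    hit = [1, 0, 1, 1, 1]
    print(round(expected_calibration_error(conf, hit), 3))  # 0.16

An error near zero means confidence scores track accuracy well; a large value suggests the model's scores need recalibrating.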

Fairness in AI

Fairness in AI refers to the effort to ensure artificial intelligence systems treat everyone equally and avoid discrimination. This means the technology should not favour certain groups or individuals over others based on factors like race, gender, age or background. Achieving fairness involves checking data, algorithms and outcomes to spot and fix any biases that may be present.
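
A basic quantitative check is demographic parity: comparing positive-outcome rates across groups, as in this sketch with made-up decisions:

    # Compare the rate of positive outcomes (e.g. approvals) across groups.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    def approval_rate(group):
        rows = [d for d in decisions if d["group"] == group]
        return sum(d["approved"] for d in rows) / len(rows)

    rate_a, rate_b = approval_rate("A"), approval_rate("B")
    print(f"A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
    # A: 0.67, B: 0.33, gap: 0.33 -> a large gap may signal bias worth investigating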

Bias Detection Framework

A bias detection framework is a set of tools, methods, and processes designed to identify and measure biases in data, algorithms, or decision-making systems. Its goal is to help ensure that automated systems treat all individuals or groups fairly and do not inadvertently disadvantage anyone. These frameworks often include both quantitative checks, such as statistical tests for disparities between groups, and qualitative reviews of how a system is designed and used.
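
A toy version of such a framework, with hypothetical check names and thresholds, might register named checks and run them over a dataset:

    # A minimal sketch of a bias detection framework: a registry of named checks,
    # each returning a (passed, detail) pair. Names and thresholds are assumptions.
    def parity_gap_check(data, threshold=0.1):
        rates = {}
        for row in data:
            rates.setdefault(row["group"], []).append(row["outcome"])
        rates = {g: sum(v) / len(v) for g, v in rates.items()}
        gap = max(rates.values()) - min(rates.values())
        return gap <= threshold, f"outcome-rate gap {gap:.2f} across groups"

    def representation_check(data, threshold=0.2):
        counts = {}
        for row in data:
            counts[row["group"]] = counts.get(row["group"], 0) + 1
        share = min(counts.values()) / len(data)
        return share >= threshold, f"smallest group is {share:.0%} of the data"

    CHECKS = {"parity_gap": parity_gap_check, "representation": representation_check}

    data = [{"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
            {"group": "A", "outcome": 0}, {"group": "B", "outcome": 0}]

    for name, check in CHECKS.items():
        passed, detail = check(data)
        print(f"{name}: {'PASS' if passed else 'FAIL'} ({detail})")

Keeping each check as a small, named function makes it easy to add new quantitative tests alongside the qualitative reviews the framework also calls for.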