Category: Explainability & Interpretability

Annotator Scores

Annotator scores are numerical ratings or evaluations given by people who label or review data, such as texts, images or videos. These scores reflect the quality, relevance or accuracy of the information being labelled. Collecting annotator scores helps measure agreement between different annotators and improves the reliability of data used in research or machine learning.
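One common way to measure agreement between annotators is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch (the rating scale and data are illustrative):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators scored identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator assigned scores at random
    # according to their own marginal score frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators rate five items on a hypothetical 1-3 quality scale.
a = [3, 2, 3, 1, 2]
b = [3, 2, 3, 2, 2]
print(round(cohens_kappa(a, b), 2))
```

A kappa of 1.0 means perfect agreement, 0 means no better than chance; values in between indicate partial agreement.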

Response Labelling

Response labelling is the process of assigning descriptive tags or categories to answers or outputs in a dataset. This helps to organise and identify different types of responses, making it easier to analyse and understand the data. It is commonly used in machine learning, surveys, or customer service systems to classify and manage information efficiently.
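In its simplest form, response labelling can be rule-based: each answer is checked against a set of tags and assigned the first category that matches. The label set and keyword rules below are purely illustrative:

```python
# Hypothetical label set and keyword rules for a customer-service dataset.
LABEL_RULES = {
    "complaint": ["refund", "broken", "disappointed"],
    "question": ["how", "when", "where", "?"],
    "praise": ["thanks", "great", "love"],
}

def label_response(text):
    """Assign the first matching descriptive tag, or 'other' if none match."""
    lowered = text.lower()
    for label, keywords in LABEL_RULES.items():
        # Naive substring matching; a real system would tokenise first
        # to avoid false matches inside longer words.
        if any(keyword in lowered for keyword in keywords):
            return label
    return "other"

responses = [
    "The item arrived broken, I want a refund.",
    "How do I reset my password?",
    "Thanks, great service!",
]
print([label_response(r) for r in responses])
```

In practice such rules are usually only a starting point; the resulting labels are then reviewed by annotators or used to bootstrap a machine learning classifier.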

Label Calibration

Label calibration is the process of adjusting the confidence scores produced by a machine learning model so they better reflect the true likelihood of an outcome. This helps ensure that, for example, if a model predicts something with 80 percent confidence, it will be correct about 80 percent of the time. Calibrating labels can improve…
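A common way to check calibration is to group predictions into confidence bins and compare each bin's average confidence to its observed accuracy; the weighted gap between the two is the expected calibration error. A minimal sketch (bin count and data are illustrative):

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """Weighted average gap between stated confidence and observed accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, is_correct in zip(confidences, correct):
        # Map a confidence in [0, 1] to one of n_bins equal-width bins.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, is_correct))
    n = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(avg_conf - accuracy)
    return ece

# A well-calibrated toy case: 0.8-confidence predictions are right 8 times in 10,
# so the gap between confidence and accuracy is essentially zero.
confs = [0.8] * 10
hits = [True] * 8 + [False] * 2
print(round(expected_calibration_error(confs, hits), 3))
```

An overconfident model, by contrast, would show a large gap: predictions made at 90 percent confidence that are only right half the time contribute an error of about 0.4.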