Cognitive Bias Mitigation
Cognitive bias mitigation refers to strategies and techniques used to reduce the impact of automatic thinking errors that can influence decisions and judgements. These biases are mental shortcuts that can lead people to make choices that are not always logical or optimal. By recognising and addressing these biases, individuals and groups can make more accurate…
Category: AI Ethics & Bias
Accessibility in Digital Systems
Accessibility in digital systems means designing websites, apps, and other digital tools so that everyone, including people with disabilities, can use them easily. This involves making sure that content is understandable, navigable, and usable by people who may use assistive technologies like screen readers or voice commands. Good accessibility helps remove barriers and ensures all…
Fairness-Aware Machine Learning
Fairness-Aware Machine Learning refers to developing and using machine learning models that aim to make decisions without favouring or discriminating against individuals or groups based on sensitive characteristics such as gender, race, or age. It involves identifying and reducing biases that can exist in data or algorithms to ensure fair outcomes for everyone affected by…
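One widely used fairness criterion is demographic parity: the rate of positive outcomes should be similar across groups. A minimal sketch, using hypothetical prediction data (the group names and approval scenario are invented for illustration):

```python
# Toy illustration: demographic parity difference, a common group
# fairness metric. A model satisfies demographic parity when the
# positive-outcome rates are similar across sensitive groups.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

# Hypothetical binary predictions (1 = positive outcome, e.g. approved)
# for two groups defined by a sensitive attribute.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # positive rate 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # positive rate 0.25

disparity = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Demographic parity difference: {disparity:.3f}")
```

A disparity near zero suggests parity on this metric; a large gap, as here, flags a potential bias worth investigating, though no single metric captures fairness on its own.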
Heuristic Anchoring Bias in LLMs
Heuristic anchoring bias in large language models (LLMs) refers to the tendency of these models to rely too heavily on the first piece of information they receive when generating responses. This bias can influence the accuracy and relevance of their outputs, especially if the initial prompt or context skews the model’s interpretation. As a result,…
Proxy Alignment Drift
Proxy alignment drift refers to the gradual shift that occurs when a system or agent starts optimising for an indirect goal, known as a proxy, rather than the true intended objective. Over time, the system may become increasingly focused on the proxy, losing alignment with what was originally intended. This issue is common in automated…
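The dynamic can be shown with a toy simulation (the "sensationalism" knob, the click proxy, and the satisfaction function are all invented for illustration): an agent greedily optimises a proxy that initially tracks the true objective, then diverges from it.

```python
# Hypothetical sketch: an agent tunes one knob ("sensationalism") to
# maximise a proxy metric (clicks). The true objective (reader
# satisfaction) rises with the proxy at first, then declines.

def clicks(sensationalism):
    # Proxy: clicks increase monotonically with sensationalism.
    return sensationalism

def satisfaction(sensationalism):
    # True objective: peaks at 0.5, then falls off.
    return sensationalism * (1.0 - sensationalism)

# Greedy hill-climbing on the proxy alone, ignoring the true objective.
knob = 0.1
for _ in range(20):
    if clicks(knob + 0.05) > clicks(knob):
        knob = min(knob + 0.05, 1.0)

print(f"knob={knob:.2f} "
      f"clicks={clicks(knob):.2f} "
      f"satisfaction={satisfaction(knob):.2f}")
```

The proxy score ends at its maximum while the true objective collapses toward zero: the system has drifted fully onto the proxy.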
Ethical AI
Ethical AI refers to the development and use of artificial intelligence systems in ways that are fair, responsible, and respectful of human rights. It involves creating AI that avoids causing harm, respects privacy, and treats all people equally. The goal is to ensure that the benefits of AI are shared fairly and that negative impacts…
Bias Mitigation
Bias mitigation refers to the methods and strategies used to reduce unfairness or prejudice within data, algorithms, or decision-making processes. It aims to ensure that outcomes are not skewed against particular groups or individuals. By identifying and addressing sources of bias, bias mitigation helps create more equitable and trustworthy systems.
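One concrete pre-processing strategy is reweighing: each training example is weighted by P(group) x P(label) / P(group, label), so that group membership and label appear statistically independent to the learner. A minimal sketch with hypothetical data:

```python
# Minimal sketch of reweighing, a pre-processing bias mitigation
# technique. Examples from under-represented (group, label) pairs
# receive weights above 1; over-represented pairs, below 1.
# The data below is hypothetical.

from collections import Counter

samples = [  # (sensitive group, label)
    ("a", 1), ("a", 1), ("a", 0), ("a", 1),
    ("b", 0), ("b", 0), ("b", 1), ("b", 0),
]
n = len(samples)
group_counts = Counter(g for g, _ in samples)
label_counts = Counter(y for _, y in samples)
joint_counts = Counter(samples)

weights = [
    (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
    for g, y in samples
]
print(weights)
```

Here the rare pairs ("a", 0) and ("b", 1) get weight 2.0 while the common pairs get 2/3, so a learner trained with these weights sees a dataset in which group and label are no longer correlated.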
Responsible AI
Responsible AI refers to the practice of designing, developing and using artificial intelligence systems in ways that are ethical, fair and safe. It means making sure AI respects people’s rights, avoids causing harm and works transparently. Responsible AI also involves considering the impact of AI decisions on individuals and society, including issues like bias, privacy…
Knowledge Calibration
Knowledge calibration is the process of matching your confidence in what you know to how accurate your knowledge actually is. It helps you recognise when you are sure about something and when you might be guessing or uncertain. Good calibration means you are neither overconfident nor underconfident about what you know.
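Calibration can be checked directly by comparing stated confidence against actual accuracy within each confidence band. A minimal sketch, using invented answer data:

```python
# Hypothetical sketch: comparing stated confidence with hit rate.
# Each entry is (stated confidence, whether the answer was correct).
# A well-calibrated person's accuracy matches their stated confidence.

answers = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),   # claimed 90%
    (0.6, True), (0.6, False), (0.6, False), (0.6, True),  # claimed 60%
]

by_band = {}
for conf, correct in answers:
    by_band.setdefault(conf, []).append(correct)

for conf, results in sorted(by_band.items()):
    accuracy = sum(results) / len(results)
    gap = conf - accuracy
    label = "overconfident" if gap > 0 else "calibrated or underconfident"
    print(f"claimed {conf:.0%}, actual {accuracy:.0%} -> {label}")
```

In this invented data both bands show accuracy below the stated confidence (75% versus 90%, 50% versus 60%), i.e. overconfidence; good calibration would shrink those gaps toward zero.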