Category: Embeddings & Representations

Syntax Coherence

Syntax coherence refers to the logical and consistent arrangement of words and phrases within sentences, so that the meaning is clear and easy to follow. It ensures that the structure of sentences supports the intended message, making communication more effective. Without syntax coherence, writing can become confusing or ambiguous, making it harder for the reader…

Named Entity Recognition

Named entity recognition refers to the process of identifying and classifying proper names, such as people, organisations, or places, within a body of text. This task is often handled by computer systems that scan documents to pick out and categorise these names. It is a foundational technique in natural language processing used to make sense of…
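
As a minimal sketch, assuming spaCy and its small English pipeline en_core_web_sm are installed, entities can be picked out of a passage like this:

# A minimal named entity recognition example with spaCy
# (assumes the en_core_web_sm model has already been downloaded).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ada Lovelace worked with Charles Babbage in London.")

# Each entity carries the matched span of text and a label such as PERSON or GPE.
for ent in doc.ents:
    print(ent.text, ent.label_)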

Dialogue Memory

Dialogue memory is a system or method that allows a program, such as a chatbot or virtual assistant, to remember and refer back to previous exchanges in a conversation. This helps the software understand context, track topics, and respond more naturally to users. With dialogue memory, interactions feel more coherent and less repetitive, as the…
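
One common approach, sketched minimally below, is a rolling buffer of recent turns that is folded back into the next prompt; the class and method names here are illustrative rather than any particular library's API.

# A minimal dialogue-memory sketch: a rolling buffer of past turns
# that is prepended to each new prompt so the assistant keeps context.
from collections import deque

class DialogueMemory:
    def __init__(self, max_turns: int = 10):
        # Keep only the most recent turns to bound prompt length.
        self.turns = deque(maxlen=max_turns)

    def add(self, speaker: str, utterance: str) -> None:
        self.turns.append((speaker, utterance))

    def as_context(self) -> str:
        # Flatten remembered turns into a context block for the next reply.
        return "\n".join(f"{speaker}: {utterance}" for speaker, utterance in self.turns)

memory = DialogueMemory(max_turns=4)
memory.add("user", "My name is Priya.")
memory.add("assistant", "Nice to meet you, Priya.")
memory.add("user", "What's my name?")
print(memory.as_context())  # earlier turns give the assistant the context to answer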

Latent Injection

Latent injection is a technique used in artificial intelligence and machine learning where information is added or modified within the hidden, or ‘latent’, layers of a model. These layers represent internal features that the model has learned, which are not directly visible to users. By injecting new data or signals at this stage, developers can…
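
As an illustrative sketch in PyTorch, assuming a toy network and an arbitrary steering vector, a forward hook can inject a signal into the hidden activations without changing the input or the weights:

# A hypothetical latent-injection sketch: a forward hook adds a fixed
# "steering" vector to the hidden layer of a small network, altering its
# internal representation mid-forward-pass.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(8, 16),   # input projection
    nn.ReLU(),          # hidden ("latent") activations
    nn.Linear(16, 3),   # output head
)

steering_vector = torch.randn(16) * 0.5  # assumed signal to inject

def inject(module, inputs, output):
    # Returning a tensor from the hook replaces the layer's output.
    return output + steering_vector

hook = model[1].register_forward_hook(inject)

x = torch.randn(1, 8)
print(model(x))   # output with the injected latent signal
hook.remove()
print(model(x))   # output without injection, for comparison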

Embedding Injection

Embedding injection is a security vulnerability that occurs when untrusted input is inserted into a system that uses vector embeddings, such as those used in natural language processing or search. Attackers can exploit this by crafting inputs that manipulate or poison the embedding space, causing systems to retrieve incorrect or harmful results. This can lead…
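
A toy illustration with NumPy (the vectors are synthetic rather than outputs of a real embedding model): a crafted document embedding placed close to an anticipated query outranks legitimate documents under cosine-similarity retrieval.

# A toy sketch of embedding-space poisoning: the attacker's document vector
# is nearly parallel to the expected query, so it wins the similarity ranking.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
query = rng.normal(size=32)

legit_docs = {f"doc_{i}": rng.normal(size=32) for i in range(3)}
# Attacker crafts an embedding almost identical in direction to the query.
poisoned = query + rng.normal(scale=0.05, size=32)

corpus = {**legit_docs, "poisoned_doc": poisoned}
ranked = sorted(corpus, key=lambda name: cosine(query, corpus[name]), reverse=True)
print(ranked[0])  # the poisoned document is retrieved first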

Neural Collapse

Neural collapse is a phenomenon observed in deep neural networks during the final stages of training, particularly for classification tasks. It describes how the outputs or features for each class become highly clustered and the final layer weights align with these clusters. This leads to a simplified geometric structure where class features and decision boundaries…
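
One way to see this numerically is to compare within-class scatter against between-class scatter of the last-layer features; near collapse the ratio approaches zero. The sketch below uses synthetic features rather than a trained network, purely to illustrate the measurement:

# A toy check of within-class variability collapse on synthetic features:
# as features cluster tightly around their class means, the within/between
# scatter ratio shrinks towards zero.
import numpy as np

rng = np.random.default_rng(0)
num_classes, per_class, dim = 3, 100, 8
class_means = rng.normal(size=(num_classes, dim)) * 5.0

def scatter_ratio(noise_scale: float) -> float:
    features = np.concatenate([
        mean + rng.normal(scale=noise_scale, size=(per_class, dim))
        for mean in class_means
    ])
    labels = np.repeat(np.arange(num_classes), per_class)
    global_mean = features.mean(axis=0)
    within = between = 0.0
    for c in range(num_classes):
        cls = features[labels == c]
        mu = cls.mean(axis=0)
        within += ((cls - mu) ** 2).sum()
        between += per_class * ((mu - global_mean) ** 2).sum()
    return within / between

print(scatter_ratio(noise_scale=1.0))   # spread-out features: larger ratio
print(scatter_ratio(noise_scale=0.01))  # near collapse: ratio is tiny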