Category: Regulatory Compliance

Latent Prompt Injection

Latent prompt injection is a security issue affecting artificial intelligence systems that use language models. It occurs when hidden instructions or prompts are placed inside data, such as text or code, which the AI system later processes. These hidden prompts can make the AI system behave in unexpected or potentially harmful ways, without the user…

Dynamic Prompt Tuning

Dynamic prompt tuning is a technique used to improve the responses of artificial intelligence language models by adjusting the instructions or prompts given to them. Instead of using a fixed prompt, the system can automatically modify or optimise the prompt based on context, user feedback, or previous interactions. This helps the AI generate more accurate…