Category: Deep Learning

Conditional Generative Models

Conditional generative models are generative models that create new data based on specific input conditions or labels. Instead of generating random outputs, these models use extra information to guide what they produce. This allows for more control over the type of data generated, such as producing images of a certain category or…
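A class-conditional sampler is about the smallest possible illustration of this idea. In the sketch below, the labels and per-class distribution parameters are made up for illustration; the point is only that the label passed in steers what gets generated:

```python
import random

random.seed(0)

# Toy conditional generative model: the label conditions which
# distribution we sample from. Labels and parameters are invented
# for illustration, not learned.
CLASS_PARAMS = {
    "cat": (0.0, 1.0),   # (mean, std) for the "cat" feature
    "dog": (5.0, 1.0),   # a different mean shifts generated samples
}

def generate(label, n=100):
    """Sample n values conditioned on the given class label."""
    mean, std = CLASS_PARAMS[label]
    return [random.gauss(mean, std) for _ in range(n)]

cats = generate("cat", n=1000)
dogs = generate("dog", n=1000)
# Conditioning on the label controls the output distribution:
# the "cat" average lands near 0, the "dog" average near 5.
```

In a real conditional model (e.g. a conditional GAN or diffusion model) the mapping from label to distribution is learned by a neural network rather than looked up in a table, but the control mechanism is the same.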

Equivariant Neural Networks

Equivariant neural networks are a type of artificial neural network designed so that their outputs change predictably when the inputs are transformed. For example, if you rotate or flip an image, the network’s response changes in a consistent way that matches the transformation. This approach helps the network recognise patterns or features regardless of their…
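The defining property can be checked directly on a simple example. Circular cross-correlation (the operation behind convolutional layers) is equivariant to circular shifts: shifting the input and then applying the operation gives the same result as applying the operation and then shifting the output. The input and kernel values below are arbitrary:

```python
def shift(xs, k):
    """Circularly shift a sequence right by k positions."""
    k %= len(xs)
    return xs[-k:] + xs[:-k]

def conv(xs, kernel):
    """Circular cross-correlation: a shift-equivariant map."""
    n = len(xs)
    return [sum(kernel[j] * xs[(i + j) % n] for j in range(len(kernel)))
            for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 0.0]
k = [0.5, -0.25]

# Equivariance: transform-then-map equals map-then-transform.
assert conv(shift(x, 2), k) == shift(conv(x, k), 2)
```

Equivariant architectures generalise this idea from shifts to other transformation groups, such as rotations and reflections.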

Neural Process Models

Neural process models are computational systems that use neural networks to learn functions or processes from data. Unlike traditional neural networks that focus on mapping inputs to outputs, neural process models learn a distribution over entire functions, allowing them to adapt quickly to new tasks with limited data. These models are especially useful for problems where…
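The core pipeline is: encode each observed context point, pool the encodings into a single representation, and condition predictions at new inputs on that representation. The sketch below hand-wires tiny placeholder "networks" (the weights are arbitrary, not trained) to show the data flow, including the key property that mean-pooling makes the representation independent of the order of the context points:

```python
import math

def encode(x, y):
    """Placeholder 2-unit encoder with fixed, arbitrary weights."""
    return [math.tanh(0.7 * x + 0.3 * y), math.tanh(-0.2 * x + 0.9 * y)]

def aggregate(context):
    """Mean-pool encoded context points into one representation."""
    codes = [encode(x, y) for x, y in context]
    n = len(codes)
    return [sum(c[d] for c in codes) / n for d in range(2)]

def predict(x_target, rep):
    """Placeholder decoder: conditions the prediction on the pooled rep."""
    return 0.5 * x_target + 1.2 * rep[0] - 0.4 * rep[1]

context = [(0.0, 1.0), (1.0, 0.5), (2.0, -0.3)]
rep = aggregate(context)

# The context is a set, not a sequence: reordering it leaves the
# representation, and hence the prediction, unchanged.
rep_reversed = aggregate(list(reversed(context)))
assert abs(predict(1.5, rep) - predict(1.5, rep_reversed)) < 1e-12
```

In an actual neural process the encoder and decoder are trained across many related tasks, and the representation is often a distribution rather than a single vector, but the encode-pool-decode structure is the same.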

Invertible Neural Networks

Invertible neural networks are a type of artificial neural network designed so that their operations can be reversed. This means that, given the output, you can uniquely determine the input that produced it. Unlike traditional neural networks, which often lose information as data passes through layers, invertible neural networks preserve all information, making them especially…
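A standard building block for such networks is the additive coupling layer (used in NICE/RealNVP-style normalizing flows): split the input in two, pass one half through unchanged, and shift the other half by a function of the first. Because the shift can be recomputed from the unchanged half, the layer can be undone exactly. The shift function below is an arbitrary fixed function standing in for a learned network:

```python
import math

def t(h):
    """Stand-in for a learned shift network; any function works."""
    return [math.tanh(2.0 * v) for v in h]

def forward(x):
    x1, x2 = x[:2], x[2:]
    shift = t(x1)
    y2 = [a + b for a, b in zip(x2, shift)]
    return x1 + y2            # y1 = x1 passes through unchanged

def inverse(y):
    y1, y2 = y[:2], y[2:]
    shift = t(y1)             # recompute the same shift from y1 = x1
    x2 = [a - b for a, b in zip(y2, shift)]
    return y1 + x2

x = [0.3, -1.2, 0.8, 2.0]
# Invertible up to floating-point rounding: no information is lost.
assert all(abs(u - v) < 1e-9 for u, v in zip(inverse(forward(x)), x))
```

Stacking such layers, alternating which half is passed through, gives a deep network that remains invertible end to end.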

Neural Posterior Estimation

Neural Posterior Estimation is a machine learning technique that uses neural networks to approximate the posterior distribution, i.e. the probability of different causes or parameters given observed data. This approach is useful when traditional mathematical methods are too slow or complex to calculate these probabilities. By learning from examples, neural networks can quickly estimate how likely certain parameters are,…
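The workflow is: draw parameters from a prior, run them through a simulator, fit a conditional density estimator q(theta | x) on the simulated pairs, then query it instantly at the observed data. To keep the sketch dependency-free, the neural density estimator is replaced here with a linear-Gaussian one fit by least squares; the simulator and prior are invented for illustration:

```python
import random

rng = random.Random(0)

def simulate(theta):
    """Known simulator: observation = parameter + noise."""
    return theta + rng.gauss(0.0, 0.5)

# 1) Simulate (theta, x) pairs from the prior and the simulator.
pairs = []
for _ in range(20000):
    theta = rng.gauss(0.0, 1.0)            # prior draw
    pairs.append((theta, simulate(theta)))

# 2) Fit theta ~ a*x + b by least squares (stand-in for training
#    a neural conditional density estimator on the same pairs).
n = len(pairs)
mx = sum(x for _, x in pairs) / n
mt = sum(t for t, _ in pairs) / n
a = (sum((x - mx) * (t - mt) for t, x in pairs)
     / sum((x - mx) ** 2 for _, x in pairs))
b = mt - a * mx
resid = [t - (a * x + b) for t, x in pairs]
std = (sum(r * r for r in resid) / n) ** 0.5

# 3) Amortised posterior: evaluated instantly for any observation.
def posterior(x_obs):
    """Approximate q(theta | x): (mean, std)."""
    return a * x_obs + b, std

mean, sd = posterior(1.0)
# For this linear-Gaussian model the exact posterior given x = 1.0
# has mean 0.8 and std ~0.447, so the fit should land close to both.
```

The "amortised" part is step 3: once trained, the estimator answers posterior queries for any new observation without re-running inference, which is exactly what the neural network provides in real Neural Posterior Estimation.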

Neural Tangent Generalisation

Neural Tangent Generalisation refers to understanding how large neural networks learn and make predictions by using a mathematical tool called the Neural Tangent Kernel (NTK). This approach simplifies complex neural networks by treating them like linear models when they are very wide, making their behaviour easier to analyse. Researchers use this to predict how well…
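The NTK itself is concrete to compute: for two inputs, it is the dot product of the network's parameter gradients at those inputs. The sketch below does this by hand for a one-hidden-layer ReLU network f(x) = sum_j v_j * relu(w_j * x) with random (untrained) weights, and checks the basic kernel properties:

```python
import random

random.seed(1)
H = 50                                      # hidden width
w = [random.gauss(0, 1) for _ in range(H)]  # random init, as in NTK theory
v = [random.gauss(0, 1) / H ** 0.5 for _ in range(H)]

def grad(x):
    """Analytic gradient of f(x) with respect to all parameters (w, v)."""
    g_w = [v[j] * x if w[j] * x > 0 else 0.0 for j in range(H)]
    g_v = [max(w[j] * x, 0.0) for j in range(H)]
    return g_w + g_v

def ntk(x, xp):
    """Empirical NTK entry: inner product of parameter gradients."""
    return sum(a * b for a, b in zip(grad(x), grad(xp)))

xs = [0.5, 1.0, 2.0]
K = [[ntk(a, b) for b in xs] for a in xs]

# As a kernel, K is symmetric with a non-negative diagonal.
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(3) for j in range(3))
assert all(K[i][i] >= 0.0 for i in range(3))
```

In the infinite-width limit this kernel stays essentially fixed during training, which is what lets researchers treat very wide networks as linear models and study their generalisation analytically.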