Category: Artificial Intelligence

Secure Multi-Party Learning

Secure Multi-Party Learning is a way for different organisations or individuals to train machine learning models together without sharing their raw data. This method uses cryptographic techniques to keep each party’s data private during the learning process. The result is a shared model that benefits from everyone’s data, but no participant can see another’s sensitive data.
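A minimal sketch of one common building block, additive secret sharing, is below: each party splits its integer-encoded model update into random shares so that only the sum of everyone's updates can be reconstructed. The party values, number of parties, and prime modulus are made up for illustration.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic happens in a large prime field


def share(value: int, n_parties: int) -> list[int]:
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


# Each party holds a private, integer-encoded model update (e.g. a fixed-point gradient).
private_updates = [1200, -340, 785]

# Every party splits its update and sends one share to each peer.
all_shares = [share(u % PRIME, 3) for u in private_updates]

# Each party sums the shares it received; an individual partial sum reveals nothing.
partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]

# Combining the partial sums reconstructs only the aggregate update.
total = sum(partial_sums) % PRIME
if total > PRIME // 2:  # map back from the field to signed integers
    total -= PRIME
print(total)  # 1645 = 1200 - 340 + 785
```

In a full protocol the same trick is applied element-wise to whole gradient vectors, and the reconstruction step only ever happens on the aggregate.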

Encrypted Neural Networks

Encrypted neural networks are artificial intelligence models that process data without ever seeing the raw, unprotected information. They use encryption techniques to keep data secure during both training and prediction, so sensitive information like medical records or financial details stays private. This approach allows organisations to use AI on confidential data without risking exposure or…
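As an illustration, the sketch below evaluates a single linear layer on Paillier-encrypted inputs. The primes are deliberately tiny and hard-coded, the weights are integer-quantised, and the whole thing is a toy, not a production scheme: real systems use vetted libraries, large keys, and additional tricks for non-linear layers.

```python
from math import gcd

# Toy Paillier keypair with tiny hard-coded primes; real deployments use
# 2048-bit keys generated by a vetted library.
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                          # valid because g = n + 1


def encrypt(m: int, r: int) -> int:
    """Paillier encryption; r must be random and coprime with n."""
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq


def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n


# The client encrypts its integer-quantised input features and sends only ciphertexts.
x = [3, 7, 2]
cts = [encrypt(v, r) for v, r in zip(x, (101, 57, 89))]

# The server computes a linear layer w.x on ciphertexts alone, using
# Enc(a) * Enc(b) = Enc(a + b) and Enc(a)^w = Enc(w * a).
w = [2, 5, 1]
c_out = 1
for c, wi in zip(cts, w):
    c_out = (c_out * pow(c, wi, n_sq)) % n_sq

print(decrypt(c_out))  # 2*3 + 5*7 + 1*2 = 43
```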

Privacy-Preserving Inference

Privacy-preserving inference refers to methods that allow artificial intelligence models to make predictions or analyse data without exposing the sensitive personal information involved. These techniques ensure that the data used for inference remains confidential, even when it is processed by third-party services or remote servers. This is important for protecting user privacy…
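One simplified pattern is split inference: the client runs the first layer locally and sends only intermediate features, so the raw input never leaves the device. The sketch below assumes a toy two-layer model with made-up weights; note that intermediate features can still leak information, so real deployments combine this with encryption or noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split of a small network: the client keeps the first layer,
# the server holds the rest, so raw inputs never leave the client.
W1 = rng.normal(size=(16, 8))   # client-side layer of the shared model
W2 = rng.normal(size=(3, 16))   # server-side layer


def client_encode(x: np.ndarray) -> np.ndarray:
    """Run the private input through the local layer; only features are sent."""
    return np.maximum(W1 @ x, 0.0)  # ReLU(W1 x)


def server_predict(features: np.ndarray) -> np.ndarray:
    """The server finishes the forward pass without ever seeing x."""
    logits = W2 @ features
    logits -= logits.max()                           # numerical stability
    return np.exp(logits) / np.exp(logits).sum()     # softmax probabilities


x_private = rng.normal(size=8)   # e.g. a patient's feature vector
probs = server_predict(client_encode(x_private))
print(probs.round(3))
```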

Secure Model Aggregation

Secure model aggregation is a process used in machine learning where updates or results from multiple models or participants are combined without revealing sensitive information. This approach is important in settings like federated learning, where data privacy is crucial. Techniques such as encryption or secure computation ensure that individual contributions remain private during the aggregation process.
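The sketch below illustrates the pairwise-masking idea behind secure aggregation: every pair of clients agrees on a random mask that one adds and the other subtracts, so the server only ever sees masked updates, yet the masks cancel in the sum. Client counts, dimensions, and masks are made up here, and dropout handling is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clients, dim = 4, 6

# Each client's private model update (e.g. a gradient vector).
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Every pair (i, j) agrees on a shared random mask (in practice derived from a
# key exchange); client i adds it to its update, client j subtracts it.
pair_masks = {(i, j): rng.normal(size=dim)
              for i in range(n_clients) for j in range(i + 1, n_clients)}


def masked_update(i: int) -> np.ndarray:
    masked = updates[i].copy()
    for (a, b), mask in pair_masks.items():
        if a == i:
            masked += mask
        elif b == i:
            masked -= mask
    return masked


# The server only ever receives masked vectors ...
received = [masked_update(i) for i in range(n_clients)]

# ... yet the pairwise masks cancel in the sum, leaving the true aggregate.
aggregate = np.sum(received, axis=0)
assert np.allclose(aggregate, np.sum(updates, axis=0))
print(aggregate.round(3))
```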

Neural Network Efficiency

Neural network efficiency refers to how effectively a neural network uses resources such as time, memory, and energy to perform its tasks. Efficient neural networks are designed or optimised to provide accurate results while using as little computation and storage as possible. This is important for running models on devices with limited resources, such as…
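As a rough illustration, the snippet below estimates the parameter count, multiply-accumulate (MAC) operations, and weight memory of a small hypothetical multilayer perceptron, and shows how 8-bit quantisation shrinks the memory footprint; the layer sizes are invented for the example.

```python
# Rough resource estimate for a small MLP; one "MAC" is one
# multiply-accumulate in a single forward pass.
layers = [(784, 512), (512, 256), (256, 10)]  # (inputs, outputs) per layer

params = sum(fan_in * fan_out + fan_out for fan_in, fan_out in layers)  # weights + biases
macs = sum(fan_in * fan_out for fan_in, fan_out in layers)

print(f"parameters: {params:,}")
print(f"MACs per forward pass: {macs:,}")
print(f"weight memory (float32): {params * 4 / 1e6:.2f} MB")
print(f"weight memory (int8):    {params * 1 / 1e6:.2f} MB")
```

Quantisation, pruning, distillation, and architecture search all attack one or more of these numbers while trying to preserve accuracy.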

Knowledge Amalgamation Models

Knowledge amalgamation models are methods in artificial intelligence that combine knowledge from multiple sources into a single, unified model. These sources can be different machine learning models, datasets, or domains, each with their own strengths and weaknesses. The goal is to merge the useful information from each source, creating a more robust and versatile system…
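A common amalgamation recipe is to distil several teacher models into one student by training the student to match their combined soft predictions. The sketch below uses two random linear "teachers" and synthetic unlabelled data purely as stand-ins for real pretrained models.

```python
import numpy as np

rng = np.random.default_rng(1)


def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


# Two "teacher" classifiers over the same 3 classes (random weights here,
# standing in for models pretrained on different datasets).
d, k = 5, 3
teacher_a = rng.normal(size=(d, k))
teacher_b = rng.normal(size=(d, k))

# Unlabelled transfer data the student can query both teachers on.
X = rng.normal(size=(200, d))
soft_targets = (softmax(X @ teacher_a) + softmax(X @ teacher_b)) / 2

# Student: a single linear classifier distilled from both teachers by
# minimising cross-entropy against the averaged soft predictions.
W = np.zeros((d, k))
lr = 0.5
for _ in range(500):
    probs = softmax(X @ W)
    grad = X.T @ (probs - soft_targets) / len(X)
    W -= lr * grad

agreement = np.mean(softmax(X @ W).argmax(1) == soft_targets.argmax(1))
print(f"student agrees with the amalgamated targets on {agreement:.0%} of inputs")
```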

Neural Network Generalisation

Neural network generalisation is the ability of a trained neural network to perform well on new, unseen data, not just the examples it learned from. It means the network has learned the underlying patterns in the data, instead of simply memorising the training examples. Good generalisation is important for making accurate predictions on real-world data…
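The gap between training error and error on held-out data is the usual way to measure this. The sketch below illustrates the idea with polynomial regression rather than a neural network (the diagnostic is the same): a high-degree fit memorises the noisy training points and does worse on unseen ones.

```python
import numpy as np

rng = np.random.default_rng(7)

# Noisy samples from a simple underlying function.
x = rng.uniform(-1, 1, size=40)
y = np.sin(3 * x) + rng.normal(scale=0.2, size=x.shape)
x_train, y_train = x[:25], y[:25]
x_test, y_test = x[25:], y[25:]


def train_test_mse(degree: int) -> tuple[float, float]:
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on training data only
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err


# A large train/test gap signals memorisation rather than generalisation.
for degree in (3, 12):
    train_err, test_err = train_test_mse(degree)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```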

Domain-Aware Fine-Tuning

Domain-aware fine-tuning is a process where an existing artificial intelligence model is further trained using data that comes from a specific area or field, such as medicine, law, or finance. This makes the model more accurate and helpful when working on tasks or questions related to that particular domain. By focusing on specialised data, the…
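A typical recipe is to freeze the pretrained layers and train only a small head on the domain data. The sketch below uses a random projection as a stand-in for a pretrained feature extractor and synthetic "domain" labels, and fits the head with regularised least squares instead of gradient descent, purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a pretrained feature extractor, frozen during fine-tuning.
# In practice this would be the body of an existing model (e.g. a transformer).
W_base = rng.normal(size=(64, 20))


def features(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ W_base)  # frozen layers: never updated below


# Small labelled dataset from the target domain (synthetic here).
X_domain = rng.normal(size=(120, 64))
y_domain = (X_domain[:, 0] + X_domain[:, 1] > 0).astype(float)

# Fine-tune only a lightweight head on top of the frozen features
# (ridge-regularised least squares as a cheap stand-in for gradient training).
Phi = features(X_domain)
head = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(Phi.shape[1]), Phi.T @ y_domain)

preds = (Phi @ head > 0.5).astype(float)
print(f"in-domain training accuracy: {np.mean(preds == y_domain):.0%}")
```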

Neural Network Sparsification

Neural network sparsification is the process of reducing the number of connections or weights in a neural network while maintaining its ability to make accurate predictions. This is done by removing unnecessary or less important elements within the model, making it smaller and faster to use. The main goal is to make the neural network…
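Magnitude pruning is the simplest sparsification method: weights with the smallest absolute values are set to zero up to a target sparsity. The sketch below applies it to a random weight matrix; the layer size and sparsity level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(5)


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until roughly `sparsity` of them are zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)


W = rng.normal(size=(256, 128))  # a dense layer's weight matrix
W_sparse = magnitude_prune(W, sparsity=0.9)

kept = np.count_nonzero(W_sparse)
print(f"kept {kept} of {W.size} weights ({kept / W.size:.0%})")
# In practice the model is then fine-tuned so the remaining weights compensate
# for the removed ones, and sparse storage formats or kernels exploit the zeros.
```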

Contrastive Representation Learning

Contrastive representation learning is a machine learning technique that helps computers learn useful features from data by comparing examples. The main idea is to bring similar items closer together and push dissimilar items further apart in the learned representation space. This approach is especially useful when there are few or no labels for the data,…
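The sketch below computes an InfoNCE-style contrastive loss with NumPy: each anchor is compared against every item in the batch, and the loss is low only when it is most similar to its own positive (e.g. another augmented view of the same example). Batch size, embedding size, and temperature are arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(9)


def info_nce_loss(anchors: np.ndarray, positives: np.ndarray, temperature: float = 0.1) -> float:
    """InfoNCE-style loss: each anchor's positive is pulled close, while every
    other item in the batch acts as a negative and is pushed away."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # positives sit on the diagonal


# Toy batch: "positives" are perturbed copies of the anchors, mimicking two
# augmented views of the same underlying examples.
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.05 * rng.normal(size=(8, 16))
random_pairs = rng.normal(size=(8, 16))

print(f"matched views:   {info_nce_loss(anchors, positives):.3f}")    # low loss
print(f"unrelated pairs: {info_nce_loss(anchors, random_pairs):.3f}")  # high loss
```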