The AiEdge+: The Future of Privacy Preserving Machine Learning
Data privacy is always a concern when Machine Learning is involved. There are many techniques for Privacy-Preserving Machine Learning, but today we focus on Federated Learning and machine learning with Fully Homomorphic Encryption.
Preserving Privacy with Federated Learning
One of the main reasons AI has difficulty making its way into the healthcare and banking industries is the requirement for data privacy. For example, if you build a healthcare startup, you are going to have a hard time convincing hospitals to lend you their data to train your models. There is so much regulation around that data that the benefits are not worth the risk!
One way to solve that is Federated Learning! The idea is that instead of bringing the data to the model, we bring the model to the data. That is the way Google trains its query suggestions on Android for example. It is also an important component of how self-driving cars continuously train their ML applications. It can be summarized in the following steps:
A model is pre-trained on a centralized server and sent to user devices along with the related software applications (Gboard in the case of the query suggestion model).
The users independently interact with their local copies of the model, which continue to be fine-tuned on-device.
After a certain amount of time, the models or the aggregated gradients are sent back to the centralized platform, where they are averaged into one updated global model.
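The steps above can be sketched in a few lines of code. The following is a minimal toy example of the Federated Averaging idea, not any specific production system: the "model" is just a linear-regression weight vector, each simulated client fine-tunes it on its own private data, and the server averages the resulting weights. All names (`local_update`, `clients`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, X, y, lr=0.1, steps=10):
    """Fine-tune the global model on one client's private data.

    The raw data (X, y) never leaves this function, mimicking
    on-device training: only the updated weights are returned.
    """
    w = global_weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE loss
        w -= lr * grad
    return w

# Server side: a (pre-trained) global model, here just zero-initialized.
global_w = np.zeros(3)

# Each client holds its own local dataset, generated from the same
# underlying relationship plus noise.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

# Federated rounds: clients train locally, the server averages the
# returned weights into the next global model (FedAvg-style).
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)

print(np.round(global_w, 2))  # close to true_w, yet no client shared data
```

Note that in a real deployment the averaging is typically weighted by each client's dataset size, and secure aggregation or differential privacy is often layered on top so the server cannot inspect individual updates.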