Deep Learning, Medical Imaging

TCLearn is a scalable method that enables multiple partners to collaborate in distributed learning. Each new iteration of the model is first validated through Federated Byzantine Agreement, which guarantees the quality of the resulting model before it is recorded in a blockchain. TCLearn preserves the privacy of both the data and the model without compromising the model's efficiency.

DOI: 10.1109/ACCESS.2019.2959220
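For intuition, the loop below sketches the validate-then-record mechanism in Python. Every name, threshold, and scoring rule in it (local_score, quorum_accepts, record, the 2/3 quorum) is a hypothetical stand-in, not the paper's API: the actual Federated Byzantine Agreement protocol and blockchain layer are considerably more involved.

```python
import hashlib
import json
import random

# Minimal sketch of the validate-then-record loop described above.
# All names and thresholds are illustrative assumptions, not TCLearn itself.

QUORUM = 2 / 3        # fraction of validators that must accept an update
ACCEPT_SCORE = 0.8    # per-validator acceptance threshold (hypothetical)

def local_score(weights, validator_id):
    """Stand-in for a partner scoring the candidate model on local data."""
    rng = random.Random(hash((tuple(weights), validator_id)))
    return rng.uniform(0.7, 1.0)  # pretend validation accuracy

def quorum_accepts(weights, validators):
    """FBA-style vote: the update passes only with a qualified quorum."""
    votes = [local_score(weights, v) >= ACCEPT_SCORE for v in validators]
    return sum(votes) >= QUORUM * len(votes)

def record(chain, weights):
    """Append a block whose hash chains to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"weights": weights, "prev": prev}, sort_keys=True)
    chain.append({"weights": weights, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

chain, validators = [], range(5)
for step in range(3):
    candidate = [random.random() for _ in range(4)]  # stand-in for new weights
    if quorum_accepts(candidate, validators):
        record(chain, candidate)
        print(f"step {step}: accepted, block {chain[-1]['hash'][:8]}")
    else:
        print(f"step {step}: rejected, model not recorded")
```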

Full abstract

Distributed learning across coalitions is becoming popular for multi-centric implementation of deep learning models. However, the level of trust between the members of a coalition can vary, which calls for different security architectures. Privacy of the training data has been widely studied in distributed learning. In this paper, we present a scalable security architecture providing additional features such as validation of source quality, confidentiality of the model within a trusted coalition, and confidentiality among untrusted partners inside the coalition. More specifically, we propose solutions that preserve not only data privacy but also data quality, enforce a trustworthy sequence of iterative learning, and lead to equitable sharing of the learned model among the coalition's members. We give an example of its deployment for the distributed optimization of a deep convolutional neural network trained on medical images.
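On model confidentiality: one simple way to keep a shared model readable only inside the coalition is to encrypt the serialized weights under a key held by its members. The sketch below uses the Python cryptography package with a single shared key purely for illustration; the paper's own key-management schemes for trusted and untrusted coalitions are not reproduced here.

```python
import pickle
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative assumption: one symmetric key shared by coalition members.
coalition_key = Fernet.generate_key()
cipher = Fernet(coalition_key)

weights = {"conv1": [0.12, -0.07], "fc": [0.33]}  # stand-in for CNN weights

# Only holders of coalition_key can read the model that is exchanged.
ciphertext = cipher.encrypt(pickle.dumps(weights))

# A coalition member recovers the model; outsiders see only ciphertext.
restored = pickle.loads(cipher.decrypt(ciphertext))
assert restored == weights
```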
