This repository implements a pipeline for introducing a bottleneck into an existing deep neural network so that the network can be easily split for distributed deployment (inspired by the work on Head Network Distillation). Alongside the bottleneck, the pipeline trains a smaller local classifier that performs early prediction directly on the bottleneck output. Bottleneck and early classifier are trained jointly through a regularization process that drives the bottleneck to produce high-quality embeddings, which in turn helps the early classification task. Multiple early classifiers are supported: logistic regression, KNN, k-means, and a novel Gaussian Mixture Layer implemented in PyTorch (see the sketch below). The pipeline can be extended with additional classifiers by implementing the corresponding modules under src/early_classifier.
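As an illustration of the Gaussian Mixture Layer idea, below is a minimal PyTorch sketch of a mixture-based early-exit head that scores bottleneck embeddings against one diagonal-covariance Gaussian mixture per class. The class name, constructor arguments, and the small training snippet are assumptions for illustration only and do not reflect the repository's actual modules under src/early_classifier.

```python
# Minimal sketch (not the repository's actual API) of a Gaussian-mixture
# early-exit head that classifies bottleneck embeddings.
import math

import torch
import torch.nn as nn


class GaussianMixtureLayer(nn.Module):
    """Scores embeddings against one diagonal-covariance Gaussian mixture per class.

    The per-class mixture log-likelihoods can be used directly as logits
    for early prediction on the bottleneck output.
    """

    def __init__(self, embedding_dim: int, n_classes: int, n_components: int = 3):
        super().__init__()
        # One mixture of `n_components` diagonal Gaussians per class.
        self.means = nn.Parameter(torch.randn(n_classes, n_components, embedding_dim))
        self.log_vars = nn.Parameter(torch.zeros(n_classes, n_components, embedding_dim))
        self.mix_logits = nn.Parameter(torch.zeros(n_classes, n_components))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, embedding_dim) bottleneck embeddings.
        z = z[:, None, None, :]                               # (batch, 1, 1, dim)
        var = self.log_vars.exp()
        # Log-density of each diagonal Gaussian: (batch, classes, components).
        log_norm = -0.5 * (self.log_vars + math.log(2 * math.pi)).sum(-1)
        log_prob = log_norm - 0.5 * (((z - self.means) ** 2) / var).sum(-1)
        log_mix = torch.log_softmax(self.mix_logits, dim=-1)  # mixture weights
        return torch.logsumexp(log_prob + log_mix, dim=-1)    # (batch, classes)


if __name__ == "__main__":
    head = GaussianMixtureLayer(embedding_dim=64, n_classes=10)
    z = torch.randn(8, 64)                     # stand-in bottleneck embeddings
    logits = head(z)                           # (8, 10) per-class log-likelihoods
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
    loss.backward()                            # the layer is trainable end to end
    print(logits.shape, loss.item())
```

Any additional early classifier would be wired into the pipeline by implementing the corresponding module under src/early_classifier; the interface sketched here is only illustrative.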
gabrielecastellano/early-exit-distillation, forked from yoshitomo-matsubara/head-network-distillation.
About
Early Classification in Split Computing with Regularized Bottlenecks