Multi-Teacher Knowledge Distillation with Reinforcement Learning (MTKD-RL) – A Breakthrough in Visual Recognition

Diagram of Multi-Teacher Knowledge Distillation with Reinforcement Learning (MTKD-RL)

In the rapidly evolving field of deep learning, knowledge distillation (KD) has emerged as a powerful technique for transferring knowledge from large, complex models—referred to as teachers—to smaller, more efficient models known as students. This process enables lightweight neural networks to achieve high accuracy, making them ideal for deployment on edge devices and real-time applications.

While traditional KD relies on a single teacher, multi-teacher knowledge distillation (MTKD) leverages a pool of diverse teacher models to provide richer, more comprehensive knowledge. However, a critical challenge remains: how to optimally balance the influence of each teacher during distillation? Simply assigning equal weights often leads to suboptimal results, especially when teachers vary in performance or when the student lacks the capacity to mimic more powerful models.

To address this, researchers from the Institute of Computing Technology, Chinese Academy of Sciences, have introduced a groundbreaking solution: Multi-Teacher Knowledge Distillation with Reinforcement Learning (MTKD-RL). This innovative framework uses reinforcement learning (RL) to dynamically optimize teacher weights, significantly boosting student performance across image classification, object detection, and semantic segmentation tasks.

In this article, we’ll dive deep into the MTKD-RL methodology, explore its advantages over existing methods, present key experimental results, and discuss its implications for the future of model compression and efficient AI.


What Is Multi-Teacher Knowledge Distillation?

Multi-teacher knowledge distillation (MTKD) extends the classic KD paradigm by allowing a student model to learn from multiple pre-trained teacher networks simultaneously. The core idea is that aggregating knowledge from diverse models—each potentially strong in different aspects—can lead to better generalization and higher accuracy than learning from a single source.

The total loss function in MTKD typically combines three components:

  1. Task loss: Standard cross-entropy loss between student predictions and ground-truth labels.
  2. Logit KD loss: Kullback-Leibler (KL) divergence between student and teacher output distributions.
  3. Feature KD loss: Distance metric (e.g., MSE) between intermediate feature maps of student and teacher.

Mathematically, the multi-teacher KD loss is expressed as:

\[ \mathcal{L}_{\text{MT-KD}} = \underbrace{H(y_i^S, y_i)}_{\text{task loss}} + \alpha \sum_{m=1}^{M} w_{l,i}^m \, D_{\text{KL}}(y_i^S \,\|\, y_i^{T_m}) + \beta \sum_{m=1}^{M} w_{f,i}^m \, D_{\text{dis}}(F_i^S, F_i^{T_m}) \]

Where:

\[ M : \text{Number of teachers} \] \[ w_{l,i}^m, \; w_{f,i}^m : \text{Logit- and feature-level weights for teacher } m \text{ on sample } x_i \] \[ \alpha, \beta : \text{Loss balancing hyperparameters} \]

The key challenge lies in determining the optimal values of the per-sample teacher weights \( w_{l,i}^m \) and \( w_{f,i}^m \).
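
To make the objective concrete, below is a minimal PyTorch sketch of this weighted multi-teacher loss. The helper name mtkd_loss, the toy tensor shapes, and the temperature value are illustrative assumptions rather than the paper's reference implementation.

import torch
import torch.nn.functional as F

def mtkd_loss(s_logits, s_feats, t_logits_list, t_feats_list, labels,
              w_l, w_f, alpha=1.0, beta=1.0, T=4.0):
    # Task CE loss + per-teacher weighted logit KL and feature MSE (w_l, w_f: [batch, M])
    loss = F.cross_entropy(s_logits, labels)
    log_p_s = F.log_softmax(s_logits / T, dim=1)
    for m, (t_logits, t_feats) in enumerate(zip(t_logits_list, t_feats_list)):
        p_t = F.softmax(t_logits / T, dim=1)
        kl = F.kl_div(log_p_s, p_t, reduction='none').sum(dim=1) * T * T   # [batch]
        mse = F.mse_loss(s_feats, t_feats, reduction='none').mean(dim=1)   # [batch]
        loss = loss + alpha * (w_l[:, m] * kl).mean() + beta * (w_f[:, m] * mse).mean()
    return loss

# Toy usage: 2 teachers, batch of 8, 100 classes, 256-dim features, equal weights
B, C, D, M = 8, 100, 256, 2
s_logits, s_feats = torch.randn(B, C), torch.randn(B, D)
t_logits_list = [torch.randn(B, C) for _ in range(M)]
t_feats_list = [torch.randn(B, D) for _ in range(M)]
labels = torch.randint(0, C, (B,))
w = torch.full((B, M), 1.0 / M)
print(mtkd_loss(s_logits, s_feats, t_logits_list, t_feats_list, labels, w, w).item())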


The Problem with Existing MTKD Methods

Most existing MTKD approaches use heuristic or static weighting strategies, such as:

  • Equal weighting (AVER): All teachers contribute equally.
  • Entropy-based weighting (EB-KD): Weights based on teacher prediction confidence.
  • Gradient-based optimization (AE-KD): Balances gradients in multi-objective learning.
  • Meta-learning (MMKD): Learns aggregation weights via meta-optimization.

However, these methods often focus on only one aspect—either teacher performance or teacher-student gap—and fail to consider the dynamic interplay between the student, teachers, and input data. As a result, they may assign high weights to overconfident but incorrect teachers or fail to adapt when the student struggles to mimic a powerful teacher.
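
For contrast, here is a small sketch of the two simplest static strategies from the list above, assuming each teacher's softmax probabilities are already available. The exact form of the entropy-based rule (lower prediction entropy mapped to a higher weight) is an assumption meant only to illustrate the kind of heuristic that the RL agent replaces; note that both rules ignore the student entirely.

import torch

def equal_weights(teacher_probs):
    # AVER-style: every teacher gets 1/M, independent of the sample
    B, M, _ = teacher_probs.shape
    return torch.full((B, M), 1.0 / M)

def entropy_weights(teacher_probs, eps=1e-8):
    # EB-KD-style (assumed form): more confident (low-entropy) teachers get larger weights
    entropy = -(teacher_probs * (teacher_probs + eps).log()).sum(dim=-1)  # [B, M]
    return torch.softmax(-entropy, dim=1)

probs = torch.softmax(torch.randn(4, 3, 100), dim=-1)  # 4 samples, 3 teachers, 100 classes
print(equal_weights(probs)[0])
print(entropy_weights(probs)[0])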


Introducing MTKD-RL: Reinforcement Learning to the Rescue

The MTKD-RL framework, proposed in the paper "Multi-Teacher Knowledge Distillation with Reinforcement Learning for Visual Recognition," addresses these limitations by introducing an RL-based decision-making agent that dynamically learns to assign optimal teacher weights.

How MTKD-RL Works

MTKD-RL formulates the weight assignment problem as a reinforcement learning task, where:

  • Environment: The multi-teacher KD training process.
  • Agent: A neural network that outputs teacher weights.
  • State: Comprehensive information about teacher performance and teacher-student gaps.
  • Action: The generated teacher weights \( w_i^m \).
  • Reward: Based on student performance after distillation.

1. State Representation

The state \( s_i^m \) for each teacher \( m \) and sample \( x_i \) is a concatenated vector containing:

Teacher Performance:

  • Feature representation \( f_i^{T_m} \)
  • Logit vector \( z_i^{T_m} \)
  • Cross-entropy loss \( L_{CE}^{T_m} \)

Teacher-Student Gaps:

  • Feature similarity \( \cos(f_i^{S}, f_i^{T_m}) \) (cosine similarity)
  • Logit KL-divergence \( D_{\text{KL}}(y_i^{S} \,\|\, y_i^{T_m}) \)

The full state is the concatenation of these five components:

\[ s_i^m = \big[\, f_i^{T_m} \;\Vert\; z_i^{T_m} \;\Vert\; L_{CE}^{T_m} \;\Vert\; \cos(f_i^{S}, f_i^{T_m}) \;\Vert\; D_{\text{KL}}(y_i^{S} \,\|\, y_i^{T_m}) \,\big] \]

This rich state encoding allows the agent to make informed decisions based on both how good the teacher is and how well the student can learn from it.
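
As a compact sketch of how such a state could be assembled for one teacher, assuming pooled 1-D features and per-sample cross-entropy/KL terms (the component ordering and the temperature are assumptions; the full trainer at the end of this article does the same job in its _construct_state method):

import torch
import torch.nn.functional as F

def build_state(s_feats, s_logits, t_feats, t_logits, labels, T=4.0):
    # Per-sample state: [ f_T || z_T || L_CE^T || cos(f_S, f_T) || KL(S || T) ]
    ce_t = F.cross_entropy(t_logits, labels, reduction='none').unsqueeze(1)     # [B, 1]
    cos = F.cosine_similarity(s_feats, t_feats, dim=1).unsqueeze(1)             # [B, 1]
    kl = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction='none').sum(dim=1, keepdim=True) * T * T            # [B, 1]
    return torch.cat([t_feats, t_logits, ce_t, cos, kl], dim=1)

B, C, D = 8, 100, 256
state = build_state(torch.randn(B, D), torch.randn(B, C),
                    torch.randn(B, D), torch.randn(B, C),
                    torch.randint(0, C, (B,)))
print(state.shape)  # torch.Size([8, 359]) -> 256 + 100 + 1 + 1 + 1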

2. Action: Dynamic Weight Generation

The agent \( \pi_{\theta_m}(s_i^m) \) takes the state as input and outputs teacher weights \( w_i^m \in (0,1) \) using a neural network with softmax activation. The action space is continuous, enabling fine-grained control over distillation strength.

Additionally, MTKD-RL incorporates confidence-aware and divergence-aware strategies:

  • Confidence-aware: Rewards teachers with low cross-entropy loss (high confidence on correct class).
  • Divergence-aware: Emphasizes teachers with high KL-divergence, guiding the student to learn from under-mimicked outputs.

The final weight is a fusion of three components:

\[ w_f = \gamma_{\text{gen}} \cdot w_{\text{gen}} + \gamma_{\text{conf}} \cdot w_{\text{conf}} + \gamma_{\text{div}} \cdot w_{\text{div}} \]
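
As a tiny numeric illustration of this fusion for three teachers (the equal 1/3 coefficients match what the demo implementation at the end of this article assumes; the component values themselves are made up):

import torch

w_gen  = torch.tensor([0.50, 0.30, 0.20])  # agent (generator) output over 3 teachers
w_conf = torch.tensor([0.45, 0.35, 0.20])  # confidence-aware component
w_div  = torch.tensor([0.25, 0.40, 0.35])  # divergence-aware component

gamma = 1.0 / 3.0                           # assumed equal fusion coefficients
w_final = gamma * w_gen + gamma * w_conf + gamma * w_div
print(w_final, w_final.sum())               # [0.40, 0.35, 0.25], still sums to 1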

3. Reward Function

The reward \( R_i^m \) is defined as the negative of the total loss after distillation:

\[ R_i^m = -H(y_i^{S}, y_i) \;-\; \alpha \, D_{\text{KL}}(y_i^{S} \,\|\, y_i^{T_m}) \;-\; \beta \, D_{\text{dis}}(F_i^{S}, F_i^{T_m}) \]

A higher (less negative) reward indicates better student performance, encouraging the agent to assign higher weights to teachers that lead to improved learning.

4. Agent Optimization via Policy Gradient

The agent is updated using Policy Gradient (PG):

\[ \theta_m \;\leftarrow\; \theta_m - \eta \sum_{i=1}^{B} \bar{R}_{i}^{m} \, \nabla_{\theta_m} \log \pi_{\theta_m}(s_{i}^{m}) \]

Where \( \bar{R}_i^m \) is the reward normalized across all teachers in the batch, \( B \) is the batch size, and \( \eta \) is the agent's learning rate.
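
Below is a minimal REINFORCE-style sketch of this update for a single agent head, with placeholder states and rewards. Treating the softmax output directly as \( \pi_{\theta_m}(s_i^m) \) mirrors the simplified demo code later in this article and is an assumption, not the paper's exact formulation.

import torch
import torch.nn.functional as F

# Placeholder dimensions: 32 samples, 16-dim states, 3 teachers
agent = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

states = torch.randn(32, 16)               # one state vector per sample (placeholder)
rewards = torch.randn(32, 3)               # normalized rewards \bar{R}_i^m (placeholder)
weights = F.softmax(agent(states), dim=1)  # treated as pi_theta: one weight per teacher

# Policy-gradient loss: push probability mass toward teachers with higher reward
loss = -(rewards * torch.log(weights + 1e-8)).mean()
opt.zero_grad()
loss.backward()
opt.step()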


Key Advantages of MTKD-RL

| Feature | MTKD-RL | Traditional MTKD |
|---|---|---|
| Weighting strategy | Dynamic, sample-wise | Static or heuristic |
| Information used | Teacher performance + student gaps | One-sided (e.g., entropy only) |
| Adaptability | High (RL-based) | Low |
| Performance | SOTA across tasks | Suboptimal |
| Framework flexibility | Works with CNNs and ViTs | Often architecture-specific |

MTKD-RL stands out by reinforcing the interaction between student and teachers, leading to more meaningful, context-aware distillation.


Experimental Results: State-of-the-Art Performance

The authors evaluated MTKD-RL on multiple visual recognition tasks. Here are the highlights:

📊 Image Classification on CIFAR-100

| Student Model | Baseline | AVER (KD+FitNet) | MMKD | MTKD-RL |
|---|---|---|---|---|
| RegNetX-200MF | 77.38 | 79.12 | 80.15 | 80.58 |
| MobileNetV2 | 69.17 | 72.67 | 74.35 | 74.63 |
| ShuffleNetV2 | 72.84 | 76.83 | 77.87 | 78.39 |
| ResNet-56 | 72.52 | 73.93 | 75.26 | 75.35 |

MTKD-RL achieves an average gain of 0.31–0.33% over SOTA methods like MMKD and CA-MKD.

🖼️ ImageNet Classification

| Student | Baseline | AVER (KD+FitNet) | MMKD | MTKD-RL |
|---|---|---|---|---|
| ResNet-18 | 70.35 | 71.56 | 72.33 | 72.82 |
| ResNet-34 | 73.64 | 75.55 | 76.06 | 76.77 |
| DeiT-Tiny | 72.23 | 73.98 | 74.35 | 75.14 |

MTKD-RL outperforms MMKD by 0.49–0.71%, proving its scalability to large datasets and ViT architectures.

🔍 Object Detection on COCO-2017

Using ImageNet-pretrained backbones:

| Backbone | Detector | Baseline mAP | MTKD-RL mAP | Gain |
|---|---|---|---|---|
| ResNet-18 | Mask R-CNN | 34.1 | 35.1 | +1.0 |
| ResNet-34 | RetinaNet | 35.2 | 36.9 | +1.7 |

Average mAP gain: +1.1% (ResNet-18), +1.5% (ResNet-34)

🌆 Semantic Segmentation

| Dataset | Baseline | MTKD-RL | Gain |
|---|---|---|---|
| Cityscapes | 76.34 | 77.42 | +1.08 |
| ADE20K | 36.08 | 37.07 | +0.99 |
| COCO-Stuff | 29.97 | 31.82 | +1.85 |

MTKD-RL consistently improves pixel-level understanding, showing its effectiveness in dense prediction tasks.


Ablation Studies: What Makes MTKD-RL Work?

🔬 Component Analysis (ShuffleNetV2, CIFAR-100)

| Configuration | Accuracy (%) |
|---|---|
| AVER (baseline) | 76.83 |
| + Teacher Performance | 77.84 |
| + Teacher-Student Gaps | 77.46 |
| Full MTKD-RL (both) | 78.39 |

Conclusion: Combining both aspects yields the best results.

⚙️ RL Algorithm Comparison

| RL Method | Accuracy (%) |
|---|---|
| PG (Ours) | 78.39 |
| DPG | 78.16 |
| DDPG | 78.05 |
| PPO | 78.28 |

MTKD-RL is not sensitive to the choice of RL algorithm, making PG a simple and effective choice.

💰 Training Cost Analysis

| Method | Time (per epoch) | Memory | Accuracy (%) |
|---|---|---|---|
| Baseline | 29 s | 2.3 GB | 72.84 |
| AVER | 41 s | 2.8 GB | 76.83 |
| CA-MKD | 54 s | 2.9 GB | 78.09 |
| MTKD-RL | 47 s | 3.2 GB | 78.39 |

Despite higher memory usage, MTKD-RL is 13% faster than CA-MKD and delivers 0.3% higher accuracy.


Why MTKD-RL Matters for Real-World AI

MTKD-RL is not just an academic advancement—it has real-world implications:

  • Better Edge AI: Enables high-accuracy models on mobile and IoT devices.
  • Efficient Training: Reduces reliance on large models by leveraging ensemble knowledge.
  • Cross-Task Generalization: Improves performance in detection and segmentation, not just classification.
  • Scalable Framework: Works with both CNNs and Vision Transformers (ViTs).

By using reinforcement learning to optimize teacher weights, MTKD-RL moves beyond heuristic rules and enables truly adaptive, intelligent distillation.


How to Implement MTKD-RL

The authors have open-sourced their code on GitHub:

👉 MTKD-RL GitHub Repository: github.com/winycg/MTKD-RL

The implementation includes:

  • Pre-trained teacher pools
  • Agent architecture and training loop
  • Support for CIFAR-100, ImageNet, COCO, and Cityscapes
  • Integration with PyTorch and MMDetection

You can easily adapt it for your own multi-teacher distillation pipeline.


Future Directions

MTKD-RL opens the door to several exciting research avenues:

  • Learnable fusion coefficients (γ) via meta-learning.
  • Multi-agent RL for decentralized teacher coordination.
  • Cross-modal MTKD (e.g., image + text teachers for vision models).
  • Online MTKD-RL for streaming data environments.

As RL and KD continue to converge, we can expect even more intelligent, self-optimizing distillation systems.


Conclusion: The Future of Knowledge Distillation Is Adaptive

Multi-Teacher Knowledge Distillation with Reinforcement Learning (MTKD-RL) represents a major leap forward in model compression. By using an RL agent to dynamically assign teacher weights based on both teacher quality and student compatibility, MTKD-RL achieves state-of-the-art performance across image classification, object detection, and segmentation.

Its key innovations—comprehensive state encoding, policy gradient optimization, and adaptive weight fusion—make it a robust and scalable solution for modern visual recognition tasks.

Whether you’re building lightweight mobile apps, optimizing cloud inference, or advancing AI research, MTKD-RL offers a powerful tool to boost accuracy without increasing model size.


🔔 Call to Action

🚀 Ready to supercharge your student models?

  • Clone the repo: github.com/winycg/MTKD-RL
  • Run experiments on your dataset
  • Share your results and tag the authors!
  • Subscribe for more AI breakthroughs in model compression and efficient deep learning.

Let’s make AI smarter, faster, and more efficient—one distilled model at a time. 💡

Below is a self-contained, runnable PyTorch sketch of the framework, including definitions for the models, the RL agent, and the full training procedure following Algorithms 1 and 2 of the paper. It is simplified for demonstration (small CNNs, fake data, short schedules) rather than a faithful reproduction of the authors' code.

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
from torch.utils.data import DataLoader, Dataset
import numpy as np

# --- Configuration & Hyperparameters ---
# These values are based on the paper or are set for demonstration purposes.
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
NUM_CLASSES = 100  # For CIFAR-100 simulation
INPUT_CHANNELS = 3
IMG_SIZE = 32
BATCH_SIZE = 64
LEARNING_RATE_STUDENT = 0.05
LEARNING_RATE_AGENT = 0.001
EPOCHS = 10 # A smaller number for a quick demo run; paper uses 240
ALPHA = 1.0  # Weight for logit KD loss
BETA = 5.0   # Weight for feature KD loss

# --- 1. Model Definitions ---
# A simple CNN to serve as both teacher and student networks.
# The model is designed to easily extract both features and logits.

class SimpleCNN(nn.Module):
    def __init__(self, in_channels, num_classes, feature_dim=256):
        super(SimpleCNN, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.feature_proj = nn.Linear(128, feature_dim) # Project features to a fixed dimension
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        features = self.feature_proj(x)
        logits = self.classifier(features)
        return features, logits

# --- 2. RL Agent Definition ---
# The agent model that takes the state and outputs weights (action).

class Agent(nn.Module):
    def __init__(self, state_dim, num_teachers):
        super(Agent, self).__init__()
        self.num_teachers = num_teachers
        self.layer1 = nn.Linear(state_dim, 128)
        self.layer2 = nn.Linear(128, 128)
        # As per Appendix A.1, two separate heads for feature and logit weights.
        # Each head scores a single (sample, teacher) state; the softmax across
        # teachers is applied by the trainer after reshaping to [batch, num_teachers].
        self.feature_head = nn.Linear(128, 1)
        self.logit_head = nn.Linear(128, 1)

    def forward(self, state):
        # state: [batch_size * num_teachers, state_dim]
        x = F.relu(self.layer1(state))
        x = F.relu(self.layer2(x))

        # Raw generator scores (w_gen from Appendix A.1, before the softmax over teachers)
        return self.feature_head(x), self.logit_head(x)

# --- 3. Loss Functions ---

def logit_kd_loss(student_logits, teacher_logits, temperature=4.0, reduction='batchmean'):
    """ Standard KD loss based on KL-Divergence. With reduction='none', returns per-sample values. """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction='none').sum(dim=1) * (temperature ** 2)
    return kl if reduction == 'none' else kl.mean()

def feature_kd_loss(student_features, teacher_features, reduction='mean'):
    """ FitNets-style feature KD loss (MSE). With reduction='none', returns per-sample values. """
    mse = F.mse_loss(student_features, teacher_features, reduction='none').mean(dim=1)
    return mse if reduction == 'none' else mse.mean()

# --- 4. Main MTKD-RL Trainer Class ---

class MTKDRL_Trainer:
    def __init__(self, student, teachers, agent, train_loader):
        self.student = student.to(DEVICE)
        self.teachers = [t.to(DEVICE) for t in teachers]
        self.agent = agent.to(DEVICE)
        self.train_loader = train_loader
        self.num_teachers = len(teachers)

        # Linear regressors to match student and teacher feature dimensions if they differ.
        # They are trained jointly with the student.
        student_feature_dim = self.student.classifier.in_features
        self.regressors = nn.ModuleList([
            nn.Linear(student_feature_dim, t.classifier.in_features) for t in self.teachers
        ]).to(DEVICE)

        self.optimizer_student = optim.SGD(
            list(self.student.parameters()) + list(self.regressors.parameters()),
            lr=LEARNING_RATE_STUDENT, momentum=0.9, weight_decay=1e-4)
        self.optimizer_agent = optim.Adam(self.agent.parameters(), lr=LEARNING_RATE_AGENT)

        self.ce_loss = nn.CrossEntropyLoss()

    def _construct_state(self, student_feats, student_logits, teacher_infos, labels):
        """ Implements state construction as per Equation 5. """
        states = []
        for i in range(self.num_teachers):
            teacher_feats = teacher_infos[i]['features']
            teacher_logits = teacher_infos[i]['logits']
            
            # (1) Teacher feature representation
            f_tm = teacher_feats
            
            # (2) Teacher logit vector
            z_tm = teacher_logits
            
            # (3) Teacher cross-entropy loss (per-sample)
            l_ce_tm = F.cross_entropy(teacher_logits, labels, reduction='none').unsqueeze(1)
            
            # (4) Teacher-student feature similarity (cosine similarity, per-sample)
            student_feats_regressed = self.regressors[i](student_feats)
            cos_sim = F.cosine_similarity(student_feats_regressed, teacher_feats, dim=1).unsqueeze(1)
            
            # (5) Teacher-student probability KL-divergence (per-sample)
            kl_tm = logit_kd_loss(student_logits, teacher_logits, reduction='none').unsqueeze(1)
            
            # Concatenate all parts to form the per-sample state for one teacher
            # Note: every part is a 2D tensor of shape [batch, *]
            state_tm = torch.cat([
                f_tm.detach(), 
                z_tm.detach(), 
                l_ce_tm.detach(),
                cos_sim.detach(), 
                kl_tm.detach()
            ], dim=1)
            states.append(state_tm)
            
        # Concatenate states from all teachers for batch processing by the agent
        return torch.stack(states, dim=1) # Shape: [batch, num_teachers, state_dim]

    def _calculate_action(self, state, teacher_infos, student_feats, student_logits):
        """ Implements action construction from Appendix A.1. """
        batch_size = state.shape[0]
        state_reshaped = state.view(batch_size * self.num_teachers, -1)
        f_scores, l_scores = self.agent(state_reshaped)  # [batch * num_teachers, 1] each
        # Generator weights (w_gen): softmax over the teacher dimension
        w_f_gen = F.softmax(f_scores.view(batch_size, self.num_teachers), dim=1)
        w_l_gen = F.softmax(l_scores.view(batch_size, self.num_teachers), dim=1)
        
        # Confidence-aware weights (w_conf) - Equation 9, using per-sample CE per teacher
        teacher_ce_losses = torch.stack(
            [F.cross_entropy(t['logits'], t['labels'], reduction='none') for t in teacher_infos], dim=1)
        exp_ce = torch.exp(teacher_ce_losses)
        w_conf = (1.0 / (self.num_teachers - 1)) * (1.0 - (exp_ce / torch.sum(exp_ce, dim=1, keepdim=True)))
        
        # Divergence-aware weights (w_div) - Equations 11 & 12
        cos_sims = []
        kl_divs = []
        for i in range(self.num_teachers):
            s_feats_regressed = self.regressors[i](student_feats)
            cos_sims.append(F.cosine_similarity(s_feats_regressed, teacher_infos[i]['features']))
            kl_divs.append(logit_kd_loss(student_logits, teacher_infos[i]['logits'], reduction='none'))
        
        w_f_div = F.softmax(torch.stack(cos_sims, dim=1), dim=1)
        w_l_div = F.softmax(torch.stack(kl_divs, dim=1), dim=1)

        # Final weighted fusion - Equations 13 & 14 (with equal balancing)
        gamma = 1.0 / 3.0
        w_f = gamma * w_f_gen + gamma * w_conf + gamma * w_f_div
        w_l = gamma * w_l_gen + gamma * w_conf + gamma * w_l_div
        
        return w_f, w_l

    def _calculate_reward(self, student_feats, student_logits, teacher_infos, labels):
        """ Implements reward calculation as per Equation 6 (per-sample, per-teacher). """
        rewards = []
        base_loss = F.cross_entropy(student_logits, labels, reduction='none')
        for i in range(self.num_teachers):
            t_feats = teacher_infos[i]['features']
            t_logits = teacher_infos[i]['logits']
            
            reward_m = -base_loss \
                       - ALPHA * logit_kd_loss(student_logits, t_logits, reduction='none') \
                       - BETA * feature_kd_loss(self.regressors[i](student_feats), t_feats, reduction='none')
            rewards.append(reward_m)
        return torch.stack(rewards, dim=1) # Shape: [batch_size, num_teachers]

    def _normalize_reward(self, rewards):
        """ Normalizes reward as per Equation 8. """
        min_r, _ = torch.min(rewards, dim=1, keepdim=True)
        max_r, _ = torch.max(rewards, dim=1, keepdim=True)
        # Add a small epsilon to avoid division by zero if all rewards are the same
        normalized_r = (rewards - min_r) / (max_r - min_r + 1e-8)
        
        mean_r = torch.mean(rewards, dim=1, keepdim=True)
        final_rewards = normalized_r - mean_r
        return final_rewards

    def _train_epoch(self, is_pretrain=False):
        self.student.train()
        self.agent.train()
        total_student_loss = 0
        episode_history = []

        for batch_idx, (data, labels) in enumerate(self.train_loader):
            data, labels = data.to(DEVICE), labels.to(DEVICE)
            
            # --- Student Training Step ---
            self.optimizer_student.zero_grad()
            
            # Get teacher outputs
            teacher_infos = []
            with torch.no_grad():
                for teacher in self.teachers:
                    t_feats, t_logits = teacher(data)
                    teacher_infos.append({'features': t_feats, 'logits': t_logits, 'labels': labels})

            # Get student outputs
            student_feats, student_logits = self.student(data)
            
            if is_pretrain:
                w_f = torch.ones(data.size(0), self.num_teachers).to(DEVICE)
                w_l = torch.ones(data.size(0), self.num_teachers).to(DEVICE)
            else:
                # Construct state and get action from agent
                state = self._construct_state(student_feats, student_logits, teacher_infos, labels)
                w_f, w_l = self._calculate_action(state, teacher_infos, student_feats, student_logits)
            
            # Calculate MTKD loss (Equation 2)
            loss_ce = self.ce_loss(student_logits, labels)
            loss_kl_weighted = sum(w_l[:, i] * logit_kd_loss(student_logits, teacher_infos[i]['logits'], reduction='none') for i in range(self.num_teachers))
            loss_feat_weighted = sum(w_f[:, i] * feature_kd_loss(self.regressors[i](student_feats), teacher_infos[i]['features'], reduction='none') for i in range(self.num_teachers))
            
            # The paper's Equation 2 sums weights over samples. Here we average over the batch.
            total_loss = loss_ce + ALPHA * loss_kl_weighted.mean() + BETA * loss_feat_weighted.mean()
            
            total_loss.backward()
            self.optimizer_student.step()
            total_student_loss += total_loss.item()
            
            # --- Prepare for Agent Update ---
            if not is_pretrain:
                # Calculate and store (state, action, reward) for agent update
                reward = self._calculate_reward(student_feats.detach(), student_logits.detach(), teacher_infos, labels)
                episode_history.append({'state': state.detach(), 'actions': (w_f.detach(), w_l.detach()), 'rewards': reward.detach()})

        # --- Agent Training Step (after processing all batches in an epoch) ---
        if not is_pretrain:
            self.optimizer_agent.zero_grad()
            total_agent_loss = 0
            
            for episode in episode_history:
                state, (w_f, w_l), rewards = episode['state'], episode['actions'], episode['rewards']
                normalized_rewards = self._normalize_reward(rewards)
                
                # Re-run the agent on the stored states to get the current weight distributions
                state_reshaped = state.view(-1, state.shape[-1])
                f_scores, l_scores = self.agent(state_reshaped)
                w_f_gen = F.softmax(f_scores.view(state.shape[0], self.num_teachers), dim=1)
                w_l_gen = F.softmax(l_scores.view(state.shape[0], self.num_teachers), dim=1)

                # Policy Gradient Loss (Equation 7)
                # We use the generated part of the action for the PG loss
                log_prob_f = torch.log(w_f_gen + 1e-8)
                log_prob_l = torch.log(w_l_gen + 1e-8)
                
                # The reward is the same for both feature and logit weights
                # We average the loss from both heads
                agent_loss_f = -torch.mean(log_prob_f * normalized_rewards)
                agent_loss_l = -torch.mean(log_prob_l * normalized_rewards)
                agent_loss = (agent_loss_f + agent_loss_l) / 2
                
                agent_loss.backward()
                total_agent_loss += agent_loss.item()
            
            self.optimizer_agent.step()
            print(f"Avg Agent Loss: {total_agent_loss / len(episode_history):.4f}")

        return total_student_loss / len(self.train_loader)

    def train(self):
        """ Implements the overall training procedure from Algorithm 1. """
        print("--- Starting Algorithm 1: Overall MTKD-RL Training ---")
        
        # Step 1: Pre-train the student network with equal weights
        print("\n--- [Step 1] Pre-training Student with Equal Weights (1 Epoch) ---")
        pretrain_student_loss = self._train_epoch(is_pretrain=True)
        print(f"Pre-train Student Loss: {pretrain_student_loss:.4f}")
        
        # Step 2: Pre-train the agent (not explicitly detailed, but good practice)
        # We can run one epoch of agent training on the experience from student pre-training.
        # This part is simplified here. The main loop will handle agent training.
        print("\n--- [Step 2] Agent pre-training is implicitly handled by the main loop's first step. ---")
        
        # Step 3: Alternative Multi-Teacher KD and Agent Optimization
        print("\n--- [Step 3] Starting Alternative Optimization (Algorithm 2) ---")
        for epoch in range(EPOCHS):
            print(f"\n--- Epoch {epoch + 1}/{EPOCHS} ---")
            avg_student_loss = self._train_epoch(is_pretrain=False)
            print(f"Epoch {epoch + 1} - Avg Student Loss: {avg_student_loss:.4f}")
            
        print("\n--- Training Finished ---")

# --- 5. Main Execution ---

if __name__ == '__main__':
    # Use FakeData for a runnable example without downloading datasets
    class FakeCIFAR100(Dataset):
        def __init__(self, num_samples=1000, num_classes=100, img_size=32, channels=3):
            self.num_samples = num_samples
            self.num_classes = num_classes
            self.img_size = img_size
            self.channels = channels
        
        def __len__(self):
            return self.num_samples
        
        def __getitem__(self, idx):
            # Generate a random image and a random label
            img = torch.randn(self.channels, self.img_size, self.img_size)
            label = torch.randint(0, self.num_classes, (1,)).item()
            return img, label

    print("Setting up models and data loader...")
    train_dataset = FakeCIFAR100(num_samples=512) # smaller dataset for demo
    train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
    
    # Instantiate models
    # Teachers are assumed to be pre-trained (we just initialize them here)
    teacher1 = SimpleCNN(INPUT_CHANNELS, NUM_CLASSES)
    teacher2 = SimpleCNN(INPUT_CHANNELS, NUM_CLASSES)
    teacher3 = SimpleCNN(INPUT_CHANNELS, NUM_CLASSES)
    teachers = [teacher1, teacher2, teacher3]
    
    student = SimpleCNN(INPUT_CHANNELS, NUM_CLASSES, feature_dim=128) # Smaller student

    # Determine state dimension for the agent
    # state = [f_tm, z_tm, l_ce_tm, cos_sim, kl_tm]
    # Note: Dimensions must match. Here, we assume teacher feature dims are the same.
    t_feat_dim = teachers[0].classifier.in_features
    state_dim = t_feat_dim + NUM_CLASSES + 1 + 1 + 1 
    
    agent = Agent(state_dim=state_dim, num_teachers=len(teachers))
    
    print(f"Device: {DEVICE}")
    print(f"Number of teachers: {len(teachers)}")
    print(f"Student feature dimension: {student.classifier.in_features}")
    print(f"Teacher feature dimension: {t_feat_dim}")
    print(f"Agent state dimension: {state_dim}")

    # Initialize and run the trainer
    trainer = MTKDRL_Trainer(student, teachers, agent, train_loader)
    trainer.train()

