
I am experimenting with knowledge distillation in an incremental-learning setting. At each phase I initialize a new model and train it as the student on the current data, using the old model (trained in the previous phase) as the teacher. So far the code runs without errors, but the problem is that the loss stops decreasing after each phase. Also, is there a way in PyTorch Lightning to initialize a new optimizer inside on_train_epoch_start?
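For context, the teacher–student objective described above can be sketched in plain PyTorch. This is a minimal, hypothetical version: the temperature `T` and the KL-divergence formulation are my assumptions, not taken from the actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T, then measure how far the
    # student's distribution is from the teacher's via KL divergence.
    # Scaling by T*T keeps gradient magnitudes comparable across temperatures.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

student = torch.randn(8, 10, requires_grad=True)   # student logits
teacher = torch.randn(8, 10)                       # frozen teacher logits
loss = distillation_loss(student, teacher)
loss.backward()  # gradients flow only into the student logits
```

In practice this term is usually combined with the ordinary cross-entropy loss on the current phase's labels.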

def on_train_epoch_start(self):
    if self.new_phase:
        # Snapshot the current model as the frozen teacher.
        self.old_backbone = copy.deepcopy(self.backbone)
        self.old_head = copy.deepcopy(self.head)
        for p in self.old_backbone.parameters():
            p.requires_grad = False
        for p in self.old_head.parameters():
            p.requires_grad = False
        # Re-create the student from scratch for the new phase.
        self.backbone = create_backbone(model_name=self.params.backbone_name,
                                        **self.params.backbone_params)
        # create LINEAR head
        self.params.head_params['in_features'] = self.backbone.num_features
        self.head = HEADS.get(self.params.head_name)(**self.params.head_params)
        self.backbone = self.backbone.to(self.device)
        self.head = self.head.to(self.device)
        # Teacher stays in eval mode; student trains.
        self.old_backbone.eval()
        self.old_head.eval()
        self.backbone.train()
        self.head.train()
