PaddleOCR-EAST

Contents

  • EAST
  • Abstract
  • Train
    • PreProcess
    • Architecture
      • Backbone
      • Neck
      • Head
    • Loss
      • Dice Loss
      • SmoothL1 Loss
  • Infer
    • PostProcess
EAST
A note up front: this is a quick read-through of the algorithm implementations in the PaddleOCR codebase; where necessary, the original paper is reviewed first.
Abstract
  • Paper link: arxiv
  • Application scenario: text detection
  • Config file: configs/det/det_r50_vd_east.yml
Train

PreProcess

```python
import cv2
import numpy as np


class EASTProcessTrain(object):
    def __init__(self,
                 image_shape=[512, 512],
                 background_ratio=0.125,
                 min_crop_side_ratio=0.1,
                 min_text_size=10,
                 **kwargs):
        self.input_size = image_shape[1]
        self.random_scale = np.array([0.5, 1, 2.0, 3.0])
        self.background_ratio = background_ratio
        self.min_crop_side_ratio = min_crop_side_ratio
        self.min_text_size = min_text_size

    ...

    def __call__(self, data):
        im = data['image']
        text_polys = data['polys']
        text_tags = data['ignore_tags']
        if im is None:
            return None
        if text_polys.shape[0] == 0:
            return None

        # add rotate cases
        if np.random.rand() < 0.5:
            # rotate the image and the text polygons (90, 180 or 270 degrees)
            im, text_polys = self.rotate_im_poly(im, text_polys)
        h, w, _ = im.shape
        # clip polygon coordinates to the valid range, check polygon validity
        # (based on area) and make sure the points are in clockwise order
        text_polys, text_tags = self.check_and_validate_polys(text_polys,
                                                              text_tags, h, w)
        if text_polys.shape[0] == 0:
            return None

        # randomly rescale the image and the text polygons
        rd_scale = np.random.choice(self.random_scale)
        im = cv2.resize(im, dsize=None, fx=rd_scale, fy=rd_scale)
        text_polys *= rd_scale

        if np.random.rand() < self.background_ratio:
            # crop a pure-background patch; returns None if the crop contains any text box
            outs = self.crop_background_infor(im, text_polys, text_tags)
        else:
            # Randomly crop a patch together with the text boxes it contains, and build
            # label maps from the shrunk boxes:
            # - score_map: shape=[h,w]; 1 inside text regions, 0 elsewhere
            # - geo_map: shape=[h,w,9]; the first 8 channels hold the horizontal and
            #   vertical distances from pixels inside the shrunk box to the real box,
            #   and the last channel is used for loss normalization (the reciprocal of
            #   each box's shortest side length)
            # - training_mask: shape=[h,w]; keeps invalid text boxes out of training,
            #   1 where valid, 0 where invalid
            outs = self.crop_foreground_infor(im, text_polys, text_tags)

        if outs is None:
            return None
        im, score_map, geo_map, training_mask = outs
        # final downsampled score map, shape=[1,h//4,w//4]
        score_map = score_map[np.newaxis, ::4, ::4].astype(np.float32)
        # final downsampled geo map, shape=[9,h//4,w//4]
        geo_map = np.swapaxes(geo_map, 1, 2)
        geo_map = np.swapaxes(geo_map, 1, 0)
        geo_map = geo_map[:, ::4, ::4].astype(np.float32)
        # final downsampled training mask, shape=[1,h//4,w//4]
        training_mask = training_mask[np.newaxis, ::4, ::4]
        training_mask = training_mask.astype(np.float32)

        data['image'] = im[0]
        data['score_map'] = score_map
        data['geo_map'] = geo_map
        data['training_mask'] = training_mask
        return data
```
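To make the label layout concrete, here is a small NumPy-only sketch (illustrative, not PaddleOCR code: the box, the sign convention of the offsets, and the skipped shrink step are my own simplifying assumptions) of how one text box turns into the score/geo/training-mask maps and how they are downsampled by 4:

```python
import numpy as np

# One axis-aligned toy box on a 512x512 crop (the real pipeline shrinks the quad first).
h, w = 512, 512
box = np.array([[100, 100], [300, 100], [300, 160], [100, 160]], dtype=np.float32)

score_map = np.zeros((h, w), dtype=np.float32)
geo_map = np.zeros((h, w, 9), dtype=np.float32)
training_mask = np.ones((h, w), dtype=np.float32)

score_map[100:160, 100:300] = 1.0                      # text pixels
ys, xs = np.nonzero(score_map)
for c in range(4):                                     # 8 channels: offsets to the 4 corners
    geo_map[ys, xs, 2 * c] = box[c, 0] - xs            # horizontal distance (sign is illustrative)
    geo_map[ys, xs, 2 * c + 1] = box[c, 1] - ys        # vertical distance
short_side = min(np.linalg.norm(box[0] - box[1]), np.linalg.norm(box[1] - box[2]))
geo_map[ys, xs, 8] = 1.0 / short_side                  # 9th channel: loss normalizer

# Downsample by 4, as in EASTProcessTrain.__call__
score_map = score_map[np.newaxis, ::4, ::4].astype(np.float32)        # [1,128,128]
geo_map = geo_map.transpose(2, 0, 1)[:, ::4, ::4].astype(np.float32)  # [9,128,128]
training_mask = training_mask[np.newaxis, ::4, ::4].astype(np.float32)
print(score_map.shape, geo_map.shape, training_mask.shape)
```

Per the comments above, the real code additionally shrinks each quad before filling score_map, and zeroes training_mask over boxes tagged as ignore so that they do not contribute to training.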
Architecture

Backbone
resnet50_vd is used as the backbone, yielding 4 downsampled feature maps at 1/4, 1/8, 1/16 and 1/32 of the input resolution.

Neck
The neck follows a U-Net decoder design and performs bottom-up feature fusion: starting from the 1/32 feature map, features are progressively fused up to the 1/4 feature map, producing a single 1/4-scale feature map that carries multi-scale information.
```python
def forward(self, x):
    # x holds the 4 feature maps taken from the backbone
    f = x[::-1]                           # feature maps now ordered from small to large
    h = f[0]                              # [b,512,h/32,w/32]
    g = self.g0_deconv(h)                 # [b,128,h/16,w/16]
    h = paddle.concat([g, f[1]], axis=1)  # [b,128+256,h/16,w/16]
    h = self.h1_conv(h)                   # [b,128,h/16,w/16]
    g = self.g1_deconv(h)                 # [b,128,h/8,w/8]
    h = paddle.concat([g, f[2]], axis=1)  # [b,128+128,h/8,w/8]
    h = self.h2_conv(h)                   # [b,128,h/8,w/8]
    g = self.g2_deconv(h)                 # [b,128,h/4,w/4]
    h = paddle.concat([g, f[3]], axis=1)  # [b,128+64,h/4,w/4]
    h = self.h3_conv(h)                   # [b,128,h/4,w/4]
    g = self.g3_conv(h)                   # [b,128,h/4,w/4]
    return g
```
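As a quick illustration of a single fusion stage, the sketch below (standalone Paddle code with assumed kernel sizes; the real g0_deconv/h1_conv configurations may differ) upsamples the 1/32 map, concatenates it with the backbone's 1/16 map, and fuses the result with a conv, reproducing the `[b,128+256,h/16,w/16] -> [b,128,h/16,w/16]` step:

```python
import paddle
import paddle.nn as nn

# One fusion stage with assumed kernel sizes (illustrative, not the actual neck module).
g0_deconv = nn.Conv2DTranspose(512, 128, kernel_size=4, stride=2, padding=1)  # 2x upsample
h1_conv = nn.Conv2D(128 + 256, 128, kernel_size=3, padding=1)                 # fuse channels

f32 = paddle.rand([1, 512, 16, 16])   # backbone 1/32 map for a 512x512 input
f16 = paddle.rand([1, 256, 32, 32])   # backbone 1/16 map

g = g0_deconv(f32)                    # [1, 128, 32, 32]
h = paddle.concat([g, f16], axis=1)   # [1, 384, 32, 32]
h = h1_conv(h)                        # [1, 128, 32, 32]
print(h.shape)
```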
Head
The head produces a classification branch and a (quad) regression branch; the two branches share part of their parameters.

```python
def forward(self, x, targets=None):
    # x is the fused 1/4 feature map; det_conv1 and det_conv2 further strengthen the features
    f_det = self.det_conv1(x)      # [b,128,h/4,w/4]
    f_det = self.det_conv2(f_det)  # [b,64,h/4,w/4]
    # [b,1,h/4,w/4], foreground/background classification; note kernel_size=1
    f_score = self.score_conv(f_det)
    f_score = F.sigmoid(f_score)   # text score
    # [b,8,h/4,w/4]; the 8 channels are dx1,dy1,dx2,dy2,dx3,dy3,dx4,dy4
    f_geo = self.geo_conv(f_det)
    # the regression range becomes [-800, 800], so the longest side of a recovered
    # text box cannot exceed 1600
    f_geo = (F.sigmoid(f_geo) - 0.5) * 2 * 800

    pred = {'f_score': f_score, 'f_geo': f_geo}
    return pred
```
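A quick numeric check of that range mapping (plain NumPy, illustrative only): sigmoid outputs lie in (0, 1), so the scaled corner offsets stay inside (-800, 800), which bounds the side length of any recovered quad at roughly 1600 pixels.

```python
import numpy as np

x = np.array([-20.0, -1.0, 0.0, 1.0, 20.0])         # raw geo_conv outputs
f_geo = (1.0 / (1.0 + np.exp(-x)) - 0.5) * 2 * 800  # same mapping as the head
print(f_geo)  # roughly [-800, -370, 0, 370, 800]: offsets never leave (-800, 800)
```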
Loss
The classification branch uses dice_loss and the regression branch uses smooth_l1_loss.

```python
import paddle
from paddle import nn


class EASTLoss(nn.Layer):
    def __init__(self, eps=1e-6, **kwargs):
        super(EASTLoss, self).__init__()
        self.dice_loss = DiceLoss(eps=eps)  # DiceLoss from PaddleOCR's basic detection losses

    def forward(self, predicts, labels):
        """
        Params:
            predicts: {'f_score': foreground score map, 'f_geo': regression map}
            labels: [imgs, l_score, l_geo, l_mask]
        """
        l_score, l_geo, l_mask = labels[1:]
        f_score = predicts['f_score']
        f_geo = predicts['f_geo']

        # classification loss
        dice_loss = self.dice_loss(f_score, l_score, l_mask)

        channels = 8
        # channels + 1 because the last map holds the short-side normalization factor
        # (see the PreProcess section); the first 8 are the relative-offset labels
        # [[b,1,h/4,w/4], ...], 9 tensors
        l_geo_split = paddle.split(l_geo, num_or_sections=channels + 1, axis=1)
        # [[b,1,h/4,w/4], ...], 8 tensors
        f_geo_split = paddle.split(f_geo, num_or_sections=channels, axis=1)
        smooth_l1 = 0
        for i in range(0, channels):
            geo_diff = l_geo_split[i] - f_geo_split[i]  # diff = label - pred
            abs_geo_diff = paddle.abs(geo_diff)
            # pixels whose abs diff is < 1 and that contain text
            smooth_l1_sign = paddle.less_than(abs_geo_diff, l_score)
            smooth_l1_sign = paddle.cast(smooth_l1_sign, dtype='float32')
            # smooth-l1 loss: sum of the (<1) and (>=1) branches; the (<1) branch is not
            # multiplied by 0.5 here, which makes little practical difference
            in_loss = abs_geo_diff * abs_geo_diff * smooth_l1_sign + \
                      (abs_geo_diff - 0.5) * (1.0 - smooth_l1_sign)
            # normalize by (shortest side * 8)
            out_loss = l_geo_split[-1] / channels * in_loss * l_score
            smooth_l1 += out_loss
        # paddle.mean(smooth_l1) alone would suffice: l_score has already been multiplied
        # in above, so multiplying it again changes nothing
        smooth_l1_loss = paddle.mean(smooth_l1 * l_score)

        # dice_loss weight 0.01, smooth_l1_loss weight 1
        dice_loss = dice_loss * 0.01
        total_loss = dice_loss + smooth_l1_loss
        losses = {"loss": total_loss,
                  "dice_loss": dice_loss,
                  "smooth_l1_loss": smooth_l1_loss}
        return losses
```
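To make the `paddle.less_than(abs_geo_diff, l_score)` trick explicit, here is a toy NumPy recomputation of the per-pixel geo loss for three pixels (made-up values, not real labels): comparing against l_score selects the quadratic branch only on text pixels with a diff below 1, while background pixels are removed anyway by the final multiplication with l_score.

```python
import numpy as np

# Three pixels: small-diff text pixel, large-diff text pixel, background pixel.
l_score = np.array([1.0, 1.0, 0.0])        # score label (1 = text)
abs_diff = np.array([0.3, 2.0, 5.0])       # |label - pred| for one geo channel
norm = np.array([1 / 40, 1 / 40, 1 / 40])  # 9th geo channel: 1 / shortest box side

sign = (abs_diff < l_score).astype(np.float32)   # quadratic branch where diff < 1 on text pixels
in_loss = abs_diff ** 2 * sign + (abs_diff - 0.5) * (1.0 - sign)
out_loss = norm / 8 * in_loss * l_score          # normalize by 8 * short side, mask background
print(out_loss)  # [~0.00028, ~0.0047, 0.0]
```

The mean of these per-pixel values, plus 0.01 times the dice loss, gives the total loss returned above.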
