
Super-Class Mixup for Adjusting Training Data

EasyChair Preprint 7031

14 pages
Date: November 10, 2021

Abstract

Mixup is a data augmentation method for image recognition that generates new training data by mixing two images. Standard mixup samples the two images at random, without considering the similarity of their data or classes. This random sampling can pair dissimilar images, producing mixed samples that make network training difficult. In this paper, we propose a mixup method that takes super-classes into account, where a super-class is a superordinate category of object classes. When two images belong to the same super-class, the proposed method tends to mix them at a nearly equal ratio; when they belong to different super-classes, it generates samples dominated by one of the two images. As a result, the network can learn the features that distinguish similar object classes. Furthermore, we apply the proposed method to a mutual learning framework, which is expected to improve the network outputs used for mutual learning. Experimental results demonstrate that the proposed method improves recognition accuracy in both single-model training and mutual training. We also analyze the attention maps of the networks and show that the proposed method improves the highlighted regions, making the network focus correctly on the target object.
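The abstract does not give the exact mixing rule, but the idea can be sketched as follows. The snippet below is a minimal illustration, assuming the mixing ratio is drawn from a Beta distribution whose parameters depend on whether the two samples share a super-class; the mapping FINE_TO_SUPER and the hyperparameters alpha_same and alpha_diff are hypothetical, not values from the paper.

import numpy as np

# Illustrative fine-class -> super-class mapping (hypothetical; in practice
# this would come from a dataset hierarchy such as CIFAR-100's super-classes).
FINE_TO_SUPER = {0: 0, 1: 0, 2: 1, 3: 1}

def super_class_mixup(x1, y1, x2, y2, alpha_same=5.0, alpha_diff=0.2):
    """Mix a pair of samples with a super-class-aware ratio.

    x1, x2: image arrays of identical shape; y1, y2: one-hot label vectors.
    alpha_same and alpha_diff are assumed hyperparameters, not values
    reported in the paper.
    """
    same_super = FINE_TO_SUPER[int(y1.argmax())] == FINE_TO_SUPER[int(y2.argmax())]
    if same_super:
        # A symmetric Beta with a large parameter concentrates lam near 0.5,
        # so two images from the same super-class contribute about equally.
        lam = np.random.beta(alpha_same, alpha_same)
    else:
        # A small parameter pushes lam toward 0 or 1, so the mixed sample is
        # dominated by one of the two dissimilar images.
        lam = np.random.beta(alpha_diff, alpha_diff)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# Usage: fine classes 0 and 1 share super-class 0, so lam is likely near 0.5.
x_a, y_a = np.random.rand(3, 32, 32), np.eye(4)[0]
x_b, y_b = np.random.rand(3, 32, 32), np.eye(4)[1]
x_mix, y_mix = super_class_mixup(x_a, y_a, x_b, y_b)

Note that setting alpha_same equal to alpha_diff in this sketch would recover standard mixup, so the super-class distinction is what biases same-super-class pairs toward balanced mixing.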

Keyphrases: mixup, super-class, data augmentation

BibTeX entry
BibTeX does not have a suitable entry type for preprints. The following entry is a workaround that produces the correct reference:
@booklet{EasyChair:7031,
  author       = {Shungo Fujii and Naoki Okamoto and Toshiki Seo and Tsubasa Hirakawa and Takayoshi Yamashita and Hironobu Fujiyoshi},
  title        = {Super-Class Mixup for Adjusting Training Data},
  howpublished = {EasyChair Preprint 7031},
  year         = {EasyChair, 2021}}