Learning with Noisy Labels: Progressive Sample Selection

Date

2023

Abstract

In an era of rapid progress in artificial intelligence, deep learning has achieved good results on a variety of image recognition tasks, yet these models are typically trained on clean datasets. Building a large, clean dataset requires enormous annotation cost, and even some large open-source datasets contain human labeling errors. To reduce the cost of dataset construction and the impact of wrong labels, learning with noisy labels studies how to train a stable and usable model on a dataset that contains labeling errors. In previous work, clean-sample selection techniques such as Gaussian mixture models or JS divergence could not accurately pick out all of the clean samples. Therefore, from the viewpoint of model prediction stability, and following related work that incorporates the KNN algorithm, this thesis performs multi-stage selection using both the stability of model predictions and the similarity of sample features. Adopting the two-network architecture used in recent papers, we find that in the early stage of training the predictive ability of the KNN model is worse than that of the two networks. To make effective use of both the two networks' predictions and the KNN model, we use a prediction-stability indicator to introduce the KNN model progressively, which helps us separate clean labels from noisy samples. Experimental results show that our method performs well under different noise types and noise rates, demonstrating its effectiveness.
With the rapid development of artificial intelligence, deep learning has achieved promising results in many computer vision problems. However, these models are typically trained on clean datasets, and building a large, clean dataset requires substantial annotation effort; even some large open-source datasets contain human labeling errors. To reduce the cost of data labeling and the impact of wrong labels, learning with noisy labels studies how to train a stable and usable model on a dataset that contains incorrect labels. Previously proposed selection techniques, such as Gaussian mixture models and JS divergence, cannot accurately identify all clean samples. From the perspective of model prediction stability, we follow prior work that uses the KNN algorithm and perform a multi-stage selection. In particular, under a two-network architecture, we find that the predictive ability of the KNN model in the early stage of training is worse than that of the two networks. To effectively use the prediction results of both methods, we consider model prediction stability and gradually use the KNN model to separate the samples into clean and noisy sets. Experimental results on three benchmarks show that our proposed method outperforms state-of-the-art methods under different noise types and noise rates, demonstrating its effectiveness.
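The abstract names three ingredients: a loss-based clean-sample score (the Gaussian-mixture-model approach used in prior work), a KNN check over feature similarity, and a prediction-stability indicator that progressively shifts weight toward the KNN score. Below is a minimal Python sketch of how such a progressive selection step could be assembled with scikit-learn; the function names, the stability definition, the blending rule, and the threshold tau are illustrative assumptions, not the exact procedure of the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier


def gmm_clean_probability(losses):
    """Fit a two-component GMM over per-sample losses and return the
    posterior probability of the low-mean (presumed clean) component."""
    losses = np.asarray(losses, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    return gmm.predict_proba(losses)[:, clean_component]


def knn_label_agreement(features, labels, k=10):
    """Score each sample by how strongly its k nearest neighbours in
    feature space support its given label (the sample itself is counted
    among the neighbours in this simplified version)."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(features, labels)
    proba = knn.predict_proba(features)                # (n, n_classes)
    class_idx = np.searchsorted(knn.classes_, labels)  # map labels to columns
    return proba[np.arange(len(labels)), class_idx]


def prediction_stability(pred_history):
    """Per-sample stability: fraction of consecutive epochs in which the
    predicted class did not change (illustrative definition)."""
    pred_history = np.asarray(pred_history)            # (epochs, n_samples)
    unchanged = pred_history[1:] == pred_history[:-1]
    return unchanged.mean(axis=0)


def progressive_selection(losses, features, labels, pred_history, tau=0.5):
    """Blend the loss-based and KNN-based scores, giving the KNN score
    more weight as predictions stabilise; returns a boolean 'clean' mask."""
    stability = prediction_stability(pred_history)
    p_clean = gmm_clean_probability(losses)
    p_knn = knn_label_agreement(features, labels)
    score = (1.0 - stability) * p_clean + stability * p_knn
    return score > tau
```

In a full pipeline, the selected clean subset would typically supervise the two networks while the remaining samples are treated as unlabeled in a semi-supervised step, consistent with the semi-supervised learning keyword below; the exact scheduling and combination used in the thesis may differ from this sketch.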

Keywords

Deep learning, Image classification, Semi-supervised learning, Noisy label
