| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels | B Han, Q Yao, X Yu, G Niu, M Xu, W Hu, IW Tsang, M Sugiyama | NeurIPS 2018 | 311 | 2018 |
| How does Disagreement Help Generalization against Label Corruption? | X Yu, B Han, J Yao, G Niu, IW Tsang, M Sugiyama | ICML 2019 | 92* | 2019 |
| Masking: A New Perspective of Noisy Supervision | B Han, J Yao, G Niu, M Zhou, IW Tsang, Y Zhang, M Sugiyama | NeurIPS 2018 | 75 | 2018 |
| Are Anchor Points Really Indispensable in Label-Noise Learning? | X Xia, T Liu, N Wang, B Han, C Gong, G Niu, M Sugiyama | NeurIPS 2019 | 41 | 2019 |
| Progressive Stochastic Learning for Noisy Labels | B Han, IW Tsang, L Chen, C Yu, SF Fung | IEEE Transactions on Neural Networks and Learning Systems | 22 | 2017 |
| On the Convergence of a Family of Robust Losses for Stochastic Gradient Descent | B Han, IW Tsang, L Chen | ECML 2016 | 18 | 2016 |
| Fast Image Recognition based on Independent Component Analysis | S Zhang, B He, R Nian, J Wang, B Han, A Lendasse, G Yuan | Cognitive Computation | 17 | 2014 |
| Towards Robust ResNet: A Small Step but A Giant Leap | J Zhang, B Han, L Wynter, KH Low, M Kankanhalli | IJCAI 2019 | 16 | 2019 |
| Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations | Q Yao, JT Kwok, B Han | ICML 2019 | 13* | 2019 |
| LARSEN: Selective Ensemble Learning using LARS for Blended Data | B Han, B He, R Nian, M Ma, S Zhang, M Li, A Lendasse | Neurocomputing | 13* | 2015 |
| Searching to Exploit Memorization Effect in Learning with Noisy Labels | Q Yao, H Yang, B Han, G Niu, JT Kwok | ICML 2020 | 12* | 2020 |
| SIGUA: Forgetting May Make Learning with Noisy Labels More Robust | B Han, G Niu, X Yu, Q Yao, M Xu, IW Tsang, M Sugiyama | ICML 2020 | 11 | 2020 |
| Attacks Which Do Not Kill Training Make Adversarial Learning Stronger | J Zhang, X Xu, B Han, G Niu, L Cui, M Sugiyama, M Kankanhalli | ICML 2020 | 11 | 2020 |
| Robust Plackett–Luce Model for k-ary Crowdsourced Preferences | B Han, Y Pan, IW Tsang | Machine Learning Journal | 11 | 2017 |
| Learning from Multiple Complementary Labels | L Feng, T Kaneko, B Han, G Niu, B An, M Sugiyama | ICML 2020 | 10 | 2020 |
| Confidence Scores Make Instance-dependent Label-noise Learning Possible | A Berthon, B Han, G Niu, T Liu, M Sugiyama | arXiv preprint arXiv:2001.03772 | 9 | 2020 |
| Butterfly: A Panacea for All Difficulties in Wildly Unsupervised Domain Adaptation | F Liu, J Lu, B Han, G Niu, G Zhang, M Sugiyama | arXiv preprint arXiv:1905.07720 (presented at a NeurIPS 2019 workshop) | 9 | 2019 |
| HSR: L1/2-regularized Sparse Representation for Fast Face Recognition using Hierarchical Feature Selection | B Han, B He, T Sun, T Yan, M Ma, Y Shen, A Lendasse | Neural Computing and Applications | 9 | 2015 |
| Matrix Co-completion for Multi-label Classification with Missing Features and Labels | M Xu, G Niu, B Han, IW Tsang, ZH Zhou, M Sugiyama | arXiv preprint arXiv:1805.09156 | 7 | 2018 |
| Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning | Y Yao, T Liu, B Han, M Gong, J Deng, G Niu, M Sugiyama | NeurIPS 2020 | 6 | 2020 |