
Data Science | Stochastic Optimization for AUC Maximization in Machine Learning


Title: Stochastic Optimization for AUC Maximization in Machine Learning

Time: December 26, 16:00-17:00

Venue: Room 200-9, Gongshang Building (工商楼)

Speaker: Prof. Yiming Ying (University at Albany, State University of New York)

Abstract:

Stochastic optimization algorithms such as stochastic gradient descent (SGD) update the model sequentially with cheap per-iteration costs, making them well suited to large-scale streaming data analysis. However, most existing studies focus on classification accuracy, a criterion that cannot be applied directly to the important problems of maximizing the Area Under the ROC Curve (AUC) in imbalanced classification and bipartite ranking.
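For context (notation added here, not taken from the announcement): with a convex surrogate loss \ell, AUC maximization is commonly posed as a pairwise objective over positive-negative pairs,

    \min_{w}\; f(w) \;=\; \mathbb{E}\big[\,\ell\big(w^{\top}(x - x')\big) \,\big|\, y = +1,\; y' = -1\,\big],

so the objective couples two examples at a time rather than decomposing into a sum of per-example losses, which is why plain per-example SGD does not apply directly.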
In this talk, I will present our recent work on developing novel SGD-type algorithms for AUC maximization. In particular, we establish minimax-optimal rates for existing AUC maximization algorithms using a stability-and-generalization trade-off framework. We then exploit the problem structure to develop fast stochastic optimization algorithms for AUC maximization with the least-squares loss; the main idea is to reformulate it as a stochastic saddle-point (min-max) problem. Furthermore, we extend this saddle-point formulation to general losses via Bernstein polynomial approximation, and to the setting of deep neural networks. This talk is based on a series of joint works with Michael Natole, Neyo Yang, Wei Shen, Siwei Lyu, and Tianbao Yang.
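As a rough sketch of the min-max idea mentioned above (reconstructed here for illustration; the exact formulation in the talk may differ), take the least-squares surrogate \ell(t) = (1 - t)^2 and let p = \Pr(y = +1). The pairwise objective can then be rewritten as a stochastic saddle-point problem over per-example terms,

    \min_{w, a, b}\; \max_{\alpha \in \mathbb{R}}\; \mathbb{E}_{(x,y)}\big[ F(w, a, b, \alpha; x, y) \big],

where

    F \;=\; (1-p)\,(w^{\top}x - a)^2\,\mathbb{I}_{[y=1]} \;+\; p\,(w^{\top}x - b)^2\,\mathbb{I}_{[y=-1]} \;+\; 2(1+\alpha)\,w^{\top}x\,\big(p\,\mathbb{I}_{[y=-1]} - (1-p)\,\mathbb{I}_{[y=1]}\big) \;-\; p(1-p)\,\alpha^2.

Because the expectation is over a single example (x, y), each iteration needs only one sample: stochastic gradient descent is applied to the primal variables (w, a, b) and stochastic gradient ascent to the dual variable \alpha, recovering the cheap per-iteration cost of standard SGD.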

Contact: Zhengchu Guo (guozhengchu@zju.edu.cn)