A Feast of Machine Learning Theory: COLT 2020

The 33rd Annual Conference on Learning Theory (COLT 2020)

Country: Austria City: Graz

Held online

Abstr. due: 31.01.2020 Dates: 09.07.20 — 12.07.20

Area of Sciences: Machine Learning; Computer Science

Organizing committee e-mail: TBA; see http://learningtheory.org/colt2020/.

Organizers: Association for Computational Learning

The 33rd Annual Conference on Learning Theory (COLT 2020), originally planned for Graz, Austria, will take place online during July 9–12, 2020. We invite submissions of papers addressing theoretical aspects of machine learning and related topics. We strongly support a broad definition of learning theory, including, but not limited to:

  • Design and analysis of learning algorithms
  • Statistical and computational complexity of learning
  • Optimization methods for learning, and/or online and/or stochastic optimization
  • Supervised learning
  • Unsupervised and semi-supervised learning
  • Active and interactive learning
  • Reinforcement learning
  • Online learning and decision-making
  • Interactions of learning theory with other mathematical fields
  • Theory of artificial neural networks, including (theory of) deep learning
  • High-dimensional and non-parametric statistics
  • Learning with algebraic or combinatorial structure
  • Theoretical analysis of probabilistic graphical models
  • Bayesian methods in learning
  • Game theory and learning
  • Learning with system constraints (e.g., privacy, fairness, memory, communication)
  • Learning from complex data (e.g., networks, time series)
  • Learning in other settings (e.g., computational social science, economics)

This year, COLT is likewise being held as a virtual conference, a format that has become the new normal of the 2020s. To borrow David Hilbert's famous words: we must get used to it, and we will get used to it.


Accepted papers:

  • Locally Private Hypothesis Selection
    Sivakanth Gopi, Gautam Kamath, Janardhan D Kulkarni, Aleksandar Nikolov, Steven Wu, Huanyu Zhang
  • Differentially Private Mean Estimation of Heavy-Tailed Distributions
    Gautam Kamath, Vikrant Singhal, Jonathan Ullman
  • An O(m/eps^3.5)-Cost Algorithm for Semidefinite Programs with Diagonal Constraints
    Swati Padmanabhan, Yin Tat Lee
  • Adaptive Submodular Maximization under Stochastic Item Costs
    Srinivasan Parthasarathy
  • Gradient descent algorithms for Bures-Wasserstein barycenters
    Sinho Chewi, Philippe Rigollet, Tyler Maunu, Austin Stromme
  • On the gradient complexity of linear regression
    Elad Hazan, Mark Braverman, Max Simchowitz, Blake E Woodworth
  • Improper Learning for Non-Stochastic Control
    Max Simchowitz, Karan Singh, Elad Hazan
  • Root-n-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank
    Kefan Dong, Jian Peng, Yining Wang, Yuan Zhou
  • No-Regret Prediction in Marginally Stable Systems
    Udaya Ghai, Holden Lee, Karan Singh, Cyril Zhang, Yi Zhang
  • Halpern Iteration for Near-Optimal and Parameter-Free Monotone Inclusion and Strong Solutions to Variational Inequalities
    Jelena Diakonikolas
  • PAC learning with stable and private predictions
    Yuval Dagan, Vitaly Feldman
  • Learning a Single Neuron with Gradient Methods
    Gilad Yehudai, Ohad Shamir
  • Universal Approximation with Deep Narrow Networks
    Patrick Kidger, Terry J Lyons
  • Asymptotic Errors for High-Dimensional Convex Penalized Linear Regression beyond Gaussian Matrices
    Alia Abbara, Florent Krzakala, Cedric Gerbelot
  • On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems
    Dan Garber
  • From Nesterov’s Estimate Sequence to Riemannian Acceleration
    Kwangjun Ahn, Suvrit Sra
  • Selfish Robustness and Equilibria in Multi-Player Bandits
    Etienne Boursier, Vianney Perchet
  • How Good is SGD with Random Shuffling?
    Itay M Safran, Ohad Shamir
  • Exploration by Optimisation in Partial Monitoring
    Tor Lattimore, Csaba Szepesvari
  • Extending Learnability to Auxiliary-Input Cryptographic Primitives and Meta-PAC Learning
    Mikito Nanashima
  • Noise-tolerant, Reliable Active Classification with Comparison Queries
    Max Hopkins, Shachar Lovett, Daniel Kane, Gaurav Mahajan
  • Sharper Bounds for Uniformly Stable Algorithms
    Olivier Bousquet, Yegor Klochkov, Nikita Zhivotovskiy
  • Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes
    Alekh Agarwal, Sham Kakade, Jason Lee, Gaurav Mahajan
  • Consistent recovery threshold of hidden nearest neighbor graphs
    Jian Ding, Yihong Wu, Jiaming Xu, Dana Yang
  • High probability guarantees for stochastic convex optimization
    Damek Davis, Dmitriy Drusvyatskiy
  • Information Directed Sampling for Linear Partial Monitoring
    Johannes Kirschner, Tor Lattimore, Andreas Krause
  • ID3 Learns Juntas for Smoothed Product Distributions
    Eran Malach, Amit Daniely, Alon Brutzkus
  • Tight Lower Bounds for Combinatorial Multi-Armed Bandits
    Nadav Merlis, Shie Mannor
  • Domain Compression and its Application to Randomness-Optimal Distributed Goodness-of-Fit
    Jayadev Acharya, Clement L Canonne, Yanjun Han, Ziteng Sun, Himanshu Tyagi
  • Reasoning About Generalization via Conditional Mutual Information
    Thomas Steinke, Lydia Zakynthinou
  • A Greedy Anytime Algorithm for Sparse PCA
    Dan Vilenchik, Adam Soffer, Guy Holtzman
  • Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo
    Yin Tat Lee, Ruoqi Shen, Kevin Tian
  • Provably Efficient Reinforcement Learning with Linear Function Approximation
    Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael Jordan
  • A Fast Spectral Algorithm for Mean Estimation with Sub-Gaussian Rates
    Zhixian Lei, Kyle Luh, Prayaag Venkat, Fred Zhang
  • How to trap a gradient flow
    Dan Mikulincer, Sebastien Bubeck
  • Near-Optimal Algorithms for Minimax Optimization
    Tianyi Lin, Chi Jin, Michael Jordan
  • Model-Based Reinforcement Learning with a Generative Model is Minimax Optimal
    Alekh Agarwal, Sham Kakade, Lin Yang
  • Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise
    Maksim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, Hoi-To Wai
  • Fast Rates for Online Prediction with Abstention
    Gergely Neu, Nikita Zhivotovskiy
  • Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes
    Yichun Hu, Nathan Kallus, Xiaojie Mao
  • Data-driven confidence bands for distributed nonparametric regression
    Valeriy Avanesov
  • Tsallis-INF for Decoupled Exploration and Exploitation in Multi-armed Bandits
    Chloé Rouyer, Yevgeny Seldin
  • Pan-Private Uniformity Testing
    Kareem Amin, Matthew Joseph, Jieming Mao
  • ODE-Inspired Analysis for the Biological Version of Oja’s Rule in Solving Streaming PCA
    Mien Brabeeba Wang, Chi-Ning Chou
  • Complexity Guarantees for Polyak Steps with Momentum
    Mathieu Barre, Adrien B Taylor, Alexandre d’Aspremont
  • Calibrated Surrogate Losses for Adversarially Robust Classification
    Han Bao, Clayton Scott, Masashi Sugiyama
  • Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
    Oliver Hinder, Aaron Sidford, Nimit S Sohoni
  • Faster Projection-free Online Learning
    Edgar Minasyan, Elad Hazan
  • Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without
    Sebastien Bubeck, Yuanzhi Li, Yuval Peres, Mark Sellke
  • Coordination without communication: optimal regret in two players multi-armed bandits
    Sebastien Bubeck, Thomas Budzinski
  • EM Algorithm is Sample-Optimal for Learning Mixtures of Well-Separated Gaussians
    Jeongyeol Kwon, Constantine Caramanis
  • Online Learning with Vector Costs and Bandits with Knapsacks
    Thomas Kesselheim, Sahil Singla
  • Better Algorithms for Estimating Non-Parametric Models in Crowd-Sourcing and Rank Aggregation
    Allen X Liu, Ankur Moitra
  • Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding
    Xiaotong Yuan, Ping Li
  • Learning Halfspaces with Massart Noise Under Structured Distributions
    Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
  • Rigorous Guarantees for Tyler’s M-Estimator via Quantum Expansion
    William C Franks, Ankur Moitra
  • Active Learning for Identification of Linear Dynamical Systems
    Andrew J Wagenmaker, Kevin Jamieson
  • Bounds in query learning
    Hunter S Chase, James Freitag
  • Active Local Learning
    Arturs Backurs, Avrim Blum, Neha Gupta
  • Kernel and Rich Regimes in Overparametrized Models
    Blake E Woodworth, Suriya Gunasekar, Jason Lee, Edward Moroshko, Pedro Henrique Pamplona Savarese, Itay Golan, Daniel Soudry, Nathan Srebro
  • Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium
    Qiaomin Xie, Yudong Chen, Zhaoran Wang, Zhuoran Yang
  • Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-Learning
    Guannan Qu, Adam Wierman
  • Parallels Between Phase Transitions and Circuit Complexity?
    Colin P Sandon, Ankur Moitra, Elchanan Mossel
  • Hierarchical Clustering: A 0.585 Revenue Approximation
    Noga Alon, Yossi Azar, Danny Vainstein
  • Gradient descent follows the regularization path for general losses
    Ziwei Ji, Miroslav Dudik, Robert Schapire, Matus Telgarsky
  • Bessel Smoothing and Multi-Distribution Property Estimation
    Yi Hao, Ping Li
  • Privately Learning Thresholds: Closing the Exponential Gap
    Uri Stemmer, Moni Naor, Haim Kaplan, Yishay Mansour, Katrina Ligett
  • Pessimism About Unknown Unknowns Inspires Conservatism
    Michael K Cohen, Marcus Hutter
  • The Influence of Shape Constraints on the Thresholding Bandit Problem
    James Cheshire, Pierre Menard, Alexandra Carpentier
  • Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent
    James P Bailey, Gauthier Gidel, Georgios Piliouras
  • Efficient and robust algorithms for adversarial linear contextual bandits
    Gergely Neu, Julia Olkhovskaya
  • Non-asymptotic Analysis for Nonparametric Testing
    Yun Yang, Zuofeng Shang, Guang Cheng
  • Tree-projected gradient descent for estimating gradient-sparse parameters on graphs
    Sheng Xu, Zhou Fan, Sahand Negahban
  • Covariance-adapting algorithm for semi-bandits with application to sparse rewards
    Pierre Perrault, Vianney Perchet, Michal Valko
  • Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems
    Noah Golowich, Sarath Pattathil, Constantinos Daskalakis, Asuman Ozdaglar
  • A Corrective View of Neural Networks: Representation, Memorization and Learning
    Dheeraj M Nagaraj, Guy Bresler
  • Learning Entangled Single-Sample Gaussians in the Subset-of-Signals Model
    Yingyu Liang, Hui Yuan
  • Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
    Lénaïc Chizat, Francis Bach
  • Dimension-Free Bounds for Chasing Convex Functions
    Guru Guruganesh, Anupam Gupta, Charles Argue
  • Optimal group testing
    Oliver Gebhard, Philipp Loick, Maximilian Hahn-Klimroth, Amin Coja-Oghlan
  • On Suboptimality of Least Squares with Application to Estimation of Convex Bodies
    Gil Kur, Alexander Rakhlin, Adityanand Guntuboyina
  • A Nearly Optimal Variant of the Perceptron Algorithm for the Uniform Distribution on the Unit Sphere
    Marco Schmalhofer
  • A Closer Look at Small-loss Bounds for Bandits with Graph Feedback
    Chung-Wei Lee, Haipeng Luo, Mengxiao Zhang
  • Extrapolating the profile of a finite population
    Yihong Wu, Yury Polyanskiy, Soham Jana
  • Balancing Gaussian vectors in high dimension
    Paxton M Turner, Raghu Meka, Philippe Rigollet
  • On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels
    Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai
  • From tree matching to sparse graph alignment
    Luca Ganassali, Laurent Massoulie
  • Estimating Principal Components under Adversarial Perturbations
    Pranjal Awasthi, Xue Chen, Aravindan Vijayaraghavan
  • Logistic Regression Regret: What’s the Catch?
    Gil I Shamir
  • Efficient, Noise-Tolerant, and Private Learning via Boosting
    Mark Bun, Marco L Carmosino, Jessica Sorrell
  • Highly smooth minimization of non-smooth problems
    Brian Bullins
  • Proper Learning, Helly Number, and an Optimal SVM Bound
    Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy
  • Efficient Parameter Estimation of Truncated Boolean Product Distributions
    Dimitris Fotakis, Alkis Kalavasis, Christos Tzamos
  • Estimation and Inference with Trees and Forests in High Dimensions
    Vasilis Syrgkanis, Emmanouil Zampetakis
  • Closure Properties for Private Classification and Online Prediction
    Noga Alon, Amos Beimel, Shay Moran, Uri Stemmer
  • New Potential-Based Bounds for Prediction with Expert Advice
    Vladimir A Kobzar, Robert Kohn, Zhilei Wang
  • Distributed Signal Detection under Communication Constraints
    Jayadev Acharya, Clement L Canonne, Himanshu Tyagi
  • Hardness of Identity Testing for Restricted Boltzmann Machines and Potts models
    Antonio Blanca, Zongchen Chen, Eric Vigoda, Daniel Stefankovic
  • Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
    Ilias Diakonikolas, Daniel M Kane, Vasilis Kontonis, Nikos Zarifis
  • Costly Zero Order Oracles
    Renato Paes Leme, Jon Schneider
  • Precise Tradeoffs in Adversarial Training for Linear Regression
    Adel Javanmard, Mahdi Soltanolkotabi, Hamed Hassani
  • Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
    Pritish Kamath, Omar Montasser, Nathan Srebro
  • Wasserstein Control of Mirror Langevin Monte Carlo
    Kelvin Shuangjian Zhang, Gabriel Peyré, Jalal Fadili, Marcelo Pereyra
  • The estimation error of general first order methods
    Michael V Celentano, Andrea Montanari, Yuchen Wu
  • Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
    Yossi Arjevani, Yair Carmon, John Duchi, Dylan Foster, Ayush Sekhari, Karthik Sridharan
  • Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
    Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant
  • Free Energy Wells and Overlap Gap Property in Sparse PCA
    Ilias Zadik, Alexander S. Wein, Gerard Ben Arous
  • Robust causal inference under covariate shift via worst-case subpopulation treatment effects
    Sookyo Jeong, Hongseok Namkoong
  • Winnowing with Gradient Descent
    Ehsan Amid, Manfred K. Warmuth
  • Embedding Dimension of Polyhedral Losses
    Jessica J Finocchiaro, Rafael Frongillo, Bo Waggoner
  • List Decodable Subspace Recovery
    Morris Yau, Prasad Raghavendra
  • Approximation Schemes for ReLU Regression
    Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam Klivans, Mahdi Soltanolkotabi
  • Learning Over-parametrized Two-layer ReLU Neural Networks beyond NTK
    Yuanzhi Li, Tengyu Ma, Hongyang R Zhang
  • Fine-grained Analysis for Linear Stochastic Approximation with Averaging: Polyak-Ruppert, Non-asymptotic Concentration and Beyond
    Wenlong Mou, Chris Junchi Li, Martin Wainwright, Peter Bartlett, Michael Jordan
  • Learning Polynomials in Few Relevant Dimensions
    Sitan Chen, Raghu Meka
  • Efficient improper learning for online logistic regression
    Pierre Gaillard, Rémi Jézéquel, Alessandro Rudi
  • Lipschitz and Comparator-Norm Adaptivity in Online Learning
    Zakaria Mhammedi, Wouter M Koolen
  • Information Theoretic Optimal Learning of Gaussian Graphical Models
    Sidhant Misra, Marc D Vuffray, Andrey Lokhov
  • Reducibility and Statistical-Computational Gaps from Secret Leakage
    Matthew S Brennan, Guy Bresler
  • Taking a hint: How to leverage loss predictors in contextual bandits?
    Chen-Yu Wei, Haipeng Luo, Alekh Agarwal

Keynotes:
