Tailin Wu, Willie Neiswanger, Hongtao Zheng, Stefano Ermon, Jure Leskovec. Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution. In AAAI 2024.
Defu Cao, Yixiang Zheng, Parisa Hassanzadeh, Simran Lamba, Xiaomo Liu, Yan Liu. Large Scale Financial Time Series Forecasting with Multi-faceted Model. In ICAIF 2023.
Yizhou Zhang, Jingchao Ni, Wei Cheng, Zhengzhang Chen, Liang Tong, Haifeng Chen, Yan Liu. Hierarchical Gaussian Mixture based Task Generative Model for Robust Meta-Learning. In NeurIPS 2023.
Yizhou Zhang, Karishma Sharma, Yan Liu. Capturing Cross-Platform Interaction for Identifying Coordinated Accounts of Misinformation Campaigns. In ICAIF 2023.
Loc Trinh, Tim Chu, Zijun Cui, Anand Malpani, Cherine Yang, Istabraq Dalieh, Alvin Hui, Oscar Gomez, Andrew Hung, Yan Liu. Self-supervised Sim-to-Real Kinematics Reconstruction for Video-based Assessment of Intraoperative Suturing Skills. In MICCAI 2023.
Emily Nguyen, Zijun Cui, Georgia Kokaraki, Joseph Carlson, Yan Liu. Transferable and Interpretable Treatment Effectiveness Prediction for Ovarian Cancer via Multimodal Deep Learning. In AMIA 2023.
Defu Cao, Zhaowen Wang, Jose Echevarria, Yan Liu. SVGformer: Representation Learning for Continuous Vector Graphics using Transformers. In CVPR 2023.
Xiongye Xiao, Defu Cao, Ruochen Yang, Gaurav Gupta, Gengshuo Liu, Chenzhong Yin, Radu Balan, Paul Bogdan. Coupled Multiwavelet Neural Operator Learning for Coupled Partial Differential Equations. In ICLR 2023.
Hao Niu, Guillaume Habault, Roberto Legaspi, Chuizheng Meng, Defu Cao, Shinya Wada, Chihiro Ono, Yan Liu. Time-delayed Multivariate Time Series Predictions. In SDM 2023.
Gabriele Farina, Julien Grand-Clément, Christian Kroer, Chung-Wei Lee, and Haipeng Luo. Regret Matching+: (In)Stability and Fast Convergence in Games. In NeurIPS 2023.
Tiancheng Jin, Junyan Liu, Chloé Rouyer, William Chang, Chen-Yu Wei, and Haipeng Luo. No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. In NeurIPS 2023.
Tiancheng Jin, Junyan Liu, and Haipeng Luo. Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: FTRL with General Regularizers and Multiple Optimal Arms. In NeurIPS 2023.
Yang Cai, Haipeng Luo, Chen-Yu Wei, and Weiqiang Zheng. Uncoupled and Convergent Learning in Two-Player Zero-Sum Markov Games. In NeurIPS 2023.
Mengxiao Zhang, Yuheng Zhang, Olga Vrousgou, Haipeng Luo, and Paul Mineiro. Practical Contextual Bandits with Feedback Graphs. In NeurIPS 2023.
Yan Dai, Haipeng Luo, Chen-Yu Wei, and Julian Zimmert. Refined Regret for Adversarial MDPs with Linear Function Approximation. In ICML 2023.
Haipeng Luo, Hanghang Tong, Mengxiao Zhang, and Yuheng Zhang. Improved High-Probability Regret for Adversarial Bandits with Time-Varying Feedback Graphs. In ALT 2023.
Mengxiao Zhang, Shi Chen, Haipeng Luo, and Yingfei Wang. No-Regret Learning in Two-Echelon Supply Chain with Unknown Demand Distribution. In AISTATS 2023.
Mehdi Jafarnia-Jahromi, Liyu Chen, Rahul Jain, and Haipeng Luo. Posterior Sampling-based Online Learning for the Stochastic Shortest Path Model. In UAI 2023.
Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, Ming-Hsuan Yang. Diffusion Models: A Comprehensive Survey of Methods and Applications. In ACM Computing Surveys 2023.
Minqi Jiang, Chaochuan Hou, Ao Zheng, Songqiao Han, Hailiang Huang, Qingsong Wen, Xiyang Hu, Yue Zhao. ADGym: Design Choices for Deep Anomaly Detection. In NeurIPS 2023.
Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang, Bei Yu. Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks. In ICML 2023.
Yue Zhao, George H Chen, Zhihao Jia. TOD: GPU-accelerated Outlier Detection via Tensor Operations. In VLDB 2023.
Yue Zhao, Guoqing Zheng, Subhabrata Mukherjee, Robert McCann, Ahmed Awadallah. ADMoE: Anomaly Detection with Mixture-of-Experts from Noisy Labels. In AAAI 2023.
Sang Keun Choe, Sanket Vaibhav Mehta, Hwijeen Ahn, Willie Neiswanger, Pengtao Xie, Emma Strubell, Eric Xing. Making Scalable Meta Learning Practical. In NeurIPS 2023.
Ye Yuan, Can Chen, Zixuan Liu, Willie Neiswanger, Xue Liu. Importance-Aware Co-Teaching for Offline Model-Based Optimization. In NeurIPS 2023.
Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, Ilija Bogunovic. Near-optimal Policy Identification in Active Reinforcement Learning. In ICLR 2023.
Sang Keun Choe, Willie Neiswanger, Pengtao Xie, Eric Xing. Betty: An Automatic Differentiation Library for Multilevel Optimization. In ICLR 2023.
Benedikt Boecking, Nicholas Roberts, Willie Neiswanger, Stefano Ermon, Frederic Sala, Artur Dubrawski. Generative Modeling Helps Weak Supervision (and Vice Versa). In ICLR 2023.
Lantao Yu, Tianhe Yu, Jiaming Song, Willie Neiswanger, Stefano Ermon. Offline Imitation Learning with Suboptimal Demonstrations via Relaxed Distribution Matching. In AAAI 2023.
Sara Miskovich, Willie Neiswanger, William Colocho, Claudio Emma, Jacqueline Garrahan, Timothy Maxwell, Christopher Mayes, Stefano Ermon, Auralee Edelen, Daniel Ratner. Multipoint-BAX: A New Approach for Efficiently Tuning Particle Accelerator Emittance via Virtual Objectives. In MLST 2023.
Wang Zhu, Jesse Thomason, and Robin Jia. Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering. In EMNLP 2023.
Deqing Fu, Ameya Godbole, and Robin Jia. SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples. In EMNLP 2023.
Harvey Yiyun Fu, Qinyuan Ye, Albert Xu, Xiang Ren, and Robin Jia. Estimating Large Language Model Capabilities without Labeled Test Data. In EMNLP Findings 2023.
Qinyuan Ye, Harvey Yiyun Fu, Xiang Ren, and Robin Jia. How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench. In EMNLP Findings 2023.
Ting-Yun Chang and Robin Jia. Data Curation Alone Can Stabilize In-context Learning. In ACL 2023.
Albert Xu, Xiang Ren, and Robin Jia. Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large Language Models. In ACL 2023.
Nelson F. Liu, Ananya Kumar, Percy Liang, and Robin Jia. Are Sample-Efficient NLP Models More Robust? In ACL 2023.
Nelson F. Liu, Tony Lee, Robin Jia, and Percy Liang. Do Question Answering Modeling Improvements Hold Across Benchmarks? In ACL 2023.
Wang Zhu, Ishika Singh, Yuan Huang, Robin Jia, and Jesse Thomason. Does VLN Pretraining Work with Nonsensical or Irrelevant Instructions? In O-DRUM 2023.
Ameya Godbole and Robin Jia. Benchmarking Long-tail Generalization with Likelihood Splits. In EACL Findings 2023.
James Enouen, Yan Liu. Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection. In NeurIPS 2022.
Chuizheng Meng, Hao Niu, Guillaume Habault, Roberto Legaspi, Shinya Wada, Chihiro Ono, Yan Liu. Physics-Informed Long-Sequence Spatiotemporal Forecasting with Multi-Resolution Data. In IJCAI 2022.
Defu Cao, Yousef El-Laham, Loc Trinh, Svitlana Vyetrenko, Yan Liu. DSLOB: A Synthetic Limit Order Book Dataset for Benchmarking Forecasting Algorithms under Distributional Shift. In NeurIPS 2022.
Defu Cao, James Enouen, Yan Liu. Estimating Treatment Effects in Continuous Time with Hidden Confounders. In ICML 2022.
Chuizheng Meng, Sungyong Seo, Defu Cao, Sam Griesemer, Yan Liu. When Physics Meets Machine Learning: A Survey of Physics-Informed Machine Learning. arXiv preprint, 2022.
Xianhua Liu, Defu Cao, Qinshu Chen. Hardware Reusability Optimization for High-Level Synthesis of Component-Based Processors. In IEEE ICCCAS 2022.
Yizhou Zhang, Defu Cao, Yan Liu. Counterfactual Neural Temporal Point Process for Estimating Causal Influence of Misinformation on Social Media. In NeurIPS 2022.
Jiangang Bai, Yujing Wang, Hong Sun, Ruonan Wu, Tianmeng Yang, Pengfei Tang, Wei Shen, Defu Cao, Mingliang Zhang, Yaming Yang, Jing Bai, Yunhai Tong, Hao Sun, Ruofei Zhang. Enhancing Self-Attention with Knowledge-Assisted Attention Maps. In NAACL 2022.
Hao Niu, Chuizheng Meng, Defu Cao, Guillaume Habault, Roberto Legaspi, Shinya Wada, Chihiro Ono, Yan Liu. Mu2ReST: Multi-Resolution Recursive Spatio-Temporal Transformer for Long-Term Prediction. In PAKDD 2022.
Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, Marinka Zitnik. Artificial Intelligence Foundation for Therapeutic Science. In Nature Chemical Biology 2022.
Kay Liu*, Yingtong Dou*, Yue Zhao*, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H Chen, Zhihao Jia, Philip S Yu. BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs. In NeurIPS 2022.
Songqiao Han, Xiyang Hu, Hailiang Huang, Minqi Jiang, Yue Zhao. ADBench: Anomaly Detection Benchmark. In NeurIPS 2022.
Zheng Li*, Yue Zhao*, Xiyang Hu, Nicola Botta, Cezar Ionescu, George H Chen. ECOD: Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions. In TKDE 2022.
Yuxin Xiao, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency. Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis. In EMNLP Findings 2022.
Viraj Mehta, Ian Char, Joseph Abbate, Rory Conlin, Mark D Boyer, Stefano Ermon, Jeff Schneider, Willie Neiswanger. Exploration via Planning for Information about the Optimal Trajectory. In NeurIPS 2022.
Willie Neiswanger, Lantao Yu, Shengjia Zhao, Chenlin Meng, Stefano Ermon. Generalizing Bayesian Optimization with Decision-theoretic Entropies. In NeurIPS 2022.
Jiaming Song, Lantao Yu, Willie Neiswanger, Stefano Ermon. A General Recipe for Likelihood-Free Bayesian Optimization. In ICML 2022.
Charles Marx, Shengjia Zhao, Willie Neiswanger, Stefano Ermon. Modular Conformal Calibration. In ICML 2022.
Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, Willie Neiswanger. An Experimental Design Perspective on Model-Based Reinforcement Learning. In ICLR 2022.
Chenlin Meng, Enci Liu, Willie Neiswanger, Jiaming Song, Marshall Burke, David Lobell, Stefano Ermon. IS-Count: Large-Scale Object Counting from Satellite Images with Covariate-Based Importance Sampling. In AAAI 2022.
Max E Fenstermacher et al., including Willie Neiswanger. DIII-D research advancing the physics basis for optimizing the tokamak approach to fusion energy. In Nuclear Fusion 2022.
Wang Zhu, Jesse Thomason, and Robin Jia. Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems. In EMNLP Findings 2022.
Rajarshi Das, Ameya Godbole, Ankita Naik, Elliot Tower, Manzil Zaheer, Hannaneh Hajishirzi, Robin Jia, and Andrew McCallum. Knowledge base question answering by case-based reasoning over subgraphs. In ICML 2022.
Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. On the Robustness of Reading Comprehension Models to Entity Renaming. In NAACL 2022.
Max Bartolo, Tristan Thrush, Sebastian Riedel, Pontus Stenetorp, Robin Jia, and Douwe Kiela. Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants. In NAACL 2022.
Robin Jia, Mike Lewis, and Luke Zettlemoyer. Question Answering Infused Pre-training of General-Purpose Contextualized Representations. In ACL Findings 2022.
Eric Wallace, Adina Williams, Robin Jia, and Douwe Kiela. Analyzing Dynamic Adversarial Training Data in the Limit. In ACL Findings 2022.
Bill Yuchen Lin, Sida Wang, Xi Victoria Lin, Robin Jia, Lin Xiao, Xiang Ren, and Scott Yih. On Continual Model Refinement in Out-of-Distribution Data Streams. In ACL 2022.
Ioannis Anagnostides, Gabriele Farina, Christian Kroer, Chung-Wei Lee, Haipeng Luo, and Tuomas Sandholm. Uncoupled Learning Dynamics with O(log T) Swap Regret in Multiplayer Games. In NeurIPS 2022.
Gabriele Farina, Ioannis Anagnostides, Haipeng Luo, Chung-Wei Lee, Christian Kroer, and Tuomas Sandholm. Near-Optimal No-Regret Learning for General Convex Games. In NeurIPS 2022.
Yan Dai, Haipeng Luo, and Liyu Chen. Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback. In NeurIPS 2022.
Tiancheng Jin, Tal Lancewicki, Haipeng Luo, Yishay Mansour, and Aviv Rosenberg. Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback. In NeurIPS 2022.
Liyu Chen and Haipeng Luo. Near-Optimal Goal-Oriented Reinforcement Learning in Non-Stationary Environments. In NeurIPS 2022.
Liyu Chen, Haipeng Luo, and Aviv Rosenberg. Policy Optimization for Stochastic Shortest Path. In COLT 2022.
Haipeng Luo, Mengxiao Zhang, Peng Zhao, and Zhi-Hua Zhou. Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. In COLT 2022.
Haipeng Luo, Mengxiao Zhang, and Peng Zhao. Adaptive Bandit Convex Optimization with Heterogeneous Curvature. In COLT 2022.
Liyu Chen, Rahul Jain, and Haipeng Luo. Improved No-Regret Algorithms for Stochastic Shortest Path with Linear MDP. In ICML 2022.
Liyu Chen, Rahul Jain, and Haipeng Luo. Learning Infinite-Horizon Average-Reward Markov Decision Processes with Constraints. In ICML 2022.
Mengxiao Zhang, Peng Zhao, Haipeng Luo, and Zhi-Hua Zhou. No-Regret Learning in Time-Varying Zero-Sum Games. In ICML 2022.
Gabriele Farina, Chung-Wei Lee, Haipeng Luo, and Christian Kroer. Kernelized Multiplicative Weights for 0/1-Polyhedral Games: Bridging the Gap Between Learning in Extensive-Form and Normal-Form Games. In ICML 2022.
Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts, Adina Williams, and Douwe Kiela. Dynaboard: An Evaluation-As-A-Service Platform for Holistic Next-Generation Benchmarking. In NeurIPS 2021.
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little. In EMNLP 2021.
Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela. Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation. In EMNLP 2021.
Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela, and Adina Williams. To What Extent do Human Explanations of Model Behavior Align with Actual Model Behavior? In BlackBoxNLP 2021.
Johnny Tian-Zheng Wei and Robin Jia. The Statistical Advantage of Automatic NLG Metrics at the System Level. In ACL 2021.
Pedro Rodriguez, Joe Barrow, Alexander Hoyle, John P. Lalor, Robin Jia, and Jordan Boyd-Graber. Evaluation Examples Are Not Equally Informative: How Should That Change NLP Leaderboards? In ACL 2021.
Ana Valeria Gonzalez, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer. Do Explanations Help Users Detect Errors in Open-Domain QA? An Evaluation of Spoken vs. Visual Explanations. In ACL Findings 2021.
Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, and Percy Liang. Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality. In NAACL 2021.
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. Dynabench: Rethinking Benchmarking in NLP. In NAACL 2021.
Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, Andrew Nelson, Mark Boyer, Egemen Kolemen, Jeff Schneider. Neural Dynamical Systems: Balancing Structure and Flexibility in Physical Prediction. In CDC 2021.
Youngseog Chung, Willie Neiswanger, Ian Char, Jeff Schneider. Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification. In NeurIPS 2021.
Avanika Narayan, Piero Molino, Karan Goel, Willie Neiswanger, Christopher Re. Personalized Benchmarking with the Ludwig Benchmarking Toolkit. In NeurIPS 2021.
Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger. Synthetic Benchmarks for Scientific Research in Explainable Machine Learning. In NeurIPS 2021.
Willie Neiswanger, Ke Alexander Wang, Stefano Ermon. Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information. In ICML 2021.
Aurick Qiao, Sang Keun Choe, Suhas Jayaram Subramanya, Willie Neiswanger, Qirong Ho, Hao Zhang, Gregory R Ganger, Eric P Xing. Pollux: Co-adaptive Cluster Scheduling for Goodput-optimized Deep Learning. In OSDI 2021.
Willie Neiswanger, Aaditya Ramdas. Uncertainty Quantification Using Martingales for Misspecified Gaussian Processes. In ALT 2021.
Kevin Tran, Willie Neiswanger, Kirby Broderick, Eric Xing, Jeff Schneider, Zachary W Ulissi. Computational Catalyst Discovery: Active Classification through Myopic Multiscale Sampling. In Journal of Chemical Physics 2021.
Benedikt Boecking, Willie Neiswanger, Eric Xing, Artur Dubrawski. Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling. In ICLR 2021.
Colin White, Willie Neiswanger, Yash Savani. BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search. In AAAI 2021.
Yue Zhao, Ryan Rossi, Leman Akoglu. Automatic Unsupervised Outlier Model Selection. In NeurIPS 2021.
Kexin Huang*, Tianfan Fu*, Wenhao Gao*, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, Marinka Zitnik. Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development. In NeurIPS 2021.
Kwei-Herng Lai, Daochen Zha, Junjie Xu, Yue Zhao, Guanchu Wang, Xia Hu. Revisiting Time Series Outlier Detection: Definitions and Benchmarks. In NeurIPS 2021.
Yue Zhao*, Xiyang Hu*, Cheng Cheng, Cong Wang, Changlin Wan, Wen Wang, Jianing Yang, Haoping Bai, Zheng Li, Cao Xiao, Yunlong Wang, Zhi Qiao, Jimeng Sun, Leman Akoglu. SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection. In MLSys 2021.
Tiancheng Jin, Longbo Huang, and Haipeng Luo. The Best of Both Worlds: Stochastic and Adversarial Episodic MDPs with Unknown Transition. In NeurIPS 2021.
Haipeng Luo, Chen-Yu Wei, and Chung-Wei Lee. Policy Optimization in Adversarial MDPs: Improved Exploration via Dilated Bonuses. In NeurIPS 2021.
Liyu Chen, Mehdi Jafarnia-Jahromi, Rahul Jain, and Haipeng Luo. Implicit Finite-Horizon Approximation and Efficient Optimal Algorithms for Stochastic Shortest Path. In NeurIPS 2021.
Chung-Wei Lee, Christian Kroer, and Haipeng Luo. Last-iterate Convergence in Extensive-Form Games. In NeurIPS 2021.
Chen-Yu Wei and Haipeng Luo. Non-stationary Reinforcement Learning without Prior Knowledge: An Optimal Black-box Approach. In COLT 2021.
Liyu Chen, Haipeng Luo, and Chen-Yu Wei. Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications. In COLT 2021.
Liyu Chen, Haipeng Luo, and Chen-Yu Wei. Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition. In COLT 2021.
Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Last-iterate Convergence of Decentralized Optimistic Gradient Descent/Ascent in Infinite-horizon Competitive Markov Games. In COLT 2021.
Liyu Chen and Haipeng Luo. Finding the Stochastic Shortest Path with Low Regret: The Adversarial Cost and Unknown Transition Case. In ICML 2021.
Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang, and Xiaojin Zhang. Achieving Near Instance-Optimality and Minimax-Optimality in Stochastic and Adversarial Linear Bandits Simultaneously. In ICML 2021.
Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Linear Last-iterate Convergence in Constrained Saddle-point Optimization. In ICLR 2021.
Chen-Yu Wei, Mehdi Jafarnia-Jahromi, Haipeng Luo, and Rahul Jain. Learning Infinite-horizon Average-reward MDPs with Linear Function Approximation. In AISTATS 2021.
Yining Chen, Haipeng Luo, Tengyu Ma, and Chicheng Zhang. Active Online Learning with Hidden Shifting Domains. In AISTATS 2021.
Ehsan Emamjomeh-Zadeh, Chen-Yu Wei, Haipeng Luo, and David Kempe. Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds. In ALT 2021.
C. Fu, T. Chen, M. Qu, W. Jin, and X. Ren. Collaborative Policy Learning for Open Knowledge Graph Reasoning. In EMNLP 2019.
D. Deng, C. Shahabi, U. Demiryurek, L. Zhu, R. Yu and Y. Liu. Latent Space Model for Road Networks to Predict Time-Varying Traffic. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2016), 2016
R. Ge, J. D. Lee, and T. Ma. Matrix Completion has No Spurious Local Minima. In Advances in Neural Information Processing Systems (NIPS 2016), 2016
D. Cheng, R. Peng, I. Perros, and Y. Liu. SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling. In Advances in Neural Information Processing Systems (NIPS 2016), 2016
S. Gao, G. Ver Steeg, and A. Galstyan. Variational Information Maximization for Feature Selection. In Advances in Neural Information Processing Systems (NIPS 2016), 2016
X. He, K. Xu, D. Kempe and Y. Liu. Learning Influence Functions from Incomplete Observations. In Advances in Neural Information Processing Systems (NIPS), 2016
J. D. Lee, M. Simchowitz, M. I. Jordan, and B. Recht. Gradient Descent Converges to Minimizers. In Conference on Learning Theory (COLT 2016), 2016
I. Diakonikolas, D. Kane, A. Stewart. Optimal Learning via the Fourier Transform for Sums of Independent Integer Random Variables. In Conference on Learning Theory (COLT 2016), 2016
I. Diakonikolas, G. Kamath, D. Kane, J. Li, A. Moitra, A. Stewart. Robust Estimators in High Dimensions without the Computational Intractability. In IEEE Symposium on Foundations of Computer Science (FOCS 2016), 2016
I. Diakonikolas, D. Kane. A New Approach for Testing Properties of Discrete Distributions. In IEEE Symposium on Foundations of Computer Science (FOCS 2016), 2016
S. Tu, R. Boczar, M. Soltanolkotabi, and B. Recht. Low-rank Solutions of Linear Matrix Equations via Procrustes Flow. In International Conference on Machine Learning (ICML 2016), 2016
J. Acharya, I. Diakonikolas, J. Li, L. Schmidt. Fast Algorithms for Segmented Regression. In International Conference on Machine Learning (ICML 2016), 2016
R. Yu and Y. Liu. Learning from Multiway Data: Simple and Efficient Tensor Regression. In International Conference on Machine Learning (ICML 2016), 2016
Yoon Sik Cho, Emilio Ferrara, Greg Ver Steeg, and Aram Galstyan. Latent Space Models for Multimodal Social Data. In World Wide Web Conference (WWW 2016), 2016
Z. Che, D. Kale, W. Li, M. T. Bahadori, and Y. Liu. Deep Computational Phenotyping. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2015), 2015
I. Diakonikolas, M. Hardt, L. Schmidt. Differentially Private Learning of Structured Discrete Distributions. In Advances in Neural Information Processing Systems (NIPS 2015), 2015
M. Razaviyayn, F. Farnia, and D. Tse. Discrete Rényi classifiers. In Advances in Neural Information Processing Systems (NIPS 2015), 2015
S. Gao, G. Ver Steeg, and A. Galstyan. Efficient Estimation of Mutual Information for Strongly Dependent Variables. In Artificial Intelligence and Statistics Conference (AISTATS 2015), 2015
D. Cheng, Y. Cheng, Y. Liu, R. Peng, and S.-H. Teng. Efficient Sampling for Gaussian Graphical Models via Spectral Sparsification. In Conference on Learning Theory (COLT 15), 2015
R. Yu, D. Cheng and Y. Liu. Accelerated Online Low Rank Tensor Learning for Multivariate Spatiotemporal Streams. In International Conference on Machine Learning (ICML 2015), 2015
M. T. Bahadori, D. Kale, Y. Fan, and Y. Liu. Functional Subspace Clustering with Application to Time Series. In International Conference on Machine Learning (ICML 2015), 2015
S. Chan, I. Diakonikolas, R. Servedio, X. Sun. Near-Optimal Density Estimation in Near-Linear Time Using Variable-Width Histograms. In Advances in Neural Information Processing Systems (NIPS 2014), 2014
G. Ver Steeg and A. Galstyan. Discovering Structure in High-Dimensional Data Through Correlation Explanation. In Advances in Neural Information Processing Systems (NIPS 2014), 2014
S. Chan, I. Diakonikolas, R. Servedio, X. Sun. Efficient Density Estimation via Piecewise Polynomial Approximation. In Annual ACM Symposium on Theory of Computing (STOC 2014), 2014
X. He, T. Rekatsinas, J. Foulds, L. Getoor, Y. Liu. HawkesTopic: A Joint Model for Network Inference and Topic Modeling from Text-Based Cascades. In International Conference on Machine Learning (ICML 2015), 2015
Meng, C., Trinh, L., Xu, N., Enouen, J., Liu, Y. (2022). Interpretability and fairness evaluation of deep learning models on MIMIC-IV dataset. Scientific Reports.
Fan, Y., Demirkaya, E., Li, G. and Lv, J. (2019). RANK: large-scale inference with graphical nonlinear knockoffs. Journal of the American Statistical Association, to appear.
Qiao, X., Qian, C., James, G. and Guo, S. (2019) "Doubly Functional Graphical Models in High Dimensions", Biometrika (to appear).
James, G., Paulson, C. and Rusmevichientong, P. (2019) "Penalized and Constrained Optimization: An Application to High-Dimensional Website Advertising", Journal of the American Statistical Association (to appear). (R package available from CRAN.)
Fan, Y., Demirkaya, E. and Lv, J. (2019). Nonuniformity of p-values can occur early in diverging dimensions. Journal of Machine Learning Research 20, 1-33.
Qiao, X., Guo, S. and James, G. (2019) "Functional Graphical Models", Journal of the American Statistical Association 114, 211-222.
Uematsu, Y., Fan, Y., Chen, K., Lv, J. and Lin, W. (2019). SOFAR: large-scale association network learning. IEEE Transactions on Information Theory 65, 4924-4939.
Zheng, Z., Bahadori, M. T., Liu, Y. and Lv, J. (2019). Scalable interpretable multi-response regression via SEED. Journal of Machine Learning Research 20, 1-34.
Paulson, C., Luo, L. and James, G. (2018) "Efficient Large-Scale Internet Media Selection Optimization for Online Display Advertising", Journal of Marketing Research 55, 489-506. (There is also an online appendix, and an R package implementing the method is available on CRAN.)
Candès, E. J., Fan, Y., Janson, L. and Lv, J. (2018). Panning for gold: 'model-X' knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society Series B 80, 551-577.
James, G. (2018) "Statistics within Business in the Era of Big Data", Statistics and Probability Letters 136, 155-159.
Derenski, J., Fan, Y. and James, G. (2017) Discussion of "Random-projection ensemble classification" by Cannings and Samworth, Journal of the Royal Statistical Society, Series B 70, 895-896.
Jason D. Lee, Dennis L Sun, Yuekai Sun, and Jonathan Taylor. Exact Post-Selection Inference with the Lasso. In Annals of Statistics, 2016
M. Hong, M. Razaviyayn, Z.-Q. Luo, J.-S. Pang. A unified algorithmic framework for block-structured optimization involving big data: With applications in machine learning and signal processing. In IEEE Signal Processing Magazine, 2016
N. Nayyar, D. Kalathil and R. Jain. On regret-optimal learning in decentralized multi-armed bandits. In IEEE Trans. on Control of Networked Systems, 2016
Kong, Y., Zheng, Z. and Lv, J. The constrained Dantzig selector with enhanced consistency. In Journal of Machine Learning Research, 2016
Shang-Hua Teng. Scalable Algorithms for Data and Network Analysis. In Foundations and Trends in Theoretical Computer Science, 2016
S. Oymak, M. Soltanolkotabi, and B. Recht. Sharp Time-data tradeoffs for linear inverse problems. Submitted to IEEE Trans. on Information Theory, 2016
Minsker, S. Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries. Submitted to the Annals of Statistics, 2016
Fan, Y. and Lv, J. Innovated scalable efficient estimation in ultra-large Gaussian graphical models. In The Annals of Statistics, 2016
Minsker, S. Geometric median and robust estimation in Banach spaces. In Bernoulli, 2015
D. Kalathil, V. Borkar and R. Jain. Empirical Q-Value Iteration. Submitted to Stochastic Systems, 2015