NVIDIA Distinguished Lecture Series in Machine Learning

Location: SAL 101

Visitor Info


Date Speaker Title
24 Jan 2017
Ruslan Salakhutdinov (CMU/Apple)
Time: 4:00-5:00pm

Abstract: In this talk, I will first introduce a broad class of unsupervised deep learning models and show that they can learn useful hierarchical representations from large volumes of high-dimensional data, with applications in information retrieval, object recognition, and speech perception. I will next introduce deep models that are capable of extracting a unified representation that fuses together multiple data modalities, and present the Reverse Annealed Importance Sampling Estimator (RAISE) for evaluating these deep generative models. Finally, I will discuss models that can generate natural language descriptions (captions) of images and generate images from captions using attention, as well as introduce multiplicative and fine-grained gating mechanisms with application to reading comprehension.

BIO: Ruslan Salakhutdinov received his PhD in computer science from the University of Toronto in 2009. After spending two post-doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Departments of Statistics and Computer Science. In 2016 he joined the Machine Learning Department at Carnegie Mellon University as an Associate Professor. Ruslan's primary interests lie in deep learning, machine learning, and large-scale optimization. He is an action editor of the Journal of Machine Learning Research and has served on the senior programme committees of several learning conferences, including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, a Microsoft Research Faculty Fellow, a Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, the Google Faculty Award, and NVIDIA's Pioneers of AI Award, and is a Senior Fellow of the Canadian Institute for Advanced Research.
05 Sep 2017
Ian Goodfellow (Google)
Abstract: Generative adversarial networks (GANs) are machine learning models that are able to imagine new data, such as images, given a set of training data. They solve difficult approximate probabilistic computations using game theory. A generator network competes to fool a discriminator network in a game whose Nash equilibrium corresponds to recovering the probability distribution that generated the training data. GANs open many possibilities for machine learning algorithms. Rather than associating input values in the training set with specific output values, GANs are able to learn to evaluate whether a particular output was one of many potential acceptable outputs or not.

BIO: Ian Goodfellow (PhD in machine learning, University of Montreal, 2014) is a research scientist at Google. His research interests include most deep learning topics, especially generative models and machine learning security and privacy. He invented generative adversarial networks, was an influential early researcher studying adversarial examples, and is the lead author of the MIT Press textbook Deep Learning (www.deeplearningbook.org). He runs the Self-Organizing Conference on Machine Learning, which was founded at OpenAI in 2016.
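The generator-versus-discriminator game described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical toy, not code from the talk: the "data" is a 1-D Gaussian N(3, 1), the generator is linear in its noise input, the discriminator is a logistic classifier, and both play alternating gradient steps (the generator uses the common non-saturating variant of its loss).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy setup (illustrative only): real data ~ N(3, 1),
# generator G(z) = wg*z + bg, discriminator D(x) = sigmoid(wd*x + bd).
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr, batch = 0.05, 128

for step in range(2000):
    x_real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = wg * z + bg

    # Discriminator: gradient ascent on log D(x_real) + log(1 - D(G(z))).
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    wd += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    bd += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(G(z)) (non-saturating loss).
    d_fake = sigmoid(wd * (wg * z + bg) + bd)
    g_x = (1 - d_fake) * wd          # d log D(x_fake) / d x_fake
    wg += lr * np.mean(g_x * z)
    bg += lr * np.mean(g_x)

# The generator's offset bg should drift toward the real mean of 3.
print(f"learned generator: x = {wg:.2f}*z + {bg:.2f}  (real data: mean 3, std 1)")
```

At the game's equilibrium the discriminator outputs 0.5 everywhere, which is exactly the "recovering the data distribution" property the abstract mentions.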
14 Nov 2017
Robert Schapire (Microsoft Research – NYC)
Abstract: We consider how to learn through experience to make intelligent decisions. In the generic setting, called the contextual bandits problem, the learner must repeatedly decide which action to take in response to an observed context, and is then permitted to observe the received reward, but only for the chosen action. The goal is to learn to behave nearly as well as the best policy (or decision rule) in some possibly very large and rich space of candidate policies. This talk will describe progress on developing general methods for this problem and some of its variants.

BIO: Robert Schapire is a Principal Researcher at Microsoft Research in New York City. He received his PhD from MIT in 1991. After a short post-doc at Harvard, he joined the technical staff at AT&T Labs (formerly AT&T Bell Laboratories) in 1991. In 2002, he became a Professor of Computer Science at Princeton University. He joined Microsoft Research in 2014. His awards include the 1991 ACM Doctoral Dissertation Award, the 2003 Gödel Prize, and the 2004 Paris Kanellakis Theory and Practice Award (both of the latter shared with Yoav Freund). He is a fellow of the AAAI, and a member of both the National Academy of Engineering and the National Academy of Sciences. His main research interest is in theoretical and applied machine learning, with particular focus on boosting, online learning, game theory, and maximum entropy.
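The interaction loop in the abstract — observe a context, choose an action, see the reward only for that action — can be sketched with a simple epsilon-greedy baseline. This is an illustrative toy, not the methods from the talk: a hypothetical simulator where each action's expected reward is linear in the context, and per-action ridge-regression estimates fit online.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim, eps, rounds = 3, 5, 0.1, 5000

# Hypothetical simulator: expected reward of each action is linear
# in the observed context, plus Gaussian noise.
true_theta = rng.normal(size=(n_actions, dim))

# Per-action ridge-regression statistics for the reward estimates.
A = np.stack([np.eye(dim) for _ in range(n_actions)])  # X^T X + I, per action
b = np.zeros((n_actions, dim))                          # X^T r, per action

total_reward = 0.0
for t in range(rounds):
    x = rng.normal(size=dim)               # observe a context
    theta_hat = np.linalg.solve(A, b)      # current reward estimates
    if rng.random() < eps:
        a = int(rng.integers(n_actions))   # explore uniformly
    else:
        a = int(np.argmax(theta_hat @ x))  # exploit the greedy policy
    # Key bandit constraint: reward is observed ONLY for the chosen action a.
    r = true_theta[a] @ x + 0.1 * rng.normal()
    A[a] += np.outer(x, x)
    b[a] += r * x
    total_reward += r

print(f"average reward per round: {total_reward / rounds:.3f}")
```

The partial feedback is what makes the problem hard: unlike supervised learning, the learner never sees what the unchosen actions would have paid, so it must deliberately explore to compare policies.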
05 Feb 2018
Harry Shum (Microsoft)

Bill Dally (NVIDIA)