Jennifer Neville, Purdue University, United States
Title: Collective Classification in Large-Scale Networks
Abstract: There has been growing interest in learning statistical models of network structure to understand key patterns and dependencies in complex systems. Relational machine learning methods focus on predicting node features based on network relationships, with the aim of exploiting the statistical correlations between linked nodes to improve predictions. For example, we can learn a model to predict the political views of users in an online social network based on the friendship relationships among users. Such 'collective classification' methods have been applied with great empirical success in many domains, from social to information to biological networks. However, many relational machine learning algorithms were developed under assumptions that may be violated in real-world applications. Specifically, much of the initial work focused on learning from a fully-labeled training graph, with the expectation that the learned model would be applied to a separate, disjoint (test) graph. In practice, however, the methods are often applied within a single, large-scale, partially-observed network--which can produce varying results depending on the characteristics of the data. In this tutorial, we will survey the relational models and algorithms that have been developed for collective classification. We will provide pointers to the recent literature to guide further study, and present results demonstrating the effectiveness of the techniques. In addition, we will discuss methodological choices that arise in different network settings, and outline how to overcome statistical biases--unique to network data--that can impede learning and inference.
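To make the idea concrete, below is a minimal sketch of one standard collective classification approach, the Iterative Classification Algorithm (ICA): a local classifier is trained on node attributes plus an aggregate of neighbors' labels, and predictions for unlabeled nodes are updated iteratively. The toy graph, features, and helper names are illustrative assumptions, not material from the tutorial.

```python
# Sketch of Iterative Classification (ICA) for within-network prediction.
# Toy data only: two noisy communities where linked nodes tend to share a class.
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

G = nx.planted_partition_graph(2, 50, p_in=0.15, p_out=0.02, seed=0)
labels = np.array([0] * 50 + [1] * 50)               # true class per node
feats = rng.normal(labels[:, None], 2.0, (100, 3))   # weak node attributes
observed = rng.random(100) < 0.3                     # 30% of nodes are labeled

def relational_features(pred):
    """Append the mean predicted label of each node's neighbors."""
    agg = np.array([pred[list(G.neighbors(v))].mean() if G.degree(v) else 0.5
                    for v in G.nodes()])
    return np.hstack([feats, agg[:, None]])

# Bootstrap: initialize unlabeled nodes from their attributes alone.
clf = LogisticRegression().fit(feats[observed], labels[observed])
pred = labels.astype(float)
pred[~observed] = clf.predict(feats[~observed])

# Collective inference: re-estimate unlabeled nodes until predictions stabilize,
# letting label correlations propagate along the network's edges.
for _ in range(10):
    X = relational_features(pred)
    clf = LogisticRegression().fit(X[observed], labels[observed])
    pred[~observed] = clf.predict(X[~observed])

acc = (pred[~observed] == labels[~observed]).mean()
print(f"accuracy on unlabeled nodes: {acc:.2f}")
```

Note how this sketch mirrors the single, partially-observed network setting the abstract highlights: training and inference happen on the same graph, with only a fraction of nodes labeled.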
Bio for Jennifer Neville: Jennifer Neville is the Miller Family Chair Associate Professor of Computer Science and Statistics at Purdue University. She received her PhD from the University of Massachusetts Amherst in 2006. In 2012, she was awarded an NSF CAREER Award; in 2008, she was chosen by IEEE as one of "AI's 10 to watch"; and in 2007, she was selected as a member of the DARPA Computer Science Study Group. Her research focuses on developing data mining and machine learning techniques for relational domains, including social, information, and physical networks.
Peter Richtárik, University of Edinburgh, UK
Title: Randomized methods for big data: from linear systems to optimization
Abstract: In this brief tutorial I will explain the key ideas behind a few recent advances in scalable algorithms for convex optimization, with a particular emphasis on minimizing the sum of a large number of functions. Problems of this form appear in many fields of science and technology, including data science, engineering, operations research, statistics and machine learning.

The development will start in an unusual place: a simple yet insightful theory of randomized iterative methods for linear systems. I will describe a single algorithm which, depending on the choice of two parameters, reduces to methods such as the randomized Kaczmarz method, randomized coordinate descent method, randomized Newton method and randomized Gaussian descent. The general algorithm admits a single theoretical analysis from which the best known bounds for the special cases mentioned above follow.

With the insights gained from the first part of the tutorial, we will be ready to move on to the problem of minimizing sums of functions. A new generation of super-efficient randomized methods for such problems emerged in the last 3 years. These algorithms are either primal in nature, belonging to the family of stochastic gradient descent methods with a variance-reduction technique, or dual in nature, belonging to the family of randomized coordinate descent methods. I will talk about one method belonging to each category, and point to close links between them. Finally, I will briefly describe how these methods can be parallelized and implemented in a distributed environment, forming the engine of the emerging field of big data optimization.
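As a concrete illustration of the tutorial's starting point, here is a minimal sketch of the randomized Kaczmarz method for a consistent linear system Ax = b, one of the special cases named above; the synthetic problem data and sampling choices are assumptions made for the example.

```python
# Randomized Kaczmarz for a consistent overdetermined system Ax = b.
# Each step projects the iterate onto the hyperplane defined by one row of A,
# with rows sampled with probability proportional to their squared norms.
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                          # consistent by construction

row_norms2 = (A ** 2).sum(axis=1)
probs = row_norms2 / row_norms2.sum()   # p_i ∝ ||a_i||^2

x = np.zeros(n)
for _ in range(5000):
    i = rng.choice(m, p=probs)
    a_i = A[i]
    # Project onto the hyperplane {x : <a_i, x> = b_i}.
    x += (b[i] - a_i @ x) / row_norms2[i] * a_i

print("residual:", np.linalg.norm(A @ x - b))
```

In the general framework the tutorial describes, this row-projection step is recovered from one particular choice of the method's two parameters; other choices yield randomized coordinate descent and the other special cases listed.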
Bio for Peter Richtárik:
Gunter Röth, Solution Architect, NVIDIA
Title: NVIDIA’s platform for Deep Neural Networks (DNN)
Abstract: In this session, we will present NVIDIA’s platform for Deep Neural Networks (DNNs). NVIDIA designs and produces the GPUs and processors behind many recent breakthroughs in Deep Learning. NVIDIA also develops cuDNN, a library of primitives for Deep Learning that is integrated into leading frameworks such as Theano, Torch and Caffe, as well as DIGITS, an interactive Deep Learning GPU Training System. During the session, you will learn more about these components and how they can make your DNNs more powerful.
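As a rough illustration of how cuDNN reaches users through the frameworks named above, the sketch below builds a small convolution in Theano (assuming a Theano installation; shapes and data are placeholders). When Theano is configured for a GPU with cuDNN installed, its graph optimizer can replace such operations with cuDNN primitives; the user-facing code does not change.

```python
# A minimal Theano convolution, a sketch under the assumption that Theano is
# installed. With a GPU configuration (e.g. THEANO_FLAGS=device=gpu) and cuDNN
# available, Theano can swap this op for a cuDNN convolution automatically;
# on CPU-only setups the same code still runs, just without cuDNN.
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

x = T.tensor4("x")   # (batch, channels, height, width)
w = T.tensor4("w")   # (filters, channels, height, width)
f = theano.function([x, w], conv2d(x, w))

xv = np.random.rand(1, 3, 32, 32).astype(theano.config.floatX)
wv = np.random.rand(8, 3, 5, 5).astype(theano.config.floatX)
print(f(xv, wv).shape)   # (1, 8, 28, 28) with the default 'valid' mode
```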
Bio for Gunter Röth: Gunter Röth joined NVIDIA as a Solution Architect in October last year, having previously worked at Cray, HP, Sun Microsystems and, most recently, BULL. He holds a Master's degree in geophysics from the Institut de Physique du Globe (IPG) in Paris and completed a PhD in seismology on the use of neural networks (artificial intelligence) for interpreting geophysical data.