(July 4, 8:30 AM - 12:00 PM)
Multi-label learning aims to build models that provide potentially multiple labels for each data point (unlike multi-class classification, which provides a single class label per instance). A growing number of applications in data science and machine learning involve the multi-label setting, including image and text classification, time-series forecasting, localization and tracking, missing-value imputation, recommender systems, and many other kinds of structured-output problems. There is a broad selection of multi-label learning methods, including approaches that leverage 'off-the-shelf' (classical, multi-class) models as well as deep neural network architectures. The first part of this tutorial is a lecture that introduces some of these methods and discusses particular cases of interest, such as weak/partial labels, questions of model interpretability and scalability, and the intersection with other areas of machine learning such as sequential, multi-task, and transfer learning. In the second part of the tutorial, we will take a hands-on approach with Python.
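As a minimal sketch of one 'off-the-shelf' approach (binary relevance: fitting one independent binary classifier per label), here is an illustrative scikit-learn example. The synthetic dataset and the choice of logistic regression as the base model are assumptions for illustration, not material drawn from the tutorial itself:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Synthetic multi-label data: each row of Y is a binary vector over 4 labels,
# so any instance may carry zero, one, or several labels simultaneously.
X, Y = make_multilabel_classification(
    n_samples=200, n_features=10, n_classes=4, random_state=0
)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Binary relevance: MultiOutputClassifier wraps a standard (multi-class
# capable) base model, fitting one copy per label column independently.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, Y_train)

# Predictions are a binary matrix of shape (n_test_samples, n_labels).
Y_pred = clf.predict(X_test)
print(Y_pred.shape)
```

A known limitation of binary relevance, often motivating the richer methods a lecture like this covers, is that it ignores dependencies among labels; classifier chains and neural architectures are common ways to model them.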