Syllabus
Mandatory introductory courses
A review of probability theory foundations
Lecturer :
Total hours : 15
Overview :
- Random variables, expectations, laws, independence
- Inequalities and limit theorems, uniform integrability
- Conditioning, Gaussian random vectors
- Bounded variation and Lebesgue-Stieltjes integral
- Stochastic processes, stopping times, martingales
- Brownian motion: martingales, trajectories, construction
- Wiener stochastic integral and Cameron-Martin formula.
Core course units (UE fondamentales)
Bayesian statistics
Data Science Lab
Ects : 4
Lecturer :
Total hours : 24
Overview :
Students enrolled in this class will form groups and choose one topic among a list of proposed topics in the core areas of the master such as supervised or unsupervised learning, recommendation, game AI, distributed or parallel data-science, etc. The topics will generally consist in applying a well-established technique on a novel data-science challenge or in applying recent research results on a classical data-science challenge. Either way, each topic will come with its own novel scientific challenge to address. At the end of the module, the students will give an oral presentation to demonstrate their methodology and their findings. Strong scientific rigor as well as very good engineering and communication skills will be necessary to complete this module successfully.
Learning outcomes :
The goal of this module is to provide students with a hands-on experience on a novel data-science/AI challenge using state-of-the-art tools and techniques discussed during other classes of this master.
Foundations of Machine Learning
Ects : 4
Lecturer :
- FRANCIS BACH
Total hours : 24
Overview :
The course will introduce the theoretical foundations of machine learning, review the most successful algorithms with their theoretical guarantees, and discuss their application in real world problems. The covered topics are:
Part 1: Supervised Learning Theory: the batch setting
- Intro
- Surrogate Losses
- Uniform Convergence and PAC Learning
- Empirical Risk Minimization and ill-posed problems
- Concentration Inequalities
- Universal consistency, PAC Learnability
- VC dimension
- Rademacher complexity (see the definitions after this list)
- Non Uniform Learning and Model Selection
- Bias-variance tradeoff
- Structural Risk Minimization Principle and Minimum Description Length Principle
- Regularization
Part 2: Supervised Learning Theory and Algorithms in the Online Setting
- Foundations of Online Learning
- Beyond the Perceptron algorithm
Part 3: Ensemble Methods and Kernel Methods
- SVMs, Kernels
- Kernel approximation algorithms in the primal
- Ensemble methods: bagging, boosting, gradient boosting, random forests
Part 4: Algorithms for Unsupervised Learning
- Dimensionality reduction: PCA, ICA, Kernel PCA, ISOMAP, LLE
- Representation Learning
- Expectation Maximization, Latent models and Variational methods
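For concreteness, here is a hedged statement of two of the central Part 1 notions above, empirical risk minimization and Rademacher complexity (standard textbook formulations, with notation chosen here rather than taken from the course):

```latex
% Empirical risk minimization over a hypothesis class H, on an n-sample:
\hat{f} \in \operatorname*{arg\,min}_{f \in \mathcal{H}} \hat{R}_n(f),
\qquad
\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr).

% Empirical Rademacher complexity of H on the sample (x_1, ..., x_n),
% with i.i.d. signs \sigma_i uniform on {-1, +1}:
\hat{\mathfrak{R}}_n(\mathcal{H})
  = \mathbb{E}_{\sigma}\!\left[ \sup_{f \in \mathcal{H}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \right].
```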
Recommended prerequisites :
- Linear models
Required prerequisites :
- Linear Algebra - Statistics and Probability
Learning outcomes :
The aim of this course is to provide the students with the fundamental concepts and tools for developing and analyzing machine learning algorithms.
Assessment :
- Each student will act as scribe for one lecture, taking notes during the class and sending them to the teacher as a PDF.
- Final exam
Bibliography-recommended reading
The most important book:
- Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.
Also:
- Mohri, M., Rostamizadeh, A., & Talwalkar, A. (2012). Foundations of Machine Learning. MIT Press.
- Vapnik, V. (2013). The Nature of Statistical Learning Theory. Springer Science & Business Media.
- Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer.
- Friedman, J., Hastie, T., & Tibshirani, R. (2001). The Elements of Statistical Learning (Vol. 1, No. 10). New York, NY: Springer Series in Statistics.
- James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to Statistical Learning (Vol. 112). New York: Springer.
High-dimensional statistics
Optimal transport
Ects : 4
Lecturer :
- GABRIEL PEYRE
Total hours : 24
Overview :
Optimal transport (OT) is a fundamental mathematical theory at the interface between optimization, partial differential equations and probability. It has recently emerged as an important tool to tackle a surprisingly large range of problems in data sciences, such as shape registration in medical imaging, structured prediction problems in supervised learning and training deep generative networks. This course will interleave the description of the mathematical theory with the recent developments of scalable numerical solvers. This will highlight the importance of recent advances in regularized approaches for OT which allow one to tackle high dimensional learning problems.
The course will feature numerical sessions using Python.
- Motivations, basics of probabilistic modeling and matching problems.
- Monge problem, 1D case, Gaussian distributions.
- Kantorovitch formulation, linear programming, metric properties.
- Schrödinger problem, Sinkhorn algorithm (see the sketch after this list).
- Duality and c-transforms, Brenier’s theory, W1, generative modeling.
- Semi-discrete OT, quantization, Sinkhorn dual and divergences
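As a taste of the Sinkhorn algorithm listed above, here is a minimal NumPy sketch of entropic-regularized OT between two histograms (a standard textbook scheme; the function name, grid, and regularization value are illustrative assumptions, not the course's code):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropic OT between histograms a, b with cost matrix C: returns the coupling P."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):              # alternating scaling (Sinkhorn iterations)
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # P = diag(u) K diag(v)

# Usage: transport a uniform histogram onto a peaked one on a 1D grid
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
a = np.ones(50) / 50
b = np.exp(-((x - 0.7) ** 2) / 0.01); b /= b.sum()
P = sinkhorn(a, b, C)
print(P.sum(), (C * P).sum())            # total mass (~1) and transport cost
```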
Optimization for Machine Learning
Ects : 4
Lecturer :
Total hours : 24
Overview :
Optimization has long been a fundamental component for modeling and solving classical machine learning problems such as linear regression and SVM classification. It also plays a key role in the training of neural networks, thanks to the development of efficient numerical tools tailored to deep learning. This course is concerned with developing optimization algorithms for learning tasks, and will consist of both lectures and hands-on sessions in Python. The course will begin with an introduction to the various problem formulations arising in machine and deep learning, together with a refresher on key mathematical concepts (linear algebra, convexity, smoothness). The course will then describe the main algorithms for optimization in data science (gradient descent, stochastic gradient) and their theoretical properties. Finally, the course will focus on the challenges posed by implementing these methods in a deep learning and large-scale environment (automatic differentiation, distributed calculations, regularization).
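As a flavor of the algorithms covered, here is a minimal sketch of stochastic gradient descent on a least-squares objective (illustrative only; the function name, step size, and data are assumptions, not course material):

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=20, seed=0):
    """Plain SGD on the objective (1/2n) * ||X w - y||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):            # one pass over shuffled samples
            w -= lr * (X[i] @ w - y[i]) * X[i]  # gradient of the i-th term
    return w

# Usage on synthetic data: the iterates should approach w_true
X = np.random.default_rng(1).normal(size=(200, 5))
w_true = np.arange(1.0, 6.0)
print(sgd_least_squares(X, X @ w_true))
```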
Learning outcomes :
- Understand the nature and structure of optimization problems arising in machine learning.
- Select an algorithm tailored to solving a particular instance among those seen in class based on theoretical and practical concerns.
- Experience the practical challenges in implementing an optimization scheme in a learning setting.
Bibliography-recommended reading
- L. Bottou, F. E. Curtis and J. Nocedal. Optimization Methods for Large-Scale Machine Learning. SIAM Review, 2018.
- S. J. Wright and B. Recht. Optimization for Data Analysis. Cambridge University Press, 2022.
Reinforcement learning
Ects : 4
Lecturer :
- OLIVIER CAPPE
Total hours : 24
Overview :
- Models: Markov decision processes (MDP), multiarmed bandits and other models
- Planning: finite and infinite horizon problems, the value function, Bellman equations, dynamic programming, value and policy iteration
- Basic learning tools: Monte Carlo methods, temporal-difference learning, policy gradient
- Probabilistic and statistical tools for RL: Bayesian approach, relative entropy and hypothesis testing, concentration inequalities
- Optimal exploration in multiarmed bandits: the explore vs exploit tradeoff, lower bounds, the UCB algorithm (see the sketch after this list), Thompson sampling
- Extensions: Contextual bandits, optimal exploration for MDP
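A minimal sketch of the UCB algorithm on Bernoulli arms, referenced in the list above (a standard UCB1 variant; arm means, horizon, and the regret bookkeeping are illustrative assumptions):

```python
import numpy as np

def ucb1(means, horizon=10_000, seed=0):
    """UCB1: pull each arm once, then maximize mean_k + sqrt(2 ln t / n_k).
    Returns the empirical regret against always playing the best arm."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                  # initialization: one pull per arm
        else:
            index = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(index))
        sums[arm] += rng.binomial(1, means[arm])
        counts[arm] += 1
    return horizon * max(means) - sums.sum()

print(ucb1([0.3, 0.5, 0.6]))  # regret should grow only logarithmically in the horizon
```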
Learning outcomes :
Reinforcement Learning (RL) refers to scenarios where the learning algorithm operates in closed-loop, simultaneously using past data to adjust its decisions and taking actions that will influence future observations. Algorithms based on RL concepts are now commonly used in programmatic marketing on the web, robotics or in computer game playing. All models for RL share a common concern that in order to attain one's long-term optimality goals, it is necessary to reach a proper balance between exploration (discovery of yet uncertain behaviors) and exploitation (focusing on the actions that have produced the most relevant results so far).
The methods used in RL draw ideas from control, statistics and machine learning. This introductory course will provide the main methodological building blocks of RL, focusing on probabilistic methods in the case where both the set of possible actions and the state space of the system are finite. Some basic notions in probability theory are required to follow the course. The course will involve some work on simple implementations of the algorithms, assuming familiarity with Python.
Assessment :
- Individual homework (in Python)
- Final exam
Bibliography-recommended reading
- M. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 1994.
- R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
- C. Szepesvari. Algorithms for Reinforcement Learning. Morgan & Claypool Publishers, 2010.
- T. Lattimore and C. Szepesvari. Bandit Algorithms. Cambridge University Press. 2019.
Optional course units (UE optionnelles, choose 5)
Advanced machine learning
Ects : 4
Lecturer :
Total hours : 24
Overview :
This research-oriented module will focus on advanced machine learning algorithms, in particular in the Bayesian setting.
1) Bayesian Machine Learning (with Moez Draief, chief data scientist, CapGemini)
- Bayesian linear regression
- Gaussian processes (i.e., kernelized Bayesian linear regression)
- Approximate Bayesian inference
- Latent Dirichlet allocation
2) Bayesian Deep Learning (with Julyan Arbel, CR INRIA)
- MCMC methods
- Variational methods
3) Advanced Recommendation Techniques (with Clement Calauzene, Criteo)
Learning outcomes :
Probabilistic and Bayesian ML, and recommendation systems
Assessment :
- Each student will present a research paper
Bayesian case studies
Bayesian machine learning
Ects : 4
Lecturer :
Total hours : 24
Overview :
Bayesian Nonparametrics:
- Introduction
- The Dirichlet Process (see the sketch after this list)
- Infinite Mixture models
- Posterior Sampling
- Models beyond the Dirichlet Process
- Gaussian Processes
- Selected applications
Bayesian Deep Learning
- Why do we want parameter uncertainty?
- Priors for Bayesian neural networks
- Posterior inference
- Martingale Posteriors and generalised Bayesian Inference
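As an illustration of the Dirichlet process listed above, here is a hedged sketch of its stick-breaking construction, truncated for simulation (the function and parameter names are assumptions, not the course's notation):

```python
import numpy as np

def stick_breaking(alpha, base_sampler, truncation=1000, seed=0):
    """Approximate draw from DP(alpha, G0) via stick-breaking:
    w_k = beta_k * prod_{j<k} (1 - beta_j), beta_k ~ Beta(1, alpha),
    with atoms theta_k ~ G0 drawn by `base_sampler`."""
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    atoms = base_sampler(rng, truncation)
    return weights, atoms

# Usage: base measure G0 = N(0, 1); larger alpha spreads mass over more atoms
w, theta = stick_breaking(alpha=5.0, base_sampler=lambda r, k: r.normal(size=k))
print(w[:5], w.sum())  # weights sum to ~1 at a large truncation level
```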
Required prerequisites :
- Bayesian statistics
- Markov Chain Monte Carlo
Learning outcomes :
Essentials of Bayesian Nonparametrics, main concepts for Bayesian Deep Learning
Assessment :
Final exam and homework
Bibliography-recommended reading
- Hjort NL, Holmes C, Müller P, Walker SG, editors. Bayesian nonparametrics. Cambridge University Press; 2010 Apr 12.
- Ghosal S, Van der Vaart AW. Fundamentals of nonparametric Bayesian inference. Cambridge University Press; 2017 Jun 26.
- Williams CK, Rasmussen CE. Gaussian processes for machine learning. Cambridge, MA: MIT press; 2006.
- Many references at www.gatsby.ucl.ac.uk/~porbanz/npb-tutorial.html
- Murphy KP. Probabilistic machine learning: Advanced topics. MIT press; 2023 Aug 15.
- Fong E, Holmes C, Walker SG. Martingale posterior distributions. Journal of the Royal Statistical Society Series B: Statistical Methodology. 2023 Nov;85(5):1357-91.
Computational social choice
Ects : 4
Lecturer :
Total hours : 24
Overview :
The aim of this course is to give an overview of the problems, techniques and applications of computational social choice, a multidisciplinary topic at the crossroads of computer science (especially artificial intelligence, operations research, theoretical computer science, multi-agent systems, computational logic, web science) and economics. The course consists of the analysis of problems arising from the aggregation of preferences of a group of agents from a computational perspective. On the one hand, it is concerned with the application of techniques developed in computer science, such as complexity analysis or algorithm design, to the study of social choice mechanisms, such as voting procedures or fair division algorithms. On the other hand, computational social choice is concerned with importing concepts from social choice theory into computing. The course will focus on normative aspects, computational aspects, and real-world applications (including some case studies).
Program:
1. Introduction to social choice and computational social choice.
2. Preference aggregation, Arrow's theorem and how to escape it.
3. Voting rules: informational basis and normative aspects.
4. Voting rules: computation. Voting on combinatorial domains.
5. Strategic issues: strategyproofness, Gibbard and Satterthwaite's theorem, computational resistance to manipulation, other forms of strategic behaviour.
6. Multiwinner elections. Public decision making and participatory budgeting.
7. Communication issues in voting: voting with incomplete preferences, elicitation protocols, communication complexity, low-communication social choice.
8. Fair division.
9. Matching under preferences.
10. Specific applications and case studies (varying every year): rent division, kidney exchange, school assignment, group recommendation systems, etc.
Recommended prerequisites :
Prerequisite-free. Basics of discrete mathematics (especially graph theory) and algorithmics are a plus.
Required prerequisites :
none
Learning outcomes :
N/S
Assessment :
Written exam by default.
Bibliography-recommended reading
References:
* Handbook of Computational Social Choice (F. Brandt, V. Conitzer, U. Endriss, J. Lang, A. Procaccia, eds.), Cambridge University Press, 2016. Available for free online.
* Trends in Computational Social Choice (U. Endriss, ed.), 2017. Available for free online.
Computational statistics methods and MCMC
Dimension reduction and manifold learning
Ects : 4
Total hours : 24
Overview :
Modern machine learning typically deals with high-dimensional data. The fields concerned are very varied and include genomics, imaging, text, time series, and even socioeconomic data, where more and more unstructured features are routinely collected. As a counterpart to this trend towards exhaustiveness, understanding these data raises challenges in terms of computational resources and human understandability. Manifold learning refers to a family of methods aiming at reducing the dimension of data while preserving some of its geometric and structural characteristics. It is widely used in machine learning and experimental science to compress, visualize and interpret high-dimensional data. This course will provide a global overview of the methodology of the field, while focusing on the mathematical aspects underlying the techniques used in practice.
Require prerequisites :
Linear algebra, basic probability theory, statistics, Python coding
Learning outcomes :
- Curse of dimensionality, manifold hypothesis and intrinsic dimension(s)
- Multidimensional scaling
- Linear dimension reduction (random projections, principal component analysis; see the sketch after this list)
- Non-linear spectral methods (kernel PCA, ISOMAP, MVU, Laplacian eigenmaps)
- Ad-hoc distance-preserving methods (diffusion maps, LLE)
- Probabilistic dimension reduction and clustering (SNE, UMAP)
- Neural network-based dimensionality reduction
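To make the linear case concrete, here is a minimal PCA sketch via the SVD (an illustrative implementation, not the course's code):

```python
import numpy as np

def pca(X, n_components=2):
    """Project X on its top principal directions (PCA via SVD of the centered data)."""
    Xc = X - X.mean(axis=0)                   # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # coordinates in the principal subspace

# Usage: 100 points in R^10 that mostly live on a 2-dimensional subspace
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))
X = Z @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(100, 10))
print(pca(X).shape)                           # (100, 2)
```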
Bibliography-recommended reading
- Ghojogh, B., M. Crowley, F. Karray, and A. Ghodsi (2023). Elements of Dimensionality Reduction and Manifold Learning.
- Lee, J. A., M. Verleysen, et al. (2007). Nonlinear Dimensionality Reduction.
Data acquisition, extraction and storage
Ects : 4
Lecturer :
Total hours : 24
Overview :
The objective of this course is to present the principles and techniques used to acquire, extract, integrate, clean, preprocess, store, and query datasets that may then be used as input data to train various artificial intelligence models. The course will consist of a mix of lectures and practical sessions. We will cover the following aspects:
- Web data acquisition (Web crawling, Web APIs, open data, legal issues)
- Information extraction from semi-structured data
- Data cleaning and data deduplication
- Data formats and data models
- Storing and processing data in databases, in main memory, or in plain files
- Introduction to large-scale data processing with MapReduce and Spark
- Ontology-based data access
Require prerequisites :
Basics of computer science and computer engineering (algorithms, databases, programming, logic, complexity).
Learning outcomes :
Understanding:
- how to acquire data from a variety of sources and in a variety of formats
- how to extract structured data from unstructured or semi-structured data
- how to format, integrate, clean data sets
- how to store and access data sets
Assessment :
Project (50% of the grade) and in-class written assessment (50% of the grade)
Deep learning for image analysis
Ects : 4
Lecturer :
- Etienne DECENCIERE
Total hours : 24
Overview :
Deep learning has achieved formidable results in the image analysis field in recent years, in many cases exceeding human performance. This success opens paths for new applications, entrepreneurship and research, while making the field very competitive.
This course aims at providing the students with the theoretical and practical basis for understanding and using deep learning for image analysis applications.
Program
The course will be composed of lectures and practical sessions. Moreover, experts from industry will present practical applications of deep learning. Lectures will include:
- Artificial neural networks, back-propagation algorithm
- Convolutional neural networks
- Design and optimization of a neural architecture
- Analysis of neural network function
- Image classification and segmentation
- Auto-encoders and generative networks
- Transformers
- Current research trends and perspectives
During the practical sessions, the students will code in Python, using Keras or Pytorch. They will be confronted with the practical problems linked to deep learning: architecture design; optimization schemes and hyper-parameter selection; analysis of results.
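For orientation, here is a minimal PyTorch sketch of the kind of convolutional classifier manipulated in the practical sessions (layer sizes and input shape are illustrative assumptions, not the course's architecture):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A small convolutional classifier for 28x28 grayscale images."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
x = torch.randn(8, 1, 28, 28)                     # a dummy mini-batch
print(model(x).shape)                             # torch.Size([8, 10])
```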
Require prerequisites :
- Linear algebra, basic probability and statistics
- Python
Learning outcomes :
Deep learning for image analysis: theoretical foundations and applications
Assessment :
Practical session and exam
Graph analytics
Ects : 4
Lecturer :
Total hours : 24
Overview :
The objective of this course is to give students an overview of the field of graph analytics. Since graphs form a complex and expressive data type, we need methods for representing graphs in databases, manipulating, querying, analyzing and mining them. Moreover, graph applications are very diverse and need specific algorithms. The course presents new ways to model, store, retrieve, mine and analyze graph-structured data and some examples of applications. Lab sessions are included allowing students to practice graph analytics: modeling a problem into a graph database and performing analytical tasks over the graph in a scalable manner.
Program
- Graph analytics: network properties and models; link analysis (PageRank and its variants; see the sketch below); community detection
- Frameworks for parallel graph analytics: Pregel, a model for parallel graph computing; GraphX on Spark, unifying graph-parallel and data-parallel computing
- Machine learning with graphs
- Applications: process mining and analysis
Practical work: graph analytics with GraphX and Neo4j
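As an illustration of the link-analysis part, here is a hedged sketch of PageRank by power iteration on a small adjacency matrix (the damping value and function name are standard but assumed choices, not the course's code):

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    """PageRank scores for a 0/1 adjacency matrix A (A[i, j] = 1 iff edge i -> j)."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.divide(A, out, out=np.zeros_like(A), where=out > 0)  # row-stochastic
    P[out.ravel() == 0] = 1.0 / n             # dangling nodes jump uniformly
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - d) / n + d * (P.T @ r)  # damped power iteration
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Usage on a 4-node toy graph
A = np.array([[0, 1, 1, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 1, 0]], float)
print(pagerank(A).round(3))
```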
Learning outcomes :
Modeling a problem into a graph model and performing analytical tasks over the graph in a scalable manner.
Bibliography-recommended reading
References
Ian Robinson, Jim Webber, Emil Eifrem, Graph Databases, O'Reilly (June 4, 2013), ISBN-10: 1449356265
Eric Redmond, Jim R. Wilson, Seven Databases in Seven Weeks - A Guide to Modern Databases and the NoSQL Movement, Publisher: Pragmatic Bookshelf
Grzegorz Malewicz, Matthew H. Austern, Aart J.C Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. 2010. Pregel: a system for large-scale graph processing, SIGMOD '10, ACM, New York, NY, USA, 135-146
Xin, Reynold & Crankshaw, Daniel & Dave, Ankur & Gonzalez, Joseph & J. Franklin, Michael & Stoica, Ion. (2014). GraphX: Unifying Data-Parallel and Graph-Parallel Analytics.
Michael S. Malak and Robin East, Spark GraphX in Action, Manning, June 2016
Incremental learning, game theory and applications
Ects : 4
Lecturer :
Total hours : 24
Overview :
This course will focus on the behavior of learning algorithms when several agents are competing against one another: specifically, what happens when an agent that follows an online learning algorithm interacts with another agent doing the same? The natural language in which to frame such questions is that of game theory, and the course will begin with a short introduction to the topic, including normal form games (in particular zero-sum, potential, and stable games), solution concepts (such as dominated/rationalizable strategies, Nash, correlated and coarse correlated equilibrium notions, ESS), and some extensions (Blackwell approachability). Subsequently, we will examine the long-term behavior of a wide variety of online learning algorithms (fictitious play, regret matching, multiplicative/exponential weights, mirror descent and its variants, etc.), and we will discuss applications to generative adversarial networks (GANs), traffic routing, prediction, and online auctions.
[1] Nicolò Cesa-Bianchi and Gábor Lugosi, Prediction, Learning, and Games, Cambridge University Press, 2006.
[2] Drew Fudenberg and David K. Levine, The Theory of Learning in Games, Economic Learning and Social Evolution, vol. 2, MIT Press, Cambridge, MA, 1998.
[3] Sergiu Hart and Andreu Mas-Colell, Simple Adaptive Strategies: From Regret Matching to Uncoupled Dynamics, World Scientific Series in Economic Theory, vol. 4, World Scientific Publishing, 2013.
[4] Vianney Perchet, Approachability, regret and calibration: implications and equivalences, Journal of Dynamics and Games 1 (2014), no. 2, 181-254.
[5] Shai Shalev-Shwartz, Online learning and online convex optimization, Foundations and Trends in Machine Learning 4 (2011), no. 2, 107-194.
Learning outcomes :
Learning procedures when several agents are playing against one another
Introduction to causal inference
Ects : 4
Lecturer :
Total hours : 24
Overview :
This course provides an introduction to causal inference. It covers both the Neyman–Rubin potential outcomes framework and Pearl’s do-calculus. The former is used to introduce the fundamental problem of causal inference and the notion of counterfactuals. The core hypotheses needed for causal identification of average treatment effects are presented: (conditional) exchangeability, positivity, and consistency. Estimation based on generalised linear models and on machine learning approaches is explored, including the double-machine learning approach.
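As a taste of these estimators, here is a hedged sketch of inverse propensity weighting for the average treatment effect, valid only under the identification assumptions above (the function name and simulated data are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, t, y):
    """IPW estimate of the ATE: X confounders, t binary treatment, y outcome."""
    e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]  # propensity scores
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

# Usage on synthetic data where the true effect is +2
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment depends on a confounder
y = 2 * t + X[:, 0] + rng.normal(size=5000)
print(ipw_ate(X, t, y))                          # close to 2
```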
The second part of the course covers Pearl’s do-calculus. The course introduces graphical models, with a focus on directed models, followed by structural causal models. The simple Markovian case is used to link this framework to the potential outcomes one and to derive classical techniques such as the back-door criterion. The semi-Markovian case is then explored as the general way of representing causal hypotheses in the presence of unobserved confounding variables. Identification is revisited in the light of the do-calculus and of the IDC algorithm.
The final part of the course reviews causal discovery algorithms and open research questions.
Learning outcomes :
This course is an introduction to causal inference with a strong emphasis on the use of graphical models. After the course, the students should be able
- to apply consistent average treatment effect estimation procedures
- to turn causal hypotheses into structural causal models
- to analyse graphical models to determine independence structures
- to use do-calculus and the IDC algorithm to identify causal estimands
Knowledge graphs, description logics, reasoning on data
Ects : 4
Lecturer :
- Michaël THOMAZO
Total hours : 24
Overview :
Introduction to Knowledge Graphs, Description Logics and Reasoning on Data. Knowledge graphs are a flexible tool to represent knowledge about the real world. After presenting some of the existing knowledge graphs (such as DBPedia, Wikidata or Yago), we focus on their interaction with semantics, which is formalized through the use of so-called ontologies. We then present some central logical formalisms used to express ontologies, such as Description Logics and Existential Rules. A large part of the course will be devoted to studying the associated reasoning tasks, with a particular focus on querying a knowledge graph through an ontology. Both theoretical aspects (such as the tradeoff between the expressivity of the ontology language and the complexity of the reasoning tasks) and practical ones (efficient algorithms) will be considered.
Program:
1. Knowledge Graphs (history and uses)
2. Ontology Languages (Description Logics, Existential Rules)
3. Reasoning Tasks (consistency, classification, ontological query answering)
4. Ontological Query Answering (forward and backward chaining, decidability and complexity, algorithms, advanced topics)
References:
- The Description Logic Handbook: Theory, Implementation, and Applications. Baader et al., Cambridge University Press.
- Foundations of Semantic Web Technologies. Hitzler et al., Chapman & Hall/CRC.
- Web Data Management. Abiteboul et al., Cambridge University Press.
Prerequisites:
- first-order logic;
- complexity (Turing machines, classical complexity classes) is a plus.
Large language models
Ects : 4
Lecturer :
Total hours : 24
Overview :
Natural language processing (NLP) is today present in so many applications because people communicate almost everything in language: posts on social media, web searches, advertisements, emails and SMS, customer service exchanges, language translation, etc. While NLP heavily relies on machine learning approaches and the use of large corpora, the peculiarities and diversity of language data imply dedicated models to efficiently process linguistic information and the underlying computational properties of natural languages.
Moreover, NLP is a fast evolving domain, in which cutting-edge research can nowadays be introduced in large scale applications in a couple of years.
The course focuses on modern and statistical approaches to NLP: using large corpora, statistical models for acquisition, disambiguation, parsing, understanding and translation. An important part will be dedicated to deep-learning models for NLP.
- Introduction to NLP, the main tasks, issues and peculiarities
- Sequence tagging: models and applications
- Computational semantics
- Syntax and parsing
- Deep learning for NLP: introduction and basics
- Deep learning for NLP: advanced architectures
- Deep learning for NLP: machine translation, a case study
Recommended prerequisites :
pytorch
Learning outcomes :
- Skills in Natural Language Processing using deep-learning
- Understand new architectures
Bibliography-recommended reading
References
- Costa-jussà, M. R., Allauzen, A., Barrault, L., Cho, K., & Schwenk, H. (2017). Introduction to the special issue on deep learning approaches for machine translation. Computer Speech & Language, 46, 367-373.
- Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd ed. draft): web.stanford.edu/~jurafsky/slp3/
- Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing: u.cs.biu.ac.il/~yogo/nnlp.pdf
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning: www.deeplearningbook.org
LLM for code and proof
Ects : 4
Total hours : 24
Machine learning on Big Data
Ects : 4
Lecturer :
Total hours : 24
Overview :
Nowadays there is an ever-increasing demand for machine learning algorithms that scale to massive data sets. In this context, this course focuses on the typical, fundamental aspects that need to be dealt with in the design of machine learning algorithms that can be executed in a distributed fashion, typically on Hadoop clusters, in order to deal with big data sets, while taking into account scalability and robustness. The course will first cover a set of mainstream, sequential machine learning algorithms, and then consider the following crucial and complex aspects. The first is the redesign of algorithms by relying on programming paradigms for distribution and parallelism based on map-reduce (e.g., Spark, Flink, etc.). The second is the experimental analysis of the map-reduce based implementations of the designed algorithms, in order to test their scalability and precision. The third concerns the study and application of optimisation techniques to overcome a lack of scalability and to improve the execution time of the designed algorithms.
The attention will be on machine learning techniques for dimension reduction, clustering and classification, whose underlying implementation techniques are transversal and find application in a wide range of other machine learning algorithms. For some of the studied algorithms, the course will present techniques for a from-scratch map-reduce implementation, while for other algorithms packages like Spark ML will be used and end-to-end pipelines will be designed. In both cases algorithms will be analysed and optimised on real-life data sets, relying on a local Hadoop cluster as well as on a cluster on the Amazon Web Services cloud.
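To fix ideas on the map-reduce style used throughout, here is a minimal PySpark sketch computing full-gradient steps of least squares over a partitioned data set (assumes a local pyspark installation; all names, sizes, and the step size are illustrative assumptions):

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("mr-gradient").getOrCreate()
sc = spark.sparkContext

rng = np.random.default_rng(0)
data = [(rng.normal(size=3), float(i % 2)) for i in range(10_000)]  # (x, y) pairs
rdd = sc.parallelize(data, numSlices=8).cache()
n = rdd.count()

w = np.zeros(3)
for _ in range(20):
    # map: per-sample gradient of (x.w - y)^2 / 2 ; reduce: sum across partitions
    grad = rdd.map(lambda xy: (xy[0] @ w - xy[1]) * xy[0]).reduce(lambda a, b: a + b)
    w -= 0.1 * grad / n
print(w)
spark.stop()
```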
References:
- Mining of Massive Datasets www.mmds.org
- High Performance Spark - Best Practices for Scaling and Optimizing Apache Spark Holden Karau, Rachel Warren O'Reilly
Machine learning with kernel methods
Ects : 4
Total hours : 24
Overview :
- Reproducing kernel Hilbert spaces and the "kernel trick"
- Representer theorem
- Kernel PCA
- Kernel ridge regression (see the sketch after this list)
- Support vector machines
- Kernels on semigroups
- Kernels for text, graphs, etc.
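To make the kernel trick concrete, here is a minimal kernel ridge regression sketch with a Gaussian kernel (bandwidth and regularization values are assumptions, not course choices):

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=0.5):
    """k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)) for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def kernel_ridge(X, y, X_test, lam=1e-2):
    """alpha = (K + lam n I)^{-1} y, then f(x) = sum_i alpha_i k(x_i, x)."""
    n = len(X)
    alpha = np.linalg.solve(gaussian_kernel(X, X) + lam * n * np.eye(n), y)
    return gaussian_kernel(X_test, X) @ alpha

# Usage: fit a noisy sine curve and predict at a few test points
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
X_test = np.linspace(0, 2 * np.pi, 5)[:, None]
print(kernel_ridge(X, y, X_test).round(2))       # roughly sin at the test points
```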
Learning outcomes :
Present the theoretical foundations and applications of kernel methods in machine learning.
Mathematics of deep learning
Monte-Carlo search and games
Ects : 4
Lecturer :
Total hours : 24
Overview :
Introduction to Monte Carlo search for computer games. Monte Carlo search has revolutionized computer games. It combines well with deep learning to create systems that achieve superhuman performance in games such as Go, Chess, Hex or Shogi. It is also appropriate for addressing difficult optimization problems. In this course we will present different Monte Carlo search algorithms such as UCT, GRAVE, Nested Monte Carlo and Playout Policy Adaptation (see the selection rule sketched below). We will also see how to combine Monte Carlo search and deep learning. The validation of the course is a project involving a game or an optimization problem.
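For reference, a standard statement of the selection rule behind UCT mentioned above (the exploration constant c is a tuning choice, not a value prescribed by the course):

```latex
% At each tree node, UCT descends into the child a maximizing
a^{*} = \operatorname*{arg\,max}_{a}
        \left( \bar{X}_a + c \sqrt{\frac{\ln N}{n_a}} \right),
% where \bar{X}_a is the mean playout reward of child a, n_a its visit
% count, and N the visit count of the parent node.
```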
Bibliography-recommended reading
Intelligence Artificielle Une Approche Ludique, Tristan Cazenave, Editions Ellipses, 2011.
Non-convex inverse problems
Ects : 4
Lecturer :
Total hours : 24
Overview :
An inverse problem is a problem where the goal is to recover an unknown object (typically a vector with real coordinates, or a matrix), given a few "measurements" of this object, and possibly some information on its structure. In this course, we will discuss examples of such problems, motivated by applications as diverse as medical imaging, optics and machine learning. We will especially focus on the following questions: which algorithms can we use to numerically solve these problems? When and how can we prove that the solutions returned by the algorithms are correct? These questions are relatively well understood for convex inverse problems, but the course will be on non-convex inverse problems, whose study is much more recent and a very active research topic.
The course will be at the interface between real analysis, statistics and optimization. It will include theoretical and programming exercises.
Learning outcomes :
Understand what a non-convex inverse problem is; gain familiarity with the most classical algorithms for solving such problems, and with algorithms for general non-convex optimization.
NoSQL databases
Ects : 4
Lecturer :
- PAUL BONIOL
Total hours : 24
Point cloud and 3D modelling
Ects : 4
Lecturer :
- FRANCOIS GOULETTE
Total hours : 24
Topics in trustworthy machine learning
Ects : 4
Lecturer :
- OLIVIER CAPPE
Total hours : 24
PSL Week - 2 ECTS
Internship block (Bloc stage) - 10 ECTS
Internship (Stage)
Ects : 10
Overview :
A 4- to 6-month internship.
Academic Training Year 2025 - 2026 - subject to modification
Teaching Modalities
Courses are held at 16 bis rue de l'Estrapade, 75005 Paris.
Detailed assessment methods are communicated at the beginning of the year.
The IASD Master’s program begins with a core semester devoted to the fundamental disciplines of AI and data science, consisting of four common courses and three courses specific to each of the two tracks, Computer Science and Mathematics. At the end of the first semester, students choose six additional courses for the second semester, including the opportunity to follow an intensive PSL week, allowing them to open up to other disciplines or applications. The year continues with an internship in an academic or industrial research laboratory, ending in September with the writing of the master's thesis and its public defense.
The IASD Master’s degree consists of a common core semester on the fundamental disciplines of AI (from September to December; 7 mandatory courses, equivalent to 168 hours – 28 ECTS) followed by a semester of options (from January to March; 6 optional courses, equivalent to 140 hours – 22 ECTS) and an internship (from April to September; 10 ECTS) done in an academic research lab or an R&D company. The common core semester includes seven mandatory courses, while the second semester allows students to deepen their knowledge in six subjects chosen from twenty options. Students also have the opportunity to attend an intensive PSL week proposed by the DATA program at Université PSL. Optional refresher courses on probability and programming foundations are offered before the start of the common core courses in early September.
The Computer Science and Mathematics tracks share four common courses in the first semester, while three other courses are specific to each track. Courses specific to the other track may also be followed as options, within the limit of two options (at most) during the first semester.
Internships and Supervised Projects
The students must complete a 5-month internship, starting in April.
For students: how to find an internship and obtain the agreement?
To find an internship, you can consult the list of internship offers, or approach the laboratories or companies that interest you. Next, you will need to obtain pedagogical approval for your internship subject. To obtain it, upload your subject here, and specify in the comments that the subject is for you. (Please do not send your subject by e-mail.) Once you have obtained pedagogical validation, you can fill in the form in the ESUPstage application to obtain your internship agreement. For more information, see the presentation of internships.
For supervisors: how to offer an internship to students of the Master IASD - Computer Science track? If you are part of a research laboratory or an R&D department, you can propose an internship topic to students by clicking here. Of course, the internship must be related to one of the subjects covered in the Master's program. The internship will appear in the list below once it has been validated by the teaching staff.
Stipend (gratification): in France, internships lasting more than 2 months must be paid a stipend. Use the tool for calculating interns' stipends.
Research-driven Programs
Training courses are developed in close collaboration with Dauphine's world-class research programs, which ensure high standards and innovation.
Research is organized around six disciplines, all centered on the sciences of organizations and decision making.
Learn more about research at Dauphine