At the Forefront of Modern AI
Machine learning, a field at the intersection of computer science and applied mathematics, aims to design automated decision-making models that learn from data or experience, improve over time, and serve prediction or explanation.
In the age of big data, information is ubiquitous and no longer on a human scale. This makes the development and use of automated learning methods essential.
Deep learning
Optimization
High-dimensional statistics
Theoretical guarantees
Efficient algorithms
Confidentiality
Robustness against adversarial attacks
Fairness
Online learning
Bayesian inference
A Wide Range of Application Fields, Covered at Dauphine - PSL
Health care (medical imaging, EEG signals for brain-computer interfacing, etc.)
Robotics (including assistive robotics)
Image processing and computer vision
Natural language processing
Art and humanities
Games
And more
Our Research
Our studies on machine learning cover the full spectrum of this research field, from its theoretical and algorithmic foundations to the most advanced applications.
Conducted in our LAMSADE and CEREMADE laboratories, this research covers the following topics:
- Deep learning theory
- Optimization for machine learning
- Bayesian inference
- Parsimonious models and high-dimensional problems
- Compressed sensing
- Monte Carlo tree search
- Learning for games
- Online learning and game theory
- Reinforcement learning
- Large-scale unsupervised learning
- Models preserving confidentiality (e.g. differential privacy)
- Fairness in machine learning
- Robustness against adversarial attacks
- Explainable and interpretable AI; causal inference and parsimonious models
- Energy-efficient deep learning
- Online learning and multi-agent learning
Our Researchers
Alexandre Allauzen, Jamal Atif, Tristan Cazenave, Yann Chevaleyre, Jérôme Lang, Rida Laraki, Florian Yger, Benjamin Negrevergne, Clément Royer, Fabrice Rossi, Christian Robert, Julien Stoehr, Marc Hoffmann, Robin Ryder, Vincent Rivoirard, Laurent Cohen, Irène Waldspurger, Emmanuel Bacry
Published Research
- Pinot R., Meunier L., Araujo A., Kashima H., Yger F., Gouy-Pailler C., Atif J. Theoretical evidence for adversarial robustness through randomization. In Advances in Neural Information Processing Systems (pp. 11838-11848) (2019)
- Bucci MA., Semeraro O., Allauzen A., Wisniewski G., Cordier L., Mathelin L. Control of chaotic systems by deep reinforcement learning. Proceedings of the Royal Society A 475 (2231), 20190351 (2019)
- Bacry E., Gaïffas S., Kabeshova A., Yu Y. ZiMM: a deep learning model for long term adverse events with non-clinical claims data. arXiv preprint arXiv:1911.05346 (2019)
- Morel M., Bacry E., Gaïffas S., Guilloux A., Leroy F. ConvSCCS: convolutional self-controlled case-series model for lagged adverse event detection. Biostatistics (2019)
- Raynal L., Marin JM., Pudlo P., Ribatet M., Robert CP., Estoup A. ABC random forests for Bayesian parameter inference. Bioinformatics 35(10): 1720-1728 (2019)
- Frazier D., Robert C., Rousseau J. Model Misspecification in ABC: Consequences and Diagnostics. Journal of the Royal Statistical Society, Series B, Statistical Methodology (2019)
- Groscot R., Cohen LD. Shape Part Transfer via Semantic Latent Space Factorization. 4th Conference on Geometric Science of Information (GSI2019), Aug 2019, Toulouse, France
- Yamane I., Yger F., Atif J., Sugiyama M. Uplift modeling from separate labels. In Advances in Neural Information Processing Systems (pp. 9927-9937) (2018)
- Cazenave T. Residual Networks for Computer Go. IEEE Trans. Games 10(1): 107-110 (2018)
- Labeau M., Allauzen A. Learning with Noise-Contrastive Estimation: Easing training by learning to scale. COLING 2018: 3090-3101