Effectively Solving Problems in Artificial Intelligence
Optimization and optimal control lie at the heart of advanced AI algorithms, whether in accelerating calibration and estimation tools or, further upstream, in training learning algorithms such as deep neural networks.
- Continuous optimization
- Combinatorial optimization
- Non-convex optimization
- Derivative-free optimization
- Stochastic optimal control
Applications
This field of mathematical research is central to the development of algorithms used in artificial intelligence, with applications in areas such as:
- Machine learning
- Deep learning
- Signal and image processing
- Computer vision
- Game theory
- Economics and finance
Our Research in the University's Labs
Our Research by Topic
Continuous Optimization
This involves building efficient algorithms to estimate parameters or calibrate neural networks. Topics include solving inverse problems, higher-order discretization methods for gradient flows in Wasserstein space, Langevin systems with mean-field interaction, and control.
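The Langevin systems mentioned above can be illustrated, in their simplest non-interacting form, by the overdamped Langevin iteration: gradient descent plus Gaussian noise. The following is a minimal NumPy sketch, not one of the lab's methods; the function names, step sizes, and toy objective are purely illustrative.

```python
import numpy as np

def langevin_descent(grad, x0, step=1e-2, temp=1e-3, n_iters=5000, seed=0):
    """Overdamped Langevin iteration:

        x <- x - step * grad(x) + sqrt(2 * step * temp) * xi,  xi ~ N(0, I).

    With temp = 0 this is plain gradient descent; the Gaussian noise lets
    the iterates escape strict local minima. (In the mean-field setting,
    grad itself would depend on the law of the iterates.)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad(x) + np.sqrt(2.0 * step * temp) * noise
    return x

# Toy objective f(x) = ||x - 1||^2 / 2, whose gradient is x - 1:
# the iterates concentrate around the minimizer (1, 1, 1).
x_star = langevin_descent(lambda x: x - 1.0, np.zeros(3))
```

At low temperature the iterates fluctuate around the minimizer, with a spread governed by `temp`.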
Among the specific topics studied at CEREMADE, through its INRIA team Mokaplan, is optimal transport, which is strongly tied to minimization problems in Wasserstein space.
- Calibration
- Inverse problems
- Gradient flows
- Optimal transport
- Optimization in Wasserstein space
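Entropy-regularized optimal transport, central to the iterative Bregman projection work cited in the publications, is typically computed with the classical Sinkhorn algorithm. Here is a minimal sketch on a toy one-dimensional example; all names and parameter values are illustrative, not taken from the papers.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iters=500):
    """Entropic optimal transport via Sinkhorn's algorithm.

    The entropy-regularized plan has the form diag(u) K diag(v) with
    K = exp(-C/eps); the iterations alternately rescale u and v so that
    the plan's marginals match mu and nu (iterative Bregman projections
    onto the two marginal constraints).
    """
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # approximate optimal plan

# Toy example: two uniform distributions on 5 grid points of [0, 1],
# with squared-distance cost.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)
P = sinkhorn(mu, nu, C)
```

For identical marginals and this cost the optimal plan is close to diagonal; the parameter `eps` trades off accuracy against the blur of the plan (and the speed of convergence).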
Our Researchers
Irène Waldspurger (PRAIRIE chair), Gabriel Turinici (former member of the Institut Universitaire de France), Guillaume Legendre, Guillaume Carlier, Jean-David Benamou, Laurent Cohen (PRAIRIE chair), Zhenjie Ren, Clément W. Royer
Published Research
- Hu K., Kazeykina A., Ren Z. Mean-field Langevin System, Optimal Control and Deep Neural Networks. Paris, Cahier de recherche CEREMADE, Université Paris-Dauphine, 25 p. (2019)
- Waldspurger I., d’Aspremont A., Mallat S. Phase recovery, Maxcut and complex semidefinite programming. Mathematical Programming, vol. 149, n°1-2, p. 47-81. (2015)
- Legendre G., Turinici G. Second-order in time schemes for gradient flows in Wasserstein and geodesic metric spaces. Comptes rendus. Mathématique, vol. 355, n°3, p. 345-353. (2017)
- Carlier G., Duval V., Peyré G., Schmitzer B. Convergence of Entropic Schemes for Optimal Transport and Gradient Flows. SIAM Journal on Mathematical Analysis, vol. 49, n°2, p. 1385-1418. (2017)
- Benamou J-D., Carlier G., Cuturi M., Nenna L., Peyré G. Iterative Bregman Projections for Regularized Transportation Problems. SIAM Journal on Scientific Computing, vol. 37, n°2, p. A1111–A1138. (2015)
- Turinici G. Metric gradient flows with state dependent functionals: the Nash-MFG equilibrium flows and their numerical schemes. Nonlinear Analysis, vol. 165, p. 163-181. (2017)
Combinatorial Optimization
Many AI problems, whether in machine learning, social choice, or collective decision-making, arise as constrained combinatorial optimization problems.
Since its creation, LAMSADE has helped structure the French and European communities working on combinatorial optimization, operations research, and decision support (it is the only European laboratory with two members who have headed the EURO learned society).
Work on these topics falls under the MATHIS (Mathematical Programming and Discrete Structures) projects, within the focus areas of "Combinatorial and Algorithmic Optimization" and "Multicriteria Combinatorial Optimization", at the interface of LAMSADE's Decision Support and Combinatorial and Algorithmic Optimization focus areas.
- Polyhedral approach
- Network design
- Graph theory
- Probabilistic method
- Robustness
- Positive semidefinite programming
- (Box-)TDI system
- Large-scale problems
- Column generation
- Cutting plane algorithms
- Efficient set
- Pareto
- Approximation
- Enumeration
- Compromise
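As a self-contained illustration of constrained combinatorial optimization (a textbook example, not one of the lab's methods), here is the 0/1 knapsack problem solved exactly by dynamic programming:

```python
def knapsack(values, weights, capacity):
    """Exact 0/1 knapsack via dynamic programming over capacities.

    best[c] holds the maximum total value achievable with capacity c.
    Iterating capacities in reverse ensures each item is used at most once.
    """
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Three items (value, weight) and capacity 50: the optimum picks
# the last two items, for a total value of 220.
best_value = knapsack([60, 100, 120], [10, 20, 30], 50)
```

Real instances studied at the lab (network design, multicriteria problems) are far larger and call for the polyhedral, cutting-plane, and column-generation machinery listed above rather than plain enumeration.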
Our Researchers
Denis Cornaz, Virginie Gabrel, Ararat Harutyunyan, Ridha Mahjoub, Cécile Murat, Lucie Galand, Daniel Vanderpooten, and Hassene Aissi
Non-Convex Optimization
Many situations in learning are naturally formulated as non-convex optimization problems; this is the case for deep neural networks and for problems in matrix variables (completion, factorization, dictionary learning, etc.).
In such settings it can be especially hard to guarantee a global solution, and it is crucial to ensure that methods do not get stuck near saddle points.
The aim is therefore to develop algorithms with strong second-order theoretical guarantees, expressed in terms of worst-case complexity, that also scale to very high-dimensional problems (in the number of variables or the size of the dataset).
- Non-convex optimization
- Second-order methods
- Worst-case complexity
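The role of second-order information in escaping saddle points can be sketched as follows. This is a simplified illustration of the underlying idea, not the Newton-CG algorithm from the publications below; the function names and step sizes are ours.

```python
import numpy as np

def second_order_step(grad, hess, x, tol=1e-6, step=0.1):
    """One step of a simplified second-order method.

    Far from stationarity, take a gradient step. At a near-stationary
    point, inspect the Hessian's smallest eigenvalue: if it is negative,
    the point is a saddle, and moving along the corresponding eigenvector
    (a negative-curvature direction) decreases the objective.
    """
    g = grad(x)
    if np.linalg.norm(g) > tol:
        return x - step * g
    eigvals, eigvecs = np.linalg.eigh(hess(x))   # ascending eigenvalues
    if eigvals[0] < -tol:
        return x + step * eigvecs[:, 0]          # escape along negative curvature
    return x                                     # approximate second-order point

# f(x, y) = x^2 - y^2 has a saddle at the origin: zero gradient,
# Hessian eigenvalues {2, -2}. The step moves away from the saddle.
grad = lambda z: np.array([2.0 * z[0], -2.0 * z[1]])
hess = lambda z: np.diag([2.0, -2.0])
x_new = second_order_step(grad, hess, np.zeros(2))
```

First-order methods can stall at such a point since the gradient vanishes there; the negative eigenvalue is precisely what a second-order complexity guarantee certifies has been dealt with.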
Published Research
- C. W. Royer, M. O’Neill and S. J. Wright. A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization. Mathematical Programming, 180(1), 451-488 (2019)
- C. W. Royer and S. J. Wright. Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization. SIAM Journal on Optimization, 28(2): 1448-1477 (2018)
- E. Bergou, Y. Diouane, V. Kungurtsev and C. W. Royer. A subsampling line-search method with second-order results. Technical report (2018)
- S. Gratton, C. W. Royer and L. N. Vicente. A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds. Mathematical Programming, 179(1-2), 195-222 (2018)
Derivative-Free Optimization
When calibrating the hyperparameters of a complex learning model, or determining the best perturbation of a dataset to skew learning, one faces optimization problems in which the objective function is very costly to evaluate, often noisy, and is the only information available for optimization.
The associated techniques, known as derivative-free optimization, rely either on sampling-based exploration or on building models that mimic the function's behavior. Our work focuses on probabilistic variants of these approaches, which are cheaper than deterministic methods while still offering theoretical guarantees (almost-sure convergence and high-probability complexity bounds).
This value of randomness, both theoretical and practical, makes it possible to consider applying these strategies in high dimension.
- Derivative-free optimization
- Probabilistic properties
- Costly evaluations
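The probabilistic direct-search idea can be sketched as follows: poll a single random direction per iteration and adapt the step size, using only function values. This is an illustrative simplification (names and constants are ours), not the exact method analyzed in the references below.

```python
import numpy as np

def probabilistic_direct_search(f, x0, step=1.0, n_iters=500, seed=0):
    """Derivative-free direct search with one random polling direction.

    Each iteration draws a random unit direction d and evaluates f at
    x + step*d and x - step*d. A trial point is accepted only if it
    satisfies a sufficient-decrease condition; on failure the step is
    halved, on success it is doubled. Random polling needs far fewer
    evaluations per iteration than a deterministic positive spanning
    set, at the price of guarantees holding only with high probability.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iters):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)
        improved = False
        for trial in (x + step * d, x - step * d):
            f_trial = f(trial)
            if f_trial < fx - 1e-8 * step ** 2:  # sufficient decrease
                x, fx, improved = trial, f_trial, True
                break
        step = 2.0 * step if improved else 0.5 * step
    return x, fx

# Toy smooth objective; only function values are used, no gradients.
x_best, f_best = probabilistic_direct_search(
    lambda z: np.sum((z - 2.0) ** 2), np.zeros(4))
```

In practice the objective would be an expensive, noisy black box (e.g. a full training run for a given hyperparameter setting), which is what makes the low per-iteration evaluation count valuable.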
Published Research
- E. Bergou, Y. Diouane, V. Kungurtsev and C. W. Royer. A stochastic Levenberg-Marquardt method using random models with application to data assimilation. Technical report (2018)
- S. Gratton, C. W. Royer, L. N. Vicente and Z. Zhang. Direct search based on probabilistic feasible descent for bound and linearly constrained problems. Computational Optimization and Applications, 72(3): 525-559 (2019)
- S. Gratton, C. W. Royer, L. N. Vicente and Z. Zhang. Complexity and global rates of trust-region methods based on probabilistic models. IMA Journal of Numerical Analysis, 38(3): 1579-1597 (2018)
- S. Gratton, C. W. Royer and L. N. Vicente. A second-order globally convergent direct-search method and its worst-case complexity. Optimization: A Journal of Mathematical Programming and Operations Research, 65(6): 1105-1128 (2016)
- S. Gratton, C. W. Royer, L. N. Vicente and Z. Zhang. Direct search based on probabilistic descent. SIAM Journal on Optimization, 25(3): 1515-1541 (2015)
Stochastic Optimal Control
Optimal control consists of maximizing a criterion linked to the trajectory of a controlled system, often in a random environment and sometimes in the presence of an opposing player.
Examples include devising a strategy for trading robots under uncertainty about market parameters, or defining an optimal bidding strategy when facing an advertising-space provider and multiple competing bidders.
It may also mean controlling a system so as to keep it within a given set, e.g. an autonomous vehicle on a road; one then speaks of a stochastic target problem.
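In discrete time and on a finite state space, the structure of such control problems reduces to dynamic programming. The following toy sketch runs backward induction on a controlled Markov chain, an elementary discrete analogue of the continuous-time problems studied here; all names and numbers are illustrative.

```python
import numpy as np

def backward_induction(P, r, T):
    """Finite-horizon stochastic control by dynamic programming.

    P[a] is the transition matrix of the chain under action a and r[a]
    the immediate reward per state. Backward induction computes
    V_t = max_a ( r[a] + P[a] @ V_{t+1} ), starting from V_T = 0,
    and records the maximizing action (the optimal feedback policy).
    """
    n_states = r[0].shape[0]
    V = np.zeros(n_states)
    policy = []
    for _ in range(T):
        Q = np.stack([r[a] + P[a] @ V for a in range(len(P))])  # (action, state)
        policy.append(Q.argmax(axis=0))
        V = Q.max(axis=0)
    policy.reverse()           # policy[t] maps state -> action at time t
    return V, policy

# Toy 2-state chain: rewards are earned only in state 0; action 1 pays
# more immediately but makes leaving state 0 much more likely.
P = [np.array([[0.9, 0.1], [0.5, 0.5]]),
     np.array([[0.2, 0.8], [0.5, 0.5]])]
r = [np.array([1.0, 0.0]), np.array([1.5, 0.0])]
V, policy = backward_induction(P, r, T=5)
```

Note how the optimal policy is time-dependent: near the horizon the greedy action is best, while earlier on it pays to protect future rewards. Continuous-time, continuous-state versions of this recursion lead to the Hamilton-Jacobi-Bellman equations for which viscosity solutions were developed.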
This is historically a key area of study at CEREMADE, building in particular on the work of Pierre-Louis Lions (Chair at the Collège de France, 1994 Fields medalist), who originated both the theory of viscosity solutions, which is well suited to control problems, and that of mean-field games, in which interactions among a large number of players can be taken into account.
Dauphine is also where the theory of stochastic targets was born, under the direction of Nizar Touzi, then an associate professor at CEREMADE.
- Stochastic optimal control
- Viscosity solutions
- Optimization in uncertain environments
- Robustness
- Learning
- Stochastic targets
- Stochastic invariance
- Mean-field games
Our Researchers
Pierre-Louis Lions (Fields medalist), Jean-Michel Lasry, Pierre Cardaliaguet, Bruno Bouchard, Daniela Tonon, Julien Claisse, and Imen Ben Tahar
Published Research
- Briani A., Cardaliaguet P. Stable solutions in potential mean field game systems, NoDEA. Nonlinear Differential Equations and Applications, vol. 25, n°1. (2018)
- Abi Jaber E., Bouchard B., Illand C. Stochastic invariance of closed sets with non-Lipschitz coefficients. Stochastic Processes and their Applications, vol. 129, n°5, p. 1726-1748. (2019)
- Graber P., Mészáros A., Silva F., Tonon D. The planning problem in mean field games as regularized mass transport. Calculus of Variations and Partial Differential Equations, vol. 58, n°3. (2019)
- Baradel N., Bouchard B., Dang N. Optimal Control Under Uncertainty and Bayesian Parameters Adjustments. SIAM Journal on Control and Optimization, vol. 56, n°2, p. 1038-1057. (2018)
- Claisse J. Optimal control of branching diffusion processes: A finite horizon problem. The Annals of Applied Probability, vol. 28, n°1, p. 1-34. (2018)
- Bouchard B., Djehiche B., Kharroubi I. Quenched mass transport of particles towards a target. JOTA, to appear (2020)
- Fernandez Tapia J., Guéant O., Lasry J-M. Optimal Real-Time Bidding Strategies. Applied Mathematics Research Express, vol. March 2017, n°1, 1, p. 142–183. (2017)