Here is an incomplete list of current hot research questions in SuReLI. It does not necessarily cover the whole span of activities within the group, but it reflects some of our interests.
The software section of this site links to open-source code developed during projects stemming from these research interests. We generally open-source our code when we believe it has value to the community. Consequently, if some research area seems to lack open-source code, something is probably cooking internally.
- Dependable RL. Quickly adapting to new tasks (life-long or continuing RL) or to non-stationary environments, re-adapting to former tasks without excessive performance loss, and transferring knowledge or representations from one task to another are three desirable properties for a learning agent. We investigate these questions, with a specific (but not exclusive) interest in how they affect Deep RL.
- Convergence and learning efficiency in Deep RL. We investigate how to design more efficient learning procedures for (Deep) RL agents. We try to understand the real impact of recent contributions in Deep RL: whether they contribute equally across environments, why they sometimes underperform, and how well grounded they are in theory. We also study how convergence can be improved for value-based, policy-gradient, and evolutionary computation approaches.
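To make the value-based setting concrete, here is a minimal sketch of tabular Q-learning on a hypothetical four-state chain MDP (a toy illustration, not a SuReLI benchmark or codebase): the convergence of such temporal-difference updates is the tabular ancestor of the Deep RL convergence questions mentioned above.

```python
import random

# Toy chain MDP: states 0..3, actions 0 (left) / 1 (right),
# reward 1 for reaching the terminal goal state 3.
N_STATES, GOAL = 4, 3
GAMMA, ALPHA = 0.9, 0.1

def step(state, action):
    """Deterministic transition along the chain."""
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_learning(episodes=2000, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else q[s].index(max(q[s]))
            s2, r, done = step(s, a)
            # standard temporal-difference (Q-learning) update
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            q[s][a] += ALPHA * (target - q[s][a])
            s = s2
    return q

q = q_learning()
```

After training, the greedy policy moves right in every non-terminal state and the values approach the discounted optimum (q[0][1] near 0.81, q[1][1] near 0.9, q[2][1] near 1). In Deep RL, the table is replaced by a neural network, which is precisely where the convergence guarantees of this tabular scheme break down.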
- Learning for recurrent optimization. Many optimization problems need to be solved repeatedly, with variations in their input data. For instance, classical OR problems such as Unit Commitment or Facility Location are often embedded in real-life situations where quick re-optimization is needed to respond to changes in external conditions. Similarly, some continuous optimization processes (including the optimization of deep neural networks) can benefit from good guidance in setting their parameters. We study how such guidance can be learned from observing past resolutions and transferred to the current problem at hand.
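The simplest form of such guidance is warm-starting: reusing the previous solution as the initial guess for the next, slightly perturbed, problem. The sketch below illustrates this on a hypothetical one-dimensional quadratic family (the problem, learning rate, and drift values are all illustrative assumptions, not a SuReLI method).

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-6, max_iters=10000):
    """Minimize a 1-D function given its gradient; return (solution, iterations)."""
    x = x0
    for i in range(max_iters):
        g = grad(x)
        if abs(g) < tol:
            return x, i
        x -= lr * g
    return x, max_iters

# Family of recurrent problems: minimize (x - c)^2 for a slowly drifting target c.
targets = [5.0, 5.1, 5.2]
cold_iters = warm_iters = 0
x_warm = 0.0
for c in targets:
    grad = lambda x, c=c: 2.0 * (x - c)
    _, n_cold = gradient_descent(grad, 0.0)          # cold start: always from scratch
    x_warm, n_warm = gradient_descent(grad, x_warm)  # warm start: reuse last solution
    cold_iters += n_cold
    warm_iters += n_warm
```

Because successive problems differ only slightly, the warm-started runs spend far fewer total iterations than the cold-started ones; learned guidance generalizes this idea beyond simply copying the previous solution.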
- Evolutionary RL. We take an evolutionary perspective on Reinforcement Learning and, in a broader sense, neural network optimization. Current interests cover neural architecture search, life-long learning and neuromodulation.
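As a minimal illustration of the evolutionary perspective, here is a (1+1) evolution strategy with a crude 1/5th-rule-style step-size adaptation, optimizing a stand-in fitness function over a parameter vector (a sketch under illustrative assumptions: the sphere fitness, mutation constants, and budget are hypothetical, and real Evolutionary RL would evaluate fitness by running a policy network in an environment).

```python
import random

def es_minimize(fitness, dim=5, sigma=0.5, generations=500, seed=1):
    """(1+1)-ES: one parent, one Gaussian-mutated child per generation."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        # Gaussian mutation of every parameter (the "weights")
        child = [w + rng.gauss(0.0, sigma) for w in parent]
        f = fitness(child)
        if f <= best:          # elitist selection: keep the child if no worse
            parent, best = child, f
            sigma *= 1.1       # success: widen the search
        else:
            sigma *= 0.95      # failure: narrow it (1/5th-rule flavour)
    return parent, best

sphere = lambda w: sum(x * x for x in w)  # stand-in fitness for a weight vector
w_best, f_best = es_minimize(sphere)
```

The same loop structure scales, with population-based variants and structured mutations, to the neural architecture search and neuromodulation questions above.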
- Applications of RL. We investigate the transfer of state-of-the-art RL methods to real-life applications. Among them: humanoid robotics, autonomous vehicle control (boats and UAVs) and mission planning, air traffic management, predictive maintenance, satellite systems planning, fluid flow control.