First pillar: My work is about guiding Bayesian optimization with information sources of limited accuracy. These low-fidelity sources include first-principles models that capture the overall behavior of a complex target system. The objective is to investigate how much such sources can improve the data efficiency of iterative learning controllers applied to real-world systems, for example complex manufacturing setups where acquiring real measurement data is costly. My methodology leverages low-fidelity digital twins as additional data sources to reduce the cost of tuning the system.
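A minimal sketch of this idea, assuming a toy one-dimensional tuning problem: a cheap low-fidelity model guides the optimizer, while a Gaussian process learns only the residual between real measurements and that model. The objective functions, parameter range, and residual formulation below are illustrative placeholders, not the actual method or setup.

```python
# Sketch: low-fidelity model + residual GP guiding Bayesian optimization.
# All functions and ranges are hypothetical stand-ins for a real tuning task.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def f_low(x):
    # Cheap first-principles model: captures the overall trend only.
    return np.sin(3.0 * x)


def f_high(x):
    # Expensive "real system" measurement (here: shifted, biased, noisy).
    return np.sin(3.0 * x + 0.3) + 0.2 * x + 0.05 * np.random.randn(*np.shape(x))


# The GP models the *residual* between measurements and the cheap model,
# so it only has to learn the mismatch, not the full system response.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-3, normalize_y=True)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(3, 1))            # a few expensive evaluations
y = f_high(X).ravel()
grid = np.linspace(0.0, 2.0, 400).reshape(-1, 1)  # candidate parameters

for _ in range(10):
    gp.fit(X, y - f_low(X).ravel())               # learn the mismatch
    mu_res, std = gp.predict(grid, return_std=True)
    mu = f_low(grid).ravel() + mu_res             # fused prediction

    # Expected improvement over the best measurement so far (minimization).
    best = y.min()
    z = (best - mu) / np.maximum(std, 1e-9)
    ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)

    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, f_high(x_next).ravel())

print("best parameter:", X[np.argmin(y)].item(), "best value:", y.min())
```

Because the GP only captures the mismatch, a handful of expensive measurements can already correct the bias of the digital twin, which is the data-efficiency argument in a nutshell.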
Second pillar: I train a model-free reinforcement learning agent to learn the mismatch between a simulation and the real dynamics of a robot arm. We built a physical robot cell in which a multi-degree-of-freedom arm tracks a moving object using online camera feedback. The central question is: can we model the uncertainty in simulation well enough to transfer the agent successfully to the real robot, thereby compensating for simulation inaccuracies?
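A minimal sketch of one common way to frame this, assuming a residual dynamics model trained on logged transitions: a small network predicts how the real robot's next state deviates from the simulator's prediction, and the corrected simulator is then used to train the policy. The state and action dimensions, the network, and the train_step helper are hypothetical placeholders, not the actual setup.

```python
# Sketch: learning the sim-to-real mismatch as a residual dynamics model.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 6   # e.g. joint positions/velocities, joint torques

residual_net = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, STATE_DIM),
)
opt = torch.optim.Adam(residual_net.parameters(), lr=1e-3)


def train_step(state, action, sim_next_state, real_next_state):
    """One gradient step on the mismatch between simulator and reality."""
    target = real_next_state - sim_next_state        # the residual to learn
    pred = residual_net(torch.cat([state, action], dim=-1))
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Example call with random placeholder data standing in for logged transitions:
s, a = torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM)
sim_next, real_next = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
print(train_step(s, a, sim_next, real_next))
```

During policy training, the simulator step plus the learned residual stands in for the real robot, so the model-free agent can be trained almost entirely in simulation and transferred with far fewer real-world rollouts.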
This website is under construction.
In the meantime, please refer to my LinkedIn profile.
Thank you for your patience.