Developing safe and explainable autonomous agents: from simulation to the real world
Federico Bianchi; Alberto Castellini; Alessandro Farinelli; Luca Marzari; Daniele Meli; Francesco Trotti; Celeste Veronese
2024-01-01
Abstract
Responsible artificial intelligence is the next research challenge for fostering the deployment of autonomous systems in the real world. In this paper, we focus on the safe and explainable design and deployment of autonomous agents, e.g., robots. In particular, we present our recent contributions to: i) safe and explainable planning, leveraging safe Reinforcement Learning (RL) and neurosymbolic planning; ii) effective deployment of RL policies via model-based control; iii) formal verification of the safety of deep RL policies; and iv) explainable anomaly detection in complex real systems.
Files in this record:

File | Type | License | Size | Format
---|---|---|---|---
547.pdf (open access) | Publisher's version | Creative Commons | 3.16 MB | Adobe PDF