Decentralized Bayesian reinforcement learning for online agent collaboration

FARINELLI, Alessandro;
2012-01-01

Abstract

Solving complex but structured problems in a decentralized manner via multiagent collaboration has received much attention in recent years. This is natural, as on one hand, multiagent systems usually possess a structure that determines the allowable interactions among the agents; and on the other hand, the single most pressing need in a cooperative multiagent system is to coordinate the local policies of autonomous agents with restricted capabilities to serve a system-wide goal. The presence of uncertainty makes this even more challenging, as the agents face the additional need to learn the unknown environment parameters while forming (and following) local policies in an online fashion. In this paper, we provide the first Bayesian reinforcement learning (BRL) approach for distributed coordination and learning in a cooperative multiagent system by devising two solutions to this type of problem. More specifically, we show how the Value of Perfect Information (VPI) can be used to perform efficient decentralised exploration in both model-based and model-free BRL, and in the latter case, provide a closed-form solution for VPI, correcting a decade-old result by Dearden, Friedman and Russell. To evaluate these solutions, we present experimental results comparing their relative merits, and demonstrate empirically that both solutions outperform an existing multiagent learning method, representative of the state-of-the-art.
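The key technical device the abstract names is VPI-guided exploration in the style of Dearden, Friedman and Russell's Bayesian Q-learning. As a rough illustration only, the sketch below estimates VPI per action by Monte Carlo under an assumed independent Gaussian posterior over Q-values; the paper itself derives a corrected closed-form expression (under Dearden et al.'s normal-gamma posteriors, whose Q marginals are Student-t), which is not reproduced on this page. The names `vpi`, `sample`, and `draw` are illustrative, not from the paper.

```python
import numpy as np

def vpi(mu, sample, n=100_000):
    """Monte Carlo estimate of the Value of Perfect Information (VPI)
    of each action, in the style of Dearden, Friedman and Russell's
    Bayesian Q-learning (illustrative sketch, not the paper's
    corrected closed form).

    mu     : posterior mean Q-value per action.
    sample : sample(a, size) -> draws from the posterior of Q(s, a).
    """
    mu = np.asarray(mu, dtype=float)
    order = np.argsort(mu)[::-1]
    a1, a2 = order[0], order[1]          # best / second-best actions by mean
    out = np.empty(len(mu))
    for a in range(len(mu)):
        q = sample(a, n)                 # posterior draws of Q(s, a)
        if a == a1:
            # Learning Q(s, a1) exactly only helps if it turns out to be
            # below the second-best action's expected value.
            gain = np.maximum(mu[a2] - q, 0.0)
        else:
            # Learning Q(s, a) only helps if it turns out to exceed the
            # current best action's expected value.
            gain = np.maximum(q - mu[a1], 0.0)
        out[a] = gain.mean()
    return out

# Hypothetical usage: independent Gaussian posteriors over Q-values
# (an assumption made to keep the sketch self-contained).
rng = np.random.default_rng(0)
mu = np.array([1.0, 0.8, 0.2])
sd = np.array([0.5, 0.9, 0.3])
draw = lambda a, size: rng.normal(mu[a], sd[a], size)

q_vpi = vpi(mu, draw)
# VPI exploration rule: act greedily on expected value + value of information.
chosen = int(np.argmax(mu + q_vpi))
print("VPI per action:", q_vpi, "-> act:", chosen)
```

Under this rule an agent acts greedily with respect to mu + VPI, trading exploitation against the expected benefit of resolving its uncertainty about each action's true value.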
Publication year: 2012
ISBN: 0981738125
Keywords: Multi-Agent Learning; Bayesian Techniques; Uncertainty
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/470150
Citations
  • Scopus: 10