TY - INPR
A1 - Phan, Thomy
A1 - Sommer, Felix
A1 - Ritz, Fabian
A1 - Altmann, Philipp
A1 - Nüßlein, Jonas
A1 - Kölle, Michael
A1 - Belzner, Lenz
A1 - Linnhoff-Popien, Claudia
T1 - Emergent Cooperation from Mutual Acknowledgment Exchange in Multi-Agent Reinforcement Learning
T2 - Research Square
N2 - Peer incentivization (PI) is a recent approach in which all agents learn to reward or penalize each other in a distributed fashion, which often leads to emergent cooperation. Current PI mechanisms implicitly assume a flawless communication channel for exchanging rewards. These rewards are integrated directly into the learning process without any opportunity to respond with feedback. Furthermore, most PI approaches rely on global information, which limits scalability and applicability to real-world scenarios where only local information is accessible. In this paper, we propose Mutual Acknowledgment Token Exchange (MATE), a PI approach defined by a two-phase communication protocol in which agents mutually exchange acknowledgment tokens to shape individual rewards. Each agent evaluates the monotonic improvement of its individual situation in order to accept or reject acknowledgment requests from other agents. MATE is completely decentralized and only requires local communication and information. We evaluate MATE in three social dilemma domains. Our results show that MATE is able to achieve and maintain significantly higher levels of cooperation than previous PI approaches. In addition, we evaluate the robustness of MATE in more realistic scenarios, where agents can defect from the protocol and communication failures can occur. We also evaluate the sensitivity of MATE with respect to the choice of token values.
UR - https://doi.org/10.21203/rs.3.rs-2315844/v1
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-45471
Y1 - 2022
SN - 2693-5015
PB - Research Square
CY - Durham
ER -