TY - CHAP
A1 - Phan, Thomy
A1 - Ritz, Fabian
A1 - Belzner, Lenz
A1 - Altmann, Philipp
A1 - Gabor, Thomas
A1 - Linnhoff-Popien, Claudia
T1 - VAST: Value Function Factorization with Variable Agent Sub-Teams
T2 - Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
KW - Multi-Agent Learning
KW - Reinforcement Learning
KW - Value Function Factorization
Y1 - 2021
UR - https://proceedings.neurips.cc/paper/2021/hash/c97e7a5153badb6576d8939469f58336-Abstract.html
PB - Neural Information Processing Systems Foundation, Inc. (NIPS)
ER -
TY - CHAP
A1 - Phan, Thomy
A1 - Sommer, Felix
A1 - Altmann, Philipp
A1 - Ritz, Fabian
A1 - Belzner, Lenz
A1 - Linnhoff-Popien, Claudia
T1 - Emergent Cooperation from Mutual Acknowledgment Exchange
T2 - AAMAS '22: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems
KW - multi-agent learning
KW - reinforcement learning
KW - mutual acknowledgments
KW - peer incentivization
KW - emergent cooperation
Y1 - 2022
UR - https://dl.acm.org/doi/10.5555/3535850.3535967
SN - 978-1-4503-9213-6
SP - 1047
EP - 1055
PB - International Foundation for Autonomous Agents and Multiagent Systems
CY - Richland
ER -
TY - JOUR
A1 - Phan, Thomy
A1 - Sommer, Felix
A1 - Ritz, Fabian
A1 - Altmann, Philipp
A1 - Nüßlein, Jonas
A1 - Kölle, Michael
A1 - Belzner, Lenz
A1 - Linnhoff-Popien, Claudia
T1 - Emergent cooperation from mutual acknowledgment exchange in multi-agent reinforcement learning
JF - Autonomous Agents and Multi-Agent Systems
N2 - Peer incentivization (PI) is a recent approach where all agents learn to reward or penalize each other in a distributed fashion, which often leads to emergent cooperation. Current PI mechanisms implicitly assume a flawless communication channel in order to exchange rewards. These rewards are directly incorporated into the learning process without any chance to respond with feedback. Furthermore, most PI approaches rely on global information, which limits scalability and applicability to real-world scenarios where only local information is accessible. In this paper, we propose Mutual Acknowledgment Token Exchange (MATE), a PI approach defined by a two-phase communication protocol to exchange acknowledgment tokens as incentives to shape individual rewards mutually. All agents condition their token transmissions on the locally estimated quality of their own situations based on environmental rewards and received tokens. MATE is completely decentralized and only requires local communication and information. We evaluate MATE in three social dilemma domains. Our results show that MATE is able to achieve and maintain significantly higher levels of cooperation than previous PI approaches. In addition, we evaluate the robustness of MATE in more realistic scenarios, where agents can deviate from the protocol and communication failures can occur. We also evaluate the sensitivity of MATE w.r.t. the choice of token values.
Y1 - 2024
UR - https://doi.org/10.1007/s10458-024-09666-5
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-49287
SN - 1573-7454
SN - 1387-2532
VL - 38
IS - 2
PB - Springer
CY - Dordrecht
ER -
TY - INPR
A1 - Phan, Thomy
A1 - Sommer, Felix
A1 - Ritz, Fabian
A1 - Altmann, Philipp
A1 - Nüßlein, Jonas
A1 - Kölle, Michael
A1 - Belzner, Lenz
A1 - Linnhoff-Popien, Claudia
T1 - Emergent Cooperation from Mutual Acknowledgment Exchange in Multi-Agent Reinforcement Learning
T2 - Research Square
N2 - Peer incentivization (PI) is a recent approach where all agents learn to reward or penalize each other in a distributed fashion, which often leads to emergent cooperation. Current PI mechanisms implicitly assume a flawless communication channel in order to exchange rewards. These rewards are directly integrated into the learning process without any chance to respond with feedback. Furthermore, most PI approaches rely on global information, which limits scalability and applicability to real-world scenarios where only local information is accessible. In this paper, we propose Mutual Acknowledgment Token Exchange (MATE), a PI approach defined by a two-phase communication protocol to mutually exchange acknowledgment tokens to shape individual rewards. Each agent evaluates the monotonic improvement of its individual situation in order to accept or reject acknowledgment requests from other agents. MATE is completely decentralized and only requires local communication and information. We evaluate MATE in three social dilemma domains. Our results show that MATE is able to achieve and maintain significantly higher levels of cooperation than previous PI approaches. In addition, we evaluate the robustness of MATE in more realistic scenarios, where agents can defect from the protocol and communication failures can occur. We also evaluate the sensitivity of MATE w.r.t. the choice of token values.
Y1 - 2022
UR - https://doi.org/10.21203/rs.3.rs-2315844/v1
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-45471
SN - 2693-5015
PB - Research Square
CY - Durham
ER -