
Multi-Agent Neural Rewriter for Vehicle Routing with Limited Disclosure of Costs

  • We interpret solving the multi-vehicle routing problem as a team Markov game with partially observable costs. For a given set of customers to serve, the playing agents (vehicles) have the common goal of determining the team-optimal agent routes with minimal total cost. Each agent thereby observes only its own cost. Our multi-agent reinforcement learning approach, the so-called multi-agent Neural Rewriter, builds on the single-agent Neural Rewriter to solve the problem by iteratively rewriting solutions. Parallel agent action execution and partial observability require new rewriting rules for the game. We propose the introduction of a so-called pool in the system, which serves as a collection point for unvisited nodes. It enables agents to act simultaneously and exchange nodes in a conflict-free manner. We realize limited disclosure of agent-specific costs by sharing them only during learning. During inference, each agent acts decentrally, solely based on its own cost. First empirical results on small problem sizes demonstrate that we reach a performance close to the employed OR-Tools benchmark, which operates in the perfect cost information setting.
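The pool-based rewriting idea from the abstract can be made concrete with a small sketch. The following Python snippet is only an illustration under our own assumptions, not the authors' implementation: there is no learned policy, the rewriting moves are chosen at random, and the node coordinates, cost function, and names (route_cost, rewrite_step) are hypothetical. It shows how vehicles can act simultaneously, and conflict-free, by pushing nodes to and pulling nodes from a shared pool while each vehicle evaluates only its own route cost; the total cost computed at the end is the quantity that, in the paper's setting, would be shared only during learning.

    # Illustrative sketch only (not the paper's method): random pool-based rewriting
    # of vehicle routes, with each vehicle observing only its own route cost.
    import math
    import random

    def route_cost(route, depot=(0.0, 0.0)):
        """Tour length depot -> route -> depot (an agent-local observation)."""
        points = [depot] + route + [depot]
        return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

    def rewrite_step(routes, pool, rng):
        """All agents act at once: each either pushes one of its nodes to the
        shared pool or pulls a node from it, so no two agents touch the same node."""
        pulling = []
        for route in routes:
            if route and rng.random() < 0.5:
                pool.append(route.pop(rng.randrange(len(route))))  # give a node away
            else:
                pulling.append(route)                              # request a node
        rng.shuffle(pool)
        for route in pulling:
            if pool:
                route.insert(rng.randrange(len(route) + 1), pool.pop())
        return routes, pool

    if __name__ == "__main__":
        rng = random.Random(0)
        customers = [(rng.random(), rng.random()) for _ in range(10)]
        # Naive initial split of customers over two vehicles; the pool starts empty.
        routes, pool = [customers[:5], customers[5:]], []
        best = sum(route_cost(r) for r in routes)  # total team cost (book-keeping here)
        for _ in range(200):
            routes, pool = rewrite_step(routes, pool, rng)
            if not pool:  # only score solutions in which every customer is on a route
                best = min(best, sum(route_cost(r) for r in routes))
        print(f"best total cost found: {best:.3f}")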

Download full-text files

  • Poster__MANR_final.pdf
    eng

Metadata
Author: Alexander Kister
Co-authors: S. Wrobel, T. Wirtz, N. Paul
Document type: Poster presentation
Publication form: Presentation
Language: English
Year of first publication: 2022
BAM organizational unit: VP Vizepräsident
VP Vizepräsident / VP.1 eScience
DDC classification: Natural sciences and mathematics / Chemistry / Analytical chemistry
Free keywords: Deep Learning; Reinforcement learning; Vehicle Routing
BAM topic/activity fields: Chemistry and Process Engineering
Event: Gamification and Multiagent Solutions Workshop (ICLR 2022)
Event location: Online meeting
Event start date: 29.04.2022
Associated identifier: https://arxiv.org/abs/2206.05990
Document availability: File available within the BAM network ("Closed Access")
Release date: 22.12.2022
Refereed publication: No