TY - JOUR
A1 - Mayer, Sebastian
A1 - Classen, Tobias
A1 - Endisch, Christian
T1 - Modular production control using deep reinforcement learning: proximal policy optimization
JF - Journal of Intelligent Manufacturing
N2 - EU regulations on CO2 limits and the trend towards individualization are pushing the automotive industry towards greater flexibility and robustness in production. One approach to address these challenges is modular production, where workstations are decoupled by automated guided vehicles, requiring new control concepts. Modular production control aims at throughput-optimal coordination of products, workstations, and vehicles. For this NP-hard problem, conventional control approaches lack computing efficiency, do not find optimal solutions, or are not generalizable. In contrast, deep reinforcement learning offers powerful and generalizable algorithms that can deal with varying environments and high complexity. One of these algorithms is Proximal Policy Optimization, which is used in this article to address modular production control. Experiments in several modular production control settings demonstrate stable, reliable, optimal, and generalizable learning behavior. The agent successfully adapts its strategies to the given problem configuration. We explain how this learning behavior is achieved, focusing in particular on the agent’s action, state, and reward design.
KW - modular production
KW - production control
KW - production scheduling
KW - deep reinforcement learning
KW - proximal policy optimization
KW - automotive industry
Y1 - 2021
UR - https://doi.org/10.1007/s10845-021-01778-z
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-13092
SN - 1572-8145
VL - 32
IS - 8
SP - 2335
EP - 2351
PB - Springer Nature
CY - Cham
ER -