TY - CONF
A1 - Chaukair, Mustafa
A1 - Schütte, Christof
A1 - Sunkara, Vikram
T1 - On the Activation Space of ReLU Equipped Deep Neural Networks
T2 - Procedia Computer Science
N2 - Modern deep neural networks are getting wider and deeper in their architecture design. However, with an increasing number of parameters, the decision mechanisms become more opaque. Therefore, there is a need to understand the structures arising in the hidden layers of deep neural networks. In this work, we present a new mathematical framework for describing the canonical polyhedral decomposition of the input space and, in addition, introduce the notions of collapsing and preserving patches, pertinent to understanding the forward map and the activation space they induce. The activation space can be seen as the output of a layer and, in the particular case of ReLU activations, we prove that this output has the structure of a polyhedral complex.
Y1 - 2023
UR - https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/9064
VL - 222
SP - 624
EP - 635
ER -