TY  - CONF
A1  - Kleinsorge, Alexander
A1  - Fauck, Alexander
A1  - Kupper, Stefan
A2  - Reiff-Stephan, Jörg
A2  - Beuster, Anja
T1  - A Novel Exponential Continuous Learning Rate Adaption Gradient Descent Optimization Method
TI  - Wildauer Konferenz für Künstliche Intelligenz 2025 (WiKKI25)
N2  - We present two novel, fast gradient-based optimizer algorithms with a dynamic learning rate. The main idea is to adapt the learning rate α through situational awareness, chiefly by striving for orthogonal neighboring gradients. The method achieves a high success rate and fast convergence, and relies far less on hand-tuned hyperparameters, giving it greater universality. It scales linearly (O(n)) with the dimension and is rotation invariant, thereby overcoming known limitations. The method is presented in two variants, C2Min and P2Min, with slightly different control. Their strong performance is demonstrated by experiments on several benchmark datasets (ranging from MNIST to Tiny ImageNet) against the state-of-the-art optimizers Adam and Lion.
T3  - TH Wildau Engineering and Natural Sciences Proceedings - 2
KW  - neural network
KW  - optimizer
KW  - training
Y1  - 2025
UR  - https://opus4.kobv.de/opus4-th-wildau/frontdoor/index/index/docId/2078
UR  - https://nbn-resolving.org/urn:nbn:de:kobv:526-opus4-20785
PB  - TIB Open Publishing
CY  - Hannover
ER  - 