TY - JOUR
A1 - Pokutta, Sebastian
A1 - Spiegel, Christoph
A1 - Zimmer, Max
T1 - Deep Neural Network Training with Frank-Wolfe
N2 - This paper studies the empirical efficacy and benefits of using projection-free first-order methods in the form of Conditional Gradients, a.k.a. Frank-Wolfe methods, for training Neural Networks with constrained parameters. We draw comparisons both to current state-of-the-art stochastic Gradient Descent methods as well as across different variants of stochastic Conditional Gradients. In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants. We then show that, by choosing an appropriate region, one can achieve performance exceeding that of unconstrained stochastic Gradient Descent and matching state-of-the-art results relying on L2-regularization. Lastly, we also demonstrate that, besides impacting performance, the particular choice of constraints can have a drastic impact on the learned representations.
Y1 - 2020
ER -

TY - CHAP
A1 - Zimmer, Max
A1 - Spiegel, Christoph
A1 - Pokutta, Sebastian
T1 - How I Learned to Stop Worrying and Love Retraining
T2 - Proceedings of International Conference on Learning Representations
Y1 - 2023
ER -

TY - CHAP
A1 - Zimmer, Max
A1 - Spiegel, Christoph
A1 - Pokutta, Sebastian
T1 - Sparse Model Soups
T2 - Proceedings of International Conference on Learning Representations
Y1 - 2024
ER -

TY - CHAP
A1 - Wäldchen, Stephan
A1 - Sharma, Kartikey
A1 - Turan, Berkant
A1 - Zimmer, Max
A1 - Pokutta, Sebastian
T1 - Interpretability Guarantees with Merlin-Arthur Classifiers
T2 - Proceedings of International Conference on Artificial Intelligence and Statistics
N2 - We propose an interactive multi-agent classifier that provides provable interpretability guarantees even for complex agents such as neural networks. These guarantees consist of lower bounds on the mutual information between selected features and the classification decision. Our results are inspired by the Merlin-Arthur protocol from Interactive Proof Systems and express these bounds in terms of measurable metrics such as soundness and completeness. Compared to existing interactive setups, we rely neither on optimal agents nor on the assumption that features are distributed independently. Instead, we use the relative strength of the agents as well as the new concept of Asymmetric Feature Correlation which captures the precise kind of correlations that make interpretability guarantees difficult. We evaluate our results on two small-scale datasets where high mutual information can be verified explicitly.
Y1 - 2024
ER -

TY - CHAP
A1 - Mundinger, Konrad
A1 - Pokutta, Sebastian
A1 - Spiegel, Christoph
A1 - Zimmer, Max
T1 - Extending the Continuum of Six-Colorings
T2 - Proceedings of Discrete Mathematics Days
Y1 - 2024
ER -

TY - JOUR
A1 - Mundinger, Konrad
A1 - Pokutta, Sebastian
A1 - Spiegel, Christoph
A1 - Zimmer, Max
T1 - Extending the Continuum of Six-Colorings
JF - Geombinatorics Quarterly
Y1 - 2024
ER -

TY - CHAP
A1 - Pauls, Jan
A1 - Zimmer, Max
A1 - Kelly, Una M
A1 - Schwartz, Martin
A1 - Saatchi, Sassan
A1 - Ciais, Philippe
A1 - Pokutta, Sebastian
A1 - Brandt, Martin
A1 - Gieseke, Fabian
T1 - Estimating canopy height at scale
T2 - Proceedings of International Conference on Machine Learning
Y1 - 2024
ER -