TY - JOUR
A1 - Bienstock, Daniel
A1 - Muñoz, Gonzalo
A1 - Pokutta, Sebastian
T1 - Principled Deep Neural Network Training through Linear Programming
N2 - Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks. Unfortunately, while very powerful, Deep Learning is not well understood theoretically, and in particular only recently have results on the complexity of training deep neural networks been obtained. In this work we show that large classes of deep neural networks with various architectures (e.g., DNNs, CNNs, Binary Neural Networks, and ResNets), activation functions (e.g., ReLUs and leaky ReLUs), and loss functions (e.g., Hinge loss, Euclidean loss, etc.) can be trained to near optimality with desired target accuracy using linear programming, in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence on the input dimension are known to be unlikely assuming P≠NP, and improving the dependence on the parameter space dimension remains open. In particular, we obtain polynomial-time algorithms for training for a given fixed network architecture. Our work applies more broadly to empirical risk minimization problems, which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting.
Y1 - 2018
ER -

TY - JOUR
A1 - Faenza, Yuri
A1 - Muñoz, Gonzalo
A1 - Pokutta, Sebastian
T1 - New Limits of Treewidth-Based Tractability in Optimization
JF - Mathematical Programming
Y1 - 2020
U6 - https://doi.org/10.1007/s10107-020-01563-5
N1 - URL of the PDF: http://link.springer.com/article/10.1007/s10107-020-01563-5
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2018/09/22/treewidth-abstract.html
VL - 191
SP - 559
EP - 594
ER -