CORDIS - EU research results

A Theory for Understanding, Designing, and Training Deep Learning Systems

CORDIS provides links to the public documents and publications of projects funded under the HORIZON framework programmes.

Links to documents and publications of Seventh Framework Programme projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.
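The dynamic OpenAIRE lookup mentioned above can also be reproduced programmatically. Below is a minimal sketch, assuming the public OpenAIRE search API at https://api.openaire.eu/search/publications and its projectID/format/size query parameters; the endpoint, parameter names, and response envelope are assumptions that should be checked against the current OpenAIRE documentation, and <grant-number> is a placeholder for the project's grant agreement number, which is not given on this page.

```python
# Hypothetical sketch: querying the public OpenAIRE search API for the
# publications linked to a project. Endpoint, parameters, and response
# structure are assumptions based on OpenAIRE's legacy search API.
import json
import urllib.parse
import urllib.request

OPENAIRE_SEARCH = "https://api.openaire.eu/search/publications"

def fetch_publications(grant_id: str, page_size: int = 50) -> list:
    """Fetch publication records for a given grant identifier."""
    params = urllib.parse.urlencode({
        "projectID": grant_id,  # grant agreement number (placeholder)
        "format": "json",
        "size": page_size,
    })
    with urllib.request.urlopen(f"{OPENAIRE_SEARCH}?{params}") as resp:
        payload = json.load(resp)
    # Unwrap the (assumed) legacy API response envelope, tolerating
    # missing keys when a project has no linked publications.
    results = (payload.get("response") or {}).get("results") or {}
    return results.get("result") or []

if __name__ == "__main__":
    for record in fetch_publications("<grant-number>"):
        print(json.dumps(record)[:120])  # preview each record
```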

Publications

Neural Networks with Small Weights and Depth-Separation Barriers

Authors: Gal Vardi and Ohad Shamir
Published in: 2020
Publisher: NeurIPS 2020

On Margin Maximization in Linear and ReLU Networks

Authors: Gal Vardi, Gilad Yehudai, Ohad Shamir
Published in: 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Issue 36, 2022
Publisher: NeurIPS

Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks

Authors: R Magen, O Shamir
Published in: 2023
Publisher: Advances in Neural Information Processing Systems

When Models Don't Collapse: On the Consistency of Iterative MLE

Authors: D Barzilai, O Shamir
Published in: 2025
Publisher: NeurIPS 2025

Implicit Regularization Towards Rank Minimization in ReLU Networks

Authors: Nadav Timor, Gal Vardi and Ohad Shamir
Published in: 2023
Publisher: ALT

The Sample Complexity of One-Hidden-Layer Neural Networks

Authors: Gal Vardi, Ohad Shamir and Nathan Srebro
Published in: 2022
Publisher: NeurIPS

Learning a Single Neuron with Bias Using Gradient Descent

Authors: Gal Vardi, Gilad Yehudai, Ohad Shamir
Published in: 2021
Publisher: NeurIPS

Deterministic Nonsmooth Nonconvex Optimization

Authors: M Jordan, G Kornowski, T Lin, O Shamir, M Zampetakis
Published in: 2023
Publisher: Proceedings of COLT 2023

Width is Less Important than Depth in ReLU Neural Networks

Authors: Gal Vardi, Gilad Yehudai and Ohad Shamir
Published in: 2022
Publisher: COLT

Open Problem: Anytime Convergence Rate of Gradient Descent

Authors: G Kornowski, O Shamir
Published in: 2024
Publisher: Proceedings of COLT 2024

Gradient Methods Provably Converge to Non-Robust Networks

Authors: Gal Vardi, Gilad Yehudai and Ohad Shamir
Published in: 2022
Publisher: NeurIPS

Reconstructing Training Data from Trained Neural Networks

Authors: Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
Published in: 2022
Publisher: NeurIPS

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks

Authors: Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
Published in: 2021
Publisher: COLT 2021

The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks

Authors: Itay Safran, Gilad Yehudai and Ohad Shamir
Published in: 2021
Publisher: COLT 2021

Implicit Regularization in ReLU Networks with the Square Loss

Authors: Gal Vardi and Ohad Shamir
Published in: 2021
Publisher: COLT 2021

Oracle Complexity in Nonsmooth Nonconvex Optimization

Authors: Guy Kornowski and Ohad Shamir
Published in: 2021
Publisher: NeurIPS

Depth Separation in Norm-Bounded Infinite-Width Neural Networks

Authors: S Parkinson, G Ongie, R Willett, O Shamir, N Srebro
Published in: 2024
Publisher: Proceedings of COLT 2024

The Implicit Bias of Benign Overfitting

Authors: Ohad Shamir
Published in: 2022
Publisher: COLT

Reconstructing Training Data from Multiclass Neural Networks

Authors: Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Michal Irani
Published in: Workshop on the pitfalls of limited data and computation for Trustworthy ML, ICLR 2023, 2023
Publisher: ICLR

Logarithmic Width Suffices for Robust Memorization

Authors: A Egosi, G Yehudai, O Shamir
Published in: 2025
Publisher: COLT 2025

Are ResNets Provably Better than Linear Predictors?

Authors: Ohad Shamir
Published in: 2018
Publisher: NeurIPS conference

On the Power and Limitations of Random Features for Understanding Neural Networks

Authors: Gilad Yehudai and Ohad Shamir
Published in: 2019
Publisher: NeurIPS conference

Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks

Authors: Ohad Shamir
Published in: 2019
Publisher: COLT conference

Depth Separations in Neural Networks: What is Actually Being Separated?

Authors: Itay Safran, Ronen Eldan, Ohad Shamir
Published in: 2019
Publisher: COLT conference

The Complexity of Making the Gradient Small in Stochastic Convex Optimization

Authors: Dylan Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
Published in: 2019
Publisher: COLT conference

Learning a Single Neuron with Gradient Methods

Authors: Gilad Yehudai and Ohad Shamir
Published in: 2020
Publisher: COLT 2020

How Good is SGD with Random Shuffling?

Authors: Itay Safran and Ohad Shamir
Published in: 2020
Publisher: COLT 2020

Proving the Lottery Ticket Hypothesis: Pruning is All You Need

Authors: Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
Published in: 2020
Publisher: ICML 2020

From Tempered to Benign Overfitting in ReLU Neural Networks

Authors: Guy Kornowski, Gilad Yehudai, Ohad Shamir
Published in: 2023
Publisher: Advances in Neural Information Processing Systems

Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems

Authors: Itay Safran and Ohad Shamir
Published in: 2021
Publisher: NeurIPS

On the Optimal Memorization Power of ReLU Neural Networks

Authors: Gal Vardi, Gilad Yehudai, Ohad Shamir
Published in: ICLR 2022, 2022
Publisher: ICLR

Size and Depth Separation in Approximating Natural Functions with Neural Networks

Authors: Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir
Published in: 2021
Publisher: COLT 2021

Generalization in Kernel Regression Under Realistic Assumptions

Authors: Daniel Barzilai, Ohad Shamir
Published in: 2024
Publisher: Proceedings of ICML 2024

Beyond Benign Overfitting in Nadaraya-Watson Interpolators

Authors: D Barzilai, G Kornowski, O Shamir
Published in: 2025
Publisher: NeurIPS 2025

Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses

Authors: Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv Nikankin, Michal Irani
Published in: 37th Conference on Neural Information Processing Systems (NeurIPS 2023), 2023
Publisher: Neural Information Processing Systems

The Oracle Complexity of Simplex-Based Matrix Games: Linear Separability and Nash Equilibria

Authors: G Kornowski, O Shamir
Published in: 2025
Publisher: COLT 2025

On the Hardness of Meaningful Local Guarantees in Nonsmooth Nonconvex Optimization

Authors: Guy Kornowski, Swati Padmanabhan, Ohad Shamir
Published in: 2024
Publisher: NeurIPS Workshop on Optimization for Machine Learning 2024

Gradient Methods Never Overfit On Separable Data

Authors: Ohad Shamir
Published in: Journal of Machine Learning Research, 2020, ISSN 1533-7928
Publisher: Journal of Machine Learning Research (independent electronic journal)

An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization

Authors: G Kornowski, O Shamir
Published in: Journal of Machine Learning Research, 2024, ISSN 1533-7928
Publisher: Journal of Machine Learning Research

Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Data Manifolds

Authors: Odelia Melamed, Gilad Yehudai, Gal Vardi
Published in: 2023
Publisher: arXiv

Simple Relative Deviation Bounds for Covariance and Gram Matrices

Authors: Daniel Barzilai, Ohad Shamir
Published in: 2024
Publisher: arXiv

Hardness of Learning Fixed Parities with Neural Networks

Authors: Itamar Shoshani, Ohad Shamir
Published in: 2025
Publisher: arXiv

Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?

Authors: Ohad Shamir
Published in: 2020
Publisher: arXiv

On the Complexity of Finding Small Subgradients in Nonsmooth Optimization

Authors: Guy Kornowski and Ohad Shamir
Published in: 2022
Publisher: arXiv
