Institute of Information Theory and Automation

Limitations of Shallow Neural Networks

Date: 
2017-03-10 10:00
Room: 
Name of External Lecturer: 
Dr. Věra Kůrková
Affiliation of External Lecturer: 
Institute of Computer Science, Czech Academy of Sciences
Recent successes of deep networks pose a theoretical question: When are deep networks provably better than shallow ones? Using probabilistic and geometric properties of high-dimensional spaces, we will show that, for most common types of computational units, almost any uniformly randomly chosen function on a sufficiently large domain cannot be computed by a reasonably sparse shallow network. We will also discuss connections with the No Free Lunch Theorem, the central paradox of coding theory, and pseudo-noise sequences.