Questions about the universal approximation theorem

Short answers, drawn from the article.

What did George Cybenko prove in 1989 about feedforward neural networks?

George Cybenko proved in 1989 that feedforward neural networks with a single hidden layer of sigmoid units can approximate any continuous function on a compact set to arbitrary accuracy. His technical report established the property now known as universality for these systems.
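Cybenko's theorem is existential, but the phenomenon is easy to observe numerically. Below is a minimal sketch, not Cybenko's construction: hidden sigmoid weights are drawn at random and only the output weights are fit by least squares, with the target function, interval, and widths all chosen purely for illustration. The maximum error should shrink as the hidden layer widens.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_one_hidden_layer(x, y, width):
    # Hidden weights and biases are random; only the output-layer
    # weights are trained, by ordinary least squares.
    w = rng.normal(scale=5.0, size=width)
    b = rng.uniform(-5.0, 5.0, size=width)
    h = sigmoid(np.outer(x, w) + b)            # (n_points, width) activations
    c, *_ = np.linalg.lstsq(h, y, rcond=None)  # output weights
    return h @ c

x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)  # an arbitrary continuous target on a compact interval

errors = {}
for width in (2, 10, 50):
    errors[width] = float(np.max(np.abs(fit_one_hidden_layer(x, y, width) - y)))
    print(f"width={width:3d}  max |error| = {errors[width]:.4f}")
```

Freezing the hidden layer keeps the demo linear-algebraic and deterministic; a trained network would do at least as well, but the point is only that widening the single hidden layer drives the approximation error down.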

When did Maxwell Stinchcombe and Halbert White extend the universal approximation theorem?

Maxwell Stinchcombe and Halbert White extended the universal approximation theorem in 1989, the same year as Cybenko's proof. Their work applied the principle to multilayer feed-forward networks.

How does network width affect the ability to achieve universal approximation according to Zhou Lu?

Zhou Lu showed that ReLU networks of width n + 4, where n is the input dimension, could approximate any Lebesgue-integrable function if depth grew sufficiently. If width was less than or equal to n, this expressive power was lost entirely.

Who determined the optimal minimum width bound for universal approximation in 2023?

Cai determined the optimal minimum width bound for universal approximation in 2023. This result pins down the smallest layer width for which networks can approximate any target function to arbitrary accuracy under the relevant distance metric.

What extensions of the universal approximation theorem exist for graph neural networks?

Brüel-Gabrielsson established a universal approximation result for functions on graphs in 2020, showing that certain injective properties are sufficient. Graph convolutional neural networks can be made as discriminative as the Weisfeiler-Leman graph isomorphism test.
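The Weisfeiler-Leman test mentioned above is easy to sketch. Below is a minimal, illustrative implementation of 1-dimensional WL color refinement (graph encodings and example graphs are my own choices, not from the cited work): each round recolors every node by its current color plus the multiset of its neighbors' colors. A path and a star are separated immediately, while a 6-cycle and two disjoint triangles, both 2-regular, never leave the uniform coloring, the classic limitation that bounds the discriminative power of message-passing graph networks.

```python
def wl_colors(adj, rounds=3):
    # 1-WL color refinement on an adjacency-list graph; returns the sorted
    # final color multiset (a simple sketch, not a production isomorphism test).
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        # A node's new color encodes its own color and the multiset
        # of its neighbors' colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return sorted(colors.values())

# A 4-node path vs. a 4-node star: different degree patterns, so
# refinement separates them.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}

# A 6-cycle vs. two disjoint triangles: both 2-regular, so the coloring
# stays uniform and 1-WL cannot tell them apart.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(path) != wl_colors(star))      # prints True (distinguished)
print(wl_colors(c6) == wl_colors(triangles))   # prints True (not distinguished)
```

Saying a graph network is "as discriminative as" this test means it maps two graphs to different outputs only when refinement like the above would also separate them.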