Questions about AlexNet

Short answers, pulled from the story.

Who trained the AlexNet neural network in 2012?

Alex Krizhevsky trained the AlexNet neural network in 2012, working from a bedroom in his parents' house. He used two Nvidia GTX 580 graphics cards to handle the model's 60 million parameters.

What hardware specifications did the original AlexNet system use?

The AlexNet system ran on two Nvidia GTX 580 graphics cards, each with 3 GB of video memory and a launch price of US$500. The team split the eight-layer architecture across both cards because a single GPU could not hold the 60-million-parameter network during training.
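The idea behind the split is model parallelism: each GPU holds only part of a layer's weights and computes only its share of the outputs. A minimal numpy sketch (with hypothetical layer sizes, not the real AlexNet dimensions) showing that splitting a fully connected layer's output units across two simulated "devices" reproduces the single-device result:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # input activations (hypothetical size)
W = rng.standard_normal((512, 256))   # weight matrix for one layer

# Single-device forward pass.
full = W @ x

# Model parallelism: split the output units across two "GPUs".
W_gpu0, W_gpu1 = W[:256], W[256:]     # each device holds half the weights
part0 = W_gpu0 @ x                    # computed on device 0
part1 = W_gpu1 @ x                    # computed on device 1
combined = np.concatenate([part0, part1])

assert np.allclose(full, combined)    # same result, half the weights per device
```

Each device stores half the weights, which is what made the 3 GB cards workable for a network too large to train on one of them.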

When did the SuperVision team submit their entry to the ImageNet Large Scale Visual Recognition Challenge?

The SuperVision team submitted their entry to the ImageNet Large Scale Visual Recognition Challenge on 30 September 2012. Their final system was an ensemble of seven AlexNet models whose averaged predictions achieved a top-5 error rate of 15.3 percent.
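The top-5 error rate counts an example as correct if the true label appears anywhere among the model's five highest-scoring classes. A small sketch of the metric (the toy scores below are illustrative, not real model outputs):

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is NOT among the 5 highest scores."""
    top5 = np.argsort(scores, axis=1)[:, -5:]        # indices of the 5 best classes
    hits = (top5 == labels[:, None]).any(axis=1)     # true label in the top 5?
    return 1.0 - hits.mean()

# Toy case: 2 examples, 10 classes, scores rising with class index,
# so the top-5 classes are always 5..9.
scores = np.tile(np.arange(10.0), (2, 1))
labels = np.array([9, 0])      # first label is in the top 5, second is not
print(top5_error(scores, labels))  # -> 0.5
```

With 1,000 ImageNet classes, top-5 error is a more forgiving measure than top-1, which is why the 15.3 percent figure is quoted in that form.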

How many citations has the original AlexNet paper received as of early 2025?

As of early 2025, the original AlexNet paper has been cited more than 184,000 times according to Google Scholar. Following this milestone, subsequent research focused on training ever deeper CNNs to reach higher performance on benchmarks.

Which earlier neural network concepts influenced the development of AlexNet?

Kunihiko Fukushima proposed the neocognitron in 1980 as an early precursor of convolutional neural networks. Yann LeCun developed LeNet in 1989, training it with supervised learning and backpropagation, and max pooling appeared in speech processing in 1990.
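Of these precursors, max pooling is simple enough to show directly: it downsamples a feature map by keeping only the largest value in each small window. A minimal numpy sketch of 2x2 max pooling with stride 2 (the toy feature map is illustrative):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling, stride 2, on an (H, W) feature map with even H and W."""
    h, w = x.shape
    # Reshape so each 2x2 window occupies its own axes, then reduce with max.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([
    [1, 2, 0, 1],
    [3, 4, 1, 0],
    [0, 1, 5, 6],
    [2, 1, 7, 8],
])
print(max_pool_2x2(fmap))  # -> [[4, 1], [2, 8]]
```

AlexNet used overlapping 3x3 pooling windows with stride 2 rather than this non-overlapping variant, but the max-over-a-window operation is the same.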