Questions about Artificial general intelligence

Short answers, drawn from the source article.

What is artificial general intelligence and how does it differ from narrow AI?

Artificial general intelligence is a hypothetical type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks. Unlike narrow AI systems, which are confined to well-defined tasks, an AGI would generalize knowledge and transfer skills between domains without task-specific reprogramming.

When did modern AI research begin and what were early predictions about artificial general intelligence?

Modern AI research began in the mid-1950s when researchers believed artificial general intelligence was possible within just a few decades. Herbert A. Simon wrote in 1965 that machines would be capable of doing any work a man could do within twenty years.

How has the Turing Test been used to evaluate progress toward artificial general intelligence?

The Turing Test involves a human judge holding natural language conversations with both a human and a machine designed to generate human-like responses. In a 2025 pre-registered study, GPT-4.5 was judged to be human in 73% of five-minute text conversations, surpassing the 67% rate at which the real human confederates were judged to be human.

What are the expert estimates for when artificial general intelligence might arrive as of 2023?

Four polls conducted in 2012 and 2013 found that the median expert estimate for when there would be a 50% chance of AGI arriving fell between 2040 and 2050, depending on the poll, with a mean of 2081. Demis Hassabis said in May 2023 that he expects AGI within a decade, or possibly even a few years.

What is the difference between strong AI and weak AI according to philosophical definitions?

John Searle coined the term "strong AI" in 1980. The strong AI hypothesis holds that an artificial intelligence system can have a mind and consciousness, while the weak AI hypothesis holds only that such a system can act as if it thinks and has a mind and consciousness. Mainstream AI research focuses on how a program behaves rather than on whether it actually has a mind: if the program can behave as if it has a mind, there is no need to determine whether it actually does.