
How to define artificial general intelligence

The idea that machines can outsmart humans has long been the stuff of science fiction. Rapid improvements in artificial-intelligence (AI) programs over the past decade have led some experts to conclude that science fiction could soon become fact. On March 19th Jensen Huang, the CEO of Nvidia, the world's largest chipmaker and the third-most-valuable publicly traded company, said he believed that current models could reach the point of so-called artificial general intelligence (AGI) within five years. What exactly is AGI, and how can we judge when it has arrived?

Mr Huang's words should be taken with a grain of salt: Nvidia's profits have soared on the back of booming demand for the chips used to train AI models, so talking up AI is good for business. But he did offer a clear definition of what would constitute AGI: a program that performs 8% better than most people on certain tests, such as bar exams for lawyers or logic quizzes.

This proposal is the latest in a long line of definitions. In the 1950s Alan Turing, a British mathematician, argued that conversing with a machine that had achieved human-level intelligence would be indistinguishable from conversing with a human. The most advanced large language models have probably already passed the Turing test. But in recent years technology leaders have moved the goalposts by proposing a host of new definitions. Mustafa Suleyman, a co-founder of DeepMind, an AI research firm, and the CEO of a newly created AI division within Microsoft, believes that what he calls "artificial capable intelligence", a kind of "modern Turing test", will be achieved when a model can take $100,000 and turn it into $1m without instruction. (Mr Suleyman is a board member of The Economist's parent company.) Steve Wozniak, Apple's co-founder, has a more prosaic vision of AGI: a machine that can walk into an average home and make a cup of coffee.

Some researchers reject the concept of AGI altogether. Mike Cook of King's College London says the term has no scientific basis and means different things to different people. No definition of AGI commands consensus, admits Harry Law of the University of Cambridge, but most rest on the idea of a model that can outperform humans at most tasks, whether making coffee or making millions. In January researchers at DeepMind proposed six levels of AGI, ranked by the share of skilled adults that a model can outperform. By their reckoning the technology has reached only the lowest level, with current AI tools performing on a par with, or slightly better than, an unskilled human.

The question of what happens once AGI is reached preoccupies some researchers. Eliezer Yudkowsky, a computer scientist who has been worrying about AI for two decades, fears that by the time people realise models have become sentient it will be too late to stop them, and that humans could end up enslaved. Few researchers share his view, however. Most believe that AI models simply follow instructions from humans, if sometimes badly.

There may be no consensus among academics or businesspeople on what constitutes AGI, but a definition could soon be settled in court. In a lawsuit filed in February against OpenAI, a company he co-founded, Elon Musk asked a California court to decide whether the firm's GPT-4 model shows signs of AGI. If it does, Mr Musk claims, then OpenAI has violated its founding principle of licensing only pre-AGI technology. The company denies having done so. Through his lawyers, Mr Musk is seeking a jury trial. Should his wish be granted, a handful of non-experts would settle a question that has vexed AI experts for decades.

Editor's note: This piece has been updated to clarify Mustafa Suleyman's concept.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com
