OpenAI recently unveiled a five-tier system to assess its progress toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke to Bloomberg. The company shared this new classification system with employees on Tuesday during an all-hands meeting, aiming to provide a clear framework for understanding AI progress. However, the system describes hypothetical technology that does not yet exist and is perhaps best interpreted as a marketing ploy to raise investment dollars.
OpenAI has previously stated that AGI, a nebulous term for a hypothetical AI system that could perform novel tasks like a human without specialized training, is currently the company's primary goal. The pursuit of technology that could replace humans at most intellectual work drives much of the sustained hype around the firm, even though such technology would likely be hugely disruptive to society.
OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and much of his public messaging has concerned how the company (and society at large) might handle the disruption that AGI could bring. Along those lines, a ranking system for communicating internally which AI milestones the company has reached on the road to AGI makes sense.
OpenAI's five levels, which it plans to share with investors, range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology (such as the GPT-4o model that powers ChatGPT) currently sits at Level 1, which covers AI that can engage in conversational interactions. However, OpenAI executives reportedly told staff that the company is on the verge of reaching Level 2, called "Reasoners."
Bloomberg lists OpenAI's five "Stages of Artificial Intelligence" as follows:
- Level 1: Chatbots, AI with conversational language
- Level 2: Reasoners, human-level problem solving
- Level 3: Agents, systems that can take action
- Level 4: Innovators, AI that can help invent
- Level 5: Organizations, AI that can do the work of an organization
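For illustration only, here is a minimal sketch of how the reported taxonomy might be encoded in code. The level names and one-line descriptions come from Bloomberg's list above; the `AGILevel` enum itself and its usage are hypothetical and do not correspond to any official OpenAI API or data structure:

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Hypothetical encoding of OpenAI's reported five-tier scale.

    Names and descriptions are taken from Bloomberg's report; this
    enum is an illustrative sketch, not an official OpenAI artifact.
    """
    CHATBOTS = 1       # AI with conversational language
    REASONERS = 2      # human-level problem solving
    AGENTS = 3         # systems that can take actions
    INNOVATORS = 4     # AI that can aid invention
    ORGANIZATIONS = 5  # AI that can do the work of an organization

# Per the report, OpenAI places its current technology (e.g., GPT-4o)
# at Level 1, while claiming it is on the verge of Level 2.
current = AGILevel.CHATBOTS
print(f"{current.name}: Level {current.value}")
print(current < AGILevel.REASONERS)  # True: Level 2 not yet reached
```

An ordered `IntEnum` fits here because the report frames the levels as a strict progression, so comparisons like `current < AGILevel.REASONERS` read naturally.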
A Level 2 AI system is said to be capable of basic problem-solving on par with a human who has a PhD-level education but no access to external tools. During the all-hands meeting, OpenAI leadership reportedly gave a demonstration of a research project using its GPT-4 model that the company's researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.
OpenAI’s higher classification levels describe increasingly powerful hypothetical AI capabilities. Level 3 “Agents” could work autonomously on tasks for days. Level 4 systems would generate new innovations. The pinnacle, Level 5, envisions AI managing entire organizations.
This classification system is still a work in progress. OpenAI plans to collect feedback from employees, investors and board members, potentially improving the levels over time.
Ars Technica asked OpenAI about the ranking system and the accuracy of Bloomberg’s report, and a company spokesperson said they had “nothing to add.”
The problem with ranking AI capabilities
OpenAI is not alone in trying to measure AI skill levels. As Bloomberg notes, OpenAI’s system feels similar to the levels of autonomous driving defined by automakers. And in November 2023, researchers at Google DeepMind proposed their five-level framework for evaluating AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don’t yet exist.
OpenAI's classification system is somewhat similar to Anthropic's "AI Safety Levels" (ASLs), first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities.
However, any AI classification system raises questions about whether it is possible to meaningfully quantify AI progress and what constitutes advancement (or even what constitutes a "dangerous" AI system, as in Anthropic's case). The tech industry has a history of overpromising AI capabilities, and linear progression models like OpenAI's risk fueling unrealistic expectations.
There is currently no consensus in the AI research community on how to measure progress toward AGI, or even whether AGI is a well-defined or achievable goal. As such, OpenAI's five-tier system is perhaps best viewed as a communications tool meant to entice investors, one that shows the company's aspirational goals rather than a scientific or even technical measurement of progress.