Mathematics is often considered the ideal domain for measuring AI progress. Math's step-by-step logic is easy to trace, and its definitive, routinely verifiable solutions remove any human or subjective element. But AI systems are improving at such a pace that math benchmarks are struggling to keep up.
Way back in November 2024, the non-profit research organization Epoch AI quietly launched Frontier Math, a standardized, rigorous benchmark designed to measure the mathematical reasoning capabilities of the latest AI tools.
"It's a bunch of really hard math problems," explains Greg Burnham, a senior researcher at Epoch AI. "Initially, it was 300 problems that we now call tiers 1–3, but having seen AI capabilities really speed up, there was a sense that we had to run to stay ahead, so now there's a special challenge set of extra carefully constructed problems that we call tier 4."
To a rough approximation, tiers 1–4 range from advanced undergraduate through early-postdoc-level mathematics. When launched, state-of-the-art AI models were unable to solve more than 2% of the problems Frontier Math contained. Fast forward to today, and the best publicly available AI models, such as ChatGPT 5.2 Pro and Claude Opus 4.6, are solving over 40% of Frontier Math's 300 tier 1–3 problems and over 30% of the 50 tier 4 problems.
AI takes on PhD-level mathematics
And this dizzying pace of progress shows no signs of abating. Just recently, for example, Google DeepMind announced that Aletheia, an experimental AI system derived from Gemini Deep Think, achieved publishable PhD-level research results. Though mathematically obscure (the system computed certain structure constants in arithmetic geometry known as eigenweights), the result is significant in terms of AI development.
"They're claiming it was essentially autonomous, meaning a human wasn't guiding the work, and it's publishable," Burnham says. "It's definitely at the lower end of the spectrum of work that gets a mathematician excited, but it's new; it's something we really haven't seen before."
To put this achievement in context, every Frontier Math problem has a known answer that a human has derived. Though a human could probably have achieved Aletheia's result "if they sat down and steeled themselves for a week," says Burnham, no human had ever done so.
Aletheia's results and other recent achievements by AI mathematicians point to the need for new, harder benchmarks to understand AI capabilities, and fast, because existing ones will soon be irrelevant. "There are easier math benchmarks that are already obsolete, several generations of them," says Burnham. "Frontier Math will probably saturate [meaning state-of-the-art AI models score 100%] within the next two years; could be sooner."
The First Proof challenge
To begin to address this problem, on February 6, a group of 11 highly distinguished mathematicians proposed the First Proof challenge, a set of 10 extremely difficult math questions that arose naturally in the authors' own research, whose proofs run roughly five pages or less, and that had not been shared with anyone. First Proof was a preliminary effort to assess the capability of AI systems to solve research-level math questions on their own.
Generating serious buzz in the math community, professional and amateur mathematicians, as well as teams including OpenAI, all stepped up to the challenge. But by the time the authors posted the proofs on February 14, no one had submitted correct solutions to all 10 problems.
In fact, far from it. The authors themselves solved only two of the 10 problems using Gemini 3.0 Deep Think and ChatGPT 5.2 Pro, and most external submissions fared little better, apart from OpenAI's. With "limited human supervision," OpenAI's most advanced internal AI system solved five of the 10 problems, a result met with a spectrum of emotions across the mathematics community, from awe to disappointment. The team behind First Proof plans an even harder second round on March 14.
A new frontier for AI
"I think First Proof is terrific: it's as close as you can realistically get to putting an AI system in the shoes of a mathematician," says Burnham. Though he admires how First Proof tests AI's mathematical utility across a wide range of mathematics and mathematicians, Epoch AI has its own new approach to testing: Frontier Math: Open Problems. Uniquely, the pilot benchmark consists of 14 open problems (with more to follow) from research mathematics that professional mathematicians have tried and failed to solve. Since Open Problems' release on January 27, none has been solved by an AI.
"With Open Problems, we've tried to make it harder," says Burnham. "The baseline on its own would be publishable, at least in a specialty journal." What's more, each question is designed so that it can be automatically graded. "This is a bit counterintuitive," Burnham adds. "No one knows the answers, but we have a computer program that will be able to judge whether the answer is right or not."
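This "unknown answer, checkable answer" property is less paradoxical than it sounds, and a classic open question illustrates it. The sketch below is not one of Epoch AI's actual Open Problems tasks; it uses the famous open question of whether an odd perfect number exists. No one knows a solution, yet any proposed answer can be verified mechanically, which is exactly the shape an auto-gradable open problem needs.

```python
# Illustration only (not an actual Open Problems task): existence of an odd
# perfect number is a long-standing open question, yet any candidate answer
# is trivial to check by machine -- the property that makes a benchmark of
# open problems automatically gradable.

def is_odd_perfect(n: int) -> bool:
    """Return True if n is odd and equals the sum of its proper divisors."""
    if n < 2 or n % 2 == 0:
        return False
    divisor_sum = 1  # 1 is a proper divisor of every n > 1
    d = 2
    while d * d <= n:  # enumerate divisor pairs (d, n // d)
        if n % d == 0:
            divisor_sum += d
            other = n // d
            if other != d:
                divisor_sum += other
        d += 1
    return divisor_sum == n

# The grader rejects even perfect numbers and odd non-perfect ones; a True
# result for any input would settle the open question.
print(is_odd_perfect(28))   # 28 is perfect but even -> False
print(is_odd_perfect(945))  # odd but not perfect -> False
```

A real Open Problems grader is presumably far more sophisticated, but the principle is the same: the checking procedure is fixed in advance, so a correct submission can be recognized without anyone knowing the answer beforehand.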
Burnham sees First Proof and Open Problems as complementary. "I would say understanding AI capabilities is a more-the-merrier situation," he adds. "AI has gotten to the point where it's, in some ways, better than most PhD students, so we need to pose problems where the answer would be at least moderately interesting to some human mathematicians, not because AI was doing it, but because it's mathematics that human mathematicians care about."
