
Forget AGI—Top AI Models Still Struggle With Math

2026/03/18 20:42
4 min read

In brief

  • MathVista, built with more than 6,000 annotated datapoints from Sahara AI, tests AI models on multimodal math reasoning.
  • GPT-4V scored 49.9%, the highest result among 12 models tested, but still 10.4 percentage points below human performance.
  • Researchers say progress toward AGI may depend less on model size than on better training and evaluation data.

Artificial general intelligence, or AGI, is often described as a system that can perform across many domains the way humans do. Results released this week from the MathVista benchmark show current models still fall short of that goal.

Researchers from Microsoft Research, Sahara AI, and Emory University tested a capability central to general intelligence: mathematical reasoning grounded in visual information, including charts, graphs, and diagrams.

Across 12 foundation models tested, including ChatGPT, Gemini, and Claude, GPT-4 Vision scored highest at 49.9%. Human participants averaged 60.3%, highlighting a gap between current AI systems and the broader reasoning ability often associated with AGI.

“We want the machine to do things that a normal, average person can do for their daily tasks,” Hao Cheng, Principal Researcher at Microsoft Research, told Decrypt. “That’s basically what everybody is pursuing for AGI.”

By putting problems into images, diagrams, and plots, the project tests whether models can accurately interpret visual information and solve multi-step mathematical and logical problems—skills that go beyond pattern-matching on text alone.

Models still struggle with those tasks, and measuring that limitation is difficult.

When Cheng’s team reviewed existing evaluation datasets, many included problems that did not require visual reasoning. Models often reached correct answers by relying solely on text.

“Which is not ideal,” Cheng said.

MathVista, available on GitHub and Hugging Face, launched in October 2023. Since then, it has been downloaded more than 275,000 times, including more than 13,000 downloads in the past month, according to Microsoft Research.

Creating the dataset required more than standard data labeling. Microsoft Research needed annotators who could work through problems across arithmetic, algebra, geometry, and statistics, while distinguishing deeper mathematical reasoning, such as interpreting graphs or solving equations, from simpler tasks like counting objects or reading numbers.

After a pilot phase, Microsoft selected Sahara AI to support the effort. The company provided trained annotators, custom workflows, and multi-stage quality checks to produce more than 6,000 multimodal examples used in the benchmark.

Without reliable benchmarks, measuring progress toward broader machine intelligence becomes difficult, according to Sean Ren, CEO of Sahara AI and an associate professor of computer science at USC.

“There’s this nuance of data contamination, where once we start using this dataset to test, those results get absorbed into the next version,” Ren told Decrypt. “So you don’t really know if they are solving just a data set, or they have the capability.”

If benchmark answers appear in a model’s training data, high scores can reflect memorization rather than reasoning. That makes it harder to determine whether AI systems are actually improving.
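One common way to probe for this kind of contamination is to check whether long verbatim word sequences from a benchmark question also appear in a model's training corpus. The sketch below illustrates the idea with a simple n-gram overlap test; the function names, the 8-gram length, and the 50% threshold are illustrative assumptions, not the MathVista team's actual procedure.

```python
# Illustrative n-gram overlap check for benchmark contamination.
# Names, n-gram size, and threshold are assumptions for this sketch.

def ngrams(text, n=8):
    """Return the set of lowercased word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(benchmark_item, training_corpus, n=8, threshold=0.5):
    """Flag a benchmark question whose word n-grams appear verbatim
    in any training document above the given overlap threshold."""
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:  # question shorter than n words: can't judge
        return False
    corpus_grams = set()
    for doc in training_corpus:
        corpus_grams |= ngrams(doc, n)
    overlap = len(item_grams & corpus_grams) / len(item_grams)
    return overlap > threshold
```

A high overlap suggests the model may have seen the question (and possibly its answer) during training, so a correct response proves memorization rather than reasoning. Real contamination audits are more involved, but the intuition is the same.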

Researchers also point to limits in training data. Much of the publicly available internet has already been incorporated into model datasets.

“You definitely need to have some way to inject some of the new knowledge into this process,” Cheng said. “I think this kind of thing has to come from high-quality data so that we can actually break this knowledge boundary.”

One proposed path involves simulated environments where models can interact, learn from experience, and improve through feedback.

“You create a twin world or a mirror of the real world inside some sandbox so the model can play and do a lot of things humans do in real life, so that it can basically break the boundary of the internet,” Cheng said.

Ren said humans may still play an important role in improving AI systems. While models can generate content quickly, humans remain better at evaluating it.

“That kind of gap between human and AI, where they’re good at, where they’re not good at, can be leveraged to really improve the AI down the road,” he said.


Source: https://decrypt.co/361474/forget-agi-top-ai-models-still-struggle-with-math

