Source: https://www.axios.com/2025/07/21/openai-deepmind-math-olympiad-ai

Google DeepMind and OpenAI both achieved gold medal-level performance at this year’s International Mathematical Olympiad (IMO) over the weekend — but only Google officially entered the competition.

The intrigue: Google DeepMind's results were officially certified by the IMO, but OpenAI released its own results first, highlighting the speed and urgency in the race to build the best model for math and reasoning.

The big picture: OpenAI didn't enter the competition but evaluated its model on the 2025 IMO problems after seeing the model's performance on related tasks.

  • Researcher Alexander Wei shared OpenAI’s results on X on July 19.
  • The model abided by the same rules as human contestants, including two 4.5-hour exam sessions with no internet access or other tools.
  • Google announced today that an advanced version of Gemini Deep Think solved five of the six IMO problems perfectly, earning 35 of a possible 42 points (each problem is worth seven).
  • That’s the same score OpenAI announced.

Stunning stat: Only 67 of the 630 contestants (roughly 10%) received gold medals this year.

  • The IMO is an elite math competition for high school students, drawing participants from over 100 countries. It was held in Australia this year.

Between the lines: The results from both companies show how far general-purpose models have progressed in solving advanced math problems and delivering the answers as natural-language proofs.

  • Models that previously beat humans at Go, poker and other games were trained specifically for those games.
  • The new high performers are general-purpose models, the same ones the companies train for language, coding and science.
  • The results show that these general-purpose models can outperform models hand-tuned for specific tasks.

Why it matters: AI models are unusually difficult to benchmark because of the speed at which the tech is moving and the lack of a standard benchmarking system.

Zoom out: Both models are experimental and won't be released to the public "for a while," OpenAI says. The wait will be "many months," according to a post on X from OpenAI CEO Sam Altman.

  • The model that competed in the IMO is “actually very close to the main Gemini model that we have been offering to people,” Google DeepMind senior staff research scientist Thang Luong told Axios.
  • “So we are very confident that we can bring [the model] into the hands of our trusted testers very soon, especially the mathematicians,” Luong says.
  • “We hope that this will empower mathematicians so they can crack harder and harder problems.”

What they’re saying: “When we first started OpenAI, this was a dream but not one that felt very realistic to us,” OpenAI CEO Sam Altman said in a post on X.

  • “It is a significant marker of how far AI has come over the past decade.”
  • “Our leap from silver to gold medal-standard in just one year shows a remarkable pace of progress in AI,” Google wrote on its blog.

Yes, but: Both Google and OpenAI praised the high school students participating in the Olympiad and were careful not to frame the competition as a bots vs. humans cage match.

  • The purpose of the IMO is to promote the “beauty of mathematics” to high school students and to encourage them to go into the field, Junehyuk Jung, associate professor at Brown University and visiting researcher at Google DeepMind, told Axios.
  • Jung was a participant in the IMO 22 years ago.

Out of respect for the students competing, Google waited for the IMO to officially certify the results rather than releasing its own over the weekend, Luong said.

  • In his post on X, Wei pointed out that OpenAI employs many former IMO participants and called them “some of the brightest young minds of the future.”