For some time now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, new research from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.
The brittleness highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”
Mix it up
In “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models,” now available as a preprint paper, the six Apple researchers start with GSM8K’s standardized set of more than 8,000 grade-school-level math word problems, which is frequently used as a benchmark for the complex reasoning capabilities of modern LLMs. They then take the novel approach of modifying a portion of that testing set to dynamically replace certain names and numbers with new values. So a question about Sophie getting 31 building blocks for her nephew in GSM8K could become a question about Bill getting 19 building blocks for his brother in the new GSM-Symbolic evaluation.
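To make the templating concrete, here is a minimal Python sketch of how this kind of name-and-number substitution could work. It is not the paper’s actual code; the template text, name list, and value ranges are all illustrative assumptions.

```python
import random

# Hypothetical GSM8K-style template with symbolic placeholders
# (illustrative; not taken from the actual GSM-Symbolic templates).
TEMPLATE = (
    "{name} picks up {x} building blocks for a {relation}, "
    "then gives away {y} of them. How many blocks remain?"
)

NAMES = ["Sophie", "Bill", "Maria", "James"]
RELATIONS = ["nephew", "brother", "sister", "friend"]

def make_variant(seed: int) -> tuple[str, int]:
    """Generate one surface-level variant of the same underlying problem."""
    rng = random.Random(seed)
    x = rng.randint(10, 40)      # new starting count
    y = rng.randint(1, x - 1)    # new amount given away
    question = TEMPLATE.format(
        name=rng.choice(NAMES), relation=rng.choice(RELATIONS), x=x, y=y
    )
    return question, x - y       # gold answer tracks the new numbers

for seed in range(3):
    question, answer = make_variant(seed)
    print(question, "->", answer)
```

Each seed yields a fresh surface form of the same underlying problem, so a model can never have memorized the exact string it is asked to solve.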
This approach helps avoid any potential “data contamination” that can result from the static GSM8K questions being fed directly into an AI model’s training data. At the same time, these incidental changes don’t alter the actual difficulty of the inherent mathematical reasoning at all, meaning models should theoretically perform just as well when tested on GSM-Symbolic as on GSM8K.
Instead, when the researchers tested more than 20 state-of-the-art LLMs on GSM-Symbolic, they found average accuracy dropped across the board compared to GSM8K, with performance decreasing between 0.3 percent and 9.2 percent depending on the model. The results also showed high variance across 50 separate runs of GSM-Symbolic with different names and values. Gaps of up to 15 percent accuracy between the best and worst runs were common within a single model, and for some reason, changing the numbers tended to hurt accuracy more than changing the names.
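Scoring works the same way on every variant set, so the per-model numbers above reduce to computing accuracy run by run and then looking at the spread. A rough sketch, assuming exact-match grading; the sample figures below are made up, not the paper’s data:

```python
from statistics import mean

def accuracy(model_answers: list[int], gold_answers: list[int]) -> float:
    """Fraction of exact-match answers in a single benchmark run."""
    hits = sum(m == g for m, g in zip(model_answers, gold_answers))
    return hits / len(gold_answers)

# Hypothetical accuracies for one model across repeated GSM-Symbolic
# runs (the paper uses 50 runs; five made-up values shown here).
run_accuracies = [0.88, 0.95, 0.91, 0.80, 0.93]

print(f"mean accuracy:     {mean(run_accuracies):.1%}")
print(f"best-worst spread: {max(run_accuracies) - min(run_accuracies):.1%}")
```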
This variance, both within different GSM-Symbolic runs and compared to GSM8K results, is more than a little surprising since, as the researchers point out, “the overall reasoning steps needed to solve a question remain the same.” The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any “formal” reasoning but are instead “attempt[ing] to perform a kind of in-distribution pattern-matching, aligning given questions and solution steps with similar ones seen in the training data.”
Don’t get distracted
Still, the overall variance shown for the GSM-Symbolic tests was often relatively small in the grand scheme of things. OpenAI’s ChatGPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic. That’s a pretty high success rate on either benchmark, regardless of whether or not the model itself is using “formal” reasoning behind the scenes (though total accuracy for many models dropped precipitously when the researchers added just one or two additional logical steps to the problems).
The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding “seemingly relevant but ultimately inconsequential statements” to the questions. For this “GSM-NoOp” benchmark set (short for “no operation”), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that “five of them [the kiwis] were a bit smaller than average.”
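Mechanically, producing a NoOp variant just means splicing an inconsequential clause into an otherwise unchanged question while leaving the gold answer alone. A minimal sketch, with an illustrative question and distractor rather than ones taken from the benchmark itself:

```python
# Hypothetical base question and gold answer (illustrative).
question = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "How many kiwis does he have?"
)
gold_answer = 102

# A "no-op" clause: it mentions the same objects but calls for no operation.
distractor = "Five of the kiwis were a bit smaller than average."

def add_noop(question: str, distractor: str) -> str:
    """Insert the inconsequential statement just before the final question."""
    body, final_question = question.rsplit(". ", 1)
    return f"{body}. {distractor} {final_question}"

print(add_noop(question, distractor))
# The correct answer is still 102; a model that subtracts 5 has been
# pattern-matched into an operation the text never asked for.
```

A model reasoning formally would ignore the inserted sentence; a pattern-matcher that has seen many “smaller than average” problems involving subtraction is liable to subtract anyway.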
Adding in these red herrings led to what the researchers termed “catastrophic performance drops” in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent depending on the model tested. These massive drops in accuracy highlight the inherent limits of using simple “pattern matching” to “convert statements to operations without truly understanding their meaning,” the researchers write.