MAROKO133 Exclusive AI: Meta researchers open the LLM black box to repair flawed AI reasoning

📌 MAROKO133 Exclusive AI: Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal “reasoning circuits” and detect signs of computational errors as the model solves a problem.

Their findings show that CRV can detect reasoning errors in LLMs with high accuracy by building and observing a computational graph from the model's internal activations. In a key breakthrough, the researchers also demonstrated they can use this deep insight to apply targeted interventions that correct a model’s faulty reasoning on the fly.

The technique could help solve one of the great challenges of AI: Ensuring a model’s reasoning is faithful and correct. This could be a critical step toward building more trustworthy AI applications for the enterprise, where reliability is paramount.

Investigating chain-of-thought reasoning

Chain-of-thought (CoT) reasoning has been a powerful method for boosting the performance of LLMs on complex tasks and has been one of the key ingredients in the success of reasoning models such as the OpenAI o-series and DeepSeek-R1. 
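For readers less familiar with the technique, chain-of-thought prompting simply asks the model to write out intermediate steps before its final answer. A minimal, framework-agnostic sketch (the prompt template and parsing convention are illustrative, and the actual call to an LLM client is omitted) might look like this:

```python
# Minimal sketch of chain-of-thought prompting: the model is asked to
# externalize intermediate steps before the final answer. The call to an
# LLM client is intentionally omitted; any chat-completion API would work.

COT_TEMPLATE = (
    "Solve the problem step by step, then give the final answer "
    "on a line starting with 'Answer:'.\n\nProblem: {problem}"
)

def build_cot_prompt(problem: str) -> str:
    """Wrap a raw problem statement in a chain-of-thought instruction."""
    return COT_TEMPLATE.format(problem=problem)

def parse_cot_output(text: str) -> tuple[list[str], str | None]:
    """Split a CoT completion into reasoning steps and the final answer."""
    steps, answer = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("answer:"):
            answer = line.split(":", 1)[1].strip()
        elif line:
            steps.append(line)
    return steps, answer

if __name__ == "__main__":
    prompt = build_cot_prompt("A train travels 120 km in 2 hours. What is its average speed?")
    print(prompt)  # send this to any LLM client; parse the reply with parse_cot_output
```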

However, despite its success, CoT is not fully reliable. The reasoning process itself is often flawed, and several studies have shown that the CoT tokens an LLM generates are not always a faithful representation of its internal reasoning process.

Current remedies for verifying CoT fall into two main categories. “Black-box” approaches analyze the final generated token or the confidence scores of different token options. “Gray-box” approaches go a step further, looking at the model's internal state by using simple probes on its raw neural activations. 
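As a concrete point of reference, a typical gray-box baseline is a linear probe trained on pooled hidden activations to predict step correctness. The sketch below uses synthetic stand-in data and illustrative dimensions; it is not the paper's implementation:

```python
# A minimal sketch of a "gray-box" verifier: a linear probe trained on a
# model's raw hidden activations to predict whether a reasoning step is
# correct. The activations are assumed to have been extracted and pooled
# per reasoning step; data and dimensions here are synthetic placeholders.
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Logistic-regression probe over a single layer's activations."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # activations: (batch, hidden_dim) pooled over the tokens of one step
        return self.linear(activations).squeeze(-1)  # raw logits

def train_probe(acts: torch.Tensor, labels: torch.Tensor, epochs: int = 20) -> LinearProbe:
    probe = LinearProbe(acts.shape[-1])
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(acts), labels.float())
        loss.backward()
        opt.step()
    return probe

# Usage with synthetic stand-in data: 1 = correct step, 0 = erroneous step.
acts = torch.randn(256, 4096)            # e.g. pooled activations from one layer
labels = torch.randint(0, 2, (256,))
probe = train_probe(acts, labels)
```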

But while these methods can detect that a model’s internal state is correlated with an error, they can't explain why the underlying computation failed. For real-world applications where understanding the root cause of a failure is crucial, this is a significant gap.

A white-box approach to verification

CRV is based on the idea that models perform tasks using specialized subgraphs of neurons, or "circuits," that function like latent algorithms. If the model's reasoning fails, the failure stems from a flaw in the execution of one of these algorithms. This means that by inspecting the underlying computational process, we can diagnose the cause of the flaw, much as developers examine execution traces to debug traditional software.

To make this possible, the researchers first make the target LLM interpretable. They replace the standard dense layers of the transformer blocks with trained "transcoders." A transcoder is a specialized deep learning component that forces the model to represent its intermediate computations not as a dense, unreadable vector of numbers, but as a sparse and meaningful set of features. Transcoders are similar to the sparse autoencoders (SAEs) used in mechanistic interpretability research, with the difference that they also preserve the functionality of the network components they emulate. This modification effectively installs a diagnostic port into the model, allowing researchers to observe its internal workings.
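As a rough illustration of the idea, a transcoder can be thought of as a wide, sparsely activating bottleneck trained to imitate the output of the frozen MLP block it replaces. The sketch below is an assumption-laden simplification; the dimensions, ReLU sparsity mechanism, and L1 penalty are illustrative, not the paper's exact recipe:

```python
# Rough sketch of a transcoder: it imitates a transformer MLP block's
# input-output behaviour while routing the computation through a wide,
# sparsely active feature layer that can be inspected.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    def __init__(self, d_model: int = 4096, d_features: int = 32768):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # dense input -> sparse features
        self.decoder = nn.Linear(d_features, d_model)   # sparse features -> MLP-like output

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        features = torch.relu(self.encoder(x))           # non-negative, mostly-zero activations
        return self.decoder(features), features          # output mimics the original MLP block

def transcoder_loss(out, target_mlp_out, features, l1_coeff: float = 1e-3):
    """Train to reproduce the frozen MLP's output while keeping features sparse."""
    recon = F.mse_loss(out, target_mlp_out)
    sparsity = features.abs().mean()
    return recon + l1_coeff * sparsity
```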

With this interpretable model in place, the CRV process unfolds in a few steps. For each reasoning step the model takes, CRV constructs an "attribution graph" that maps the causal flow of information between the interpretable features of the transcoder and the tokens it is processing. From this graph, it extracts a "structural fingerprint" that contains a set of features describing the graph's properties. Finally, a “diagnostic classifier” model is trained on these fingerprints to predict whether the reasoning step is correct or not.

At inference time, the classifier monitors the activations of the model and provides feedback on whether the model’s reasoning trace is on the right track.
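The paper's exact attribution-graph construction is not reproduced here, but a simplified sketch of the verification pipeline it describes, summarizing each step's graph as a small vector of structural statistics and training a binary classifier on those fingerprints, could look like the following. The specific graph statistics and the gradient-boosting classifier are illustrative assumptions:

```python
# Simplified sketch of the CRV pipeline: build a graph of feature-to-feature
# influence for one reasoning step, summarize it as a "structural
# fingerprint," and feed that to a binary classifier. The real attribution
# graphs come from the transcoder's causal attributions, which are not
# reconstructed here.
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def structural_fingerprint(g: nx.DiGraph) -> np.ndarray:
    """Summarize an attribution graph as a fixed-length feature vector."""
    degrees = [d for _, d in g.degree()]
    return np.array([
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        float(np.mean(degrees)) if degrees else 0.0,
        float(np.max(degrees)) if degrees else 0.0,
    ])

def train_verifier(graphs, labels):
    """Train the diagnostic classifier on fingerprints of labeled reasoning steps."""
    X = np.stack([structural_fingerprint(g) for g in graphs])
    return GradientBoostingClassifier().fit(X, labels)

def step_is_correct(clf, graph, threshold: float = 0.5) -> bool:
    """At inference time, score each new reasoning step's graph as it is produced."""
    prob_correct = clf.predict_proba(structural_fingerprint(graph).reshape(1, -1))[0, 1]
    return prob_correct >= threshold
```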

Finding and fixing errors

The researchers tested their method on a Llama 3.1 8B Instruct model modified with the transcoders, evaluating it on a mix of synthetic (Boolean and Arithmetic) and real-world (GSM8K math problems) datasets. They compared CRV against a comprehensive suite of black-box and gray-box baselines.

The results provide strong empirical support for the central hypothesis: the structural signatures in a reasoning step's computational trace contain a verifiable signal of its correctness. CRV consistently outperformed all baseline methods across every dataset and metric, demonstrating that a deep, structural view of the model's computation is more powerful than surface-level analysis.

Interestingly, the analysis revealed that the signatures of error are highly domain-specific. This means failures in different reasoning tasks (formal logic versus arithmetic calculation) manifest as distinct computational patterns. A classifier trained to detect errors in one domain does not transfer well to another, highlighting that different types of reasoning rely on different internal circuits. In practice, this means that you might need to train a separate classifier for each task (though the transcoder remains unchanged).

The most significant finding, however, is that these error signatures are not just correlational but causal. Because CRV provides a transparent view of the computation, a predicted failure can be traced back to a specific component. In one case study, the model made an order-of-operations error. CRV flagged the step and identified that a "multiplication" feature was firing prematurely. The researchers intervened by manually suppressing that single feature, and the model immediately corrected its path and solved the problem.
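The paper does not include code for this intervention, but a minimal sketch of the general mechanism, zeroing one sparse feature during the forward pass via a PyTorch hook, might look like this. The module path and feature index are hypothetical:

```python
# Minimal sketch of suppressing a single interpretable feature during the
# forward pass. Assumes the transcoder exposes its sparse feature
# activations through a module we can hook.
import torch

def suppress_feature(module: torch.nn.Module, feature_idx: int):
    """Register a forward hook that zeroes one feature in the module's output."""
    def hook(_module, _inputs, output):
        patched = output.clone()
        patched[..., feature_idx] = 0.0   # e.g. a prematurely firing "multiplication" feature
        return patched
    return module.register_forward_hook(hook)

# Usage sketch (hypothetical module path and feature index): silence feature
# 1234 in one layer's transcoder, regenerate the flagged step, remove the hook.
# handle = suppress_feature(model.layers[17].transcoder.encoder, feature_idx=1234)
# ...  # re-run generation for the flagged reasoning step
# handle.remove()
```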

This work represents a step toward a more rigorous science of AI interpretability and control. As the paper concludes, “these findings establish CRV as a proof-of-concept for mechanistic analysis, showing that shifting from opaque activations to interpretable computational structure enables a causal understanding of how and why LLMs fail to reason correctly.” To support further research, the team plans to release its datasets and trained transcoders to the public.

Why it’s important

While CRV is a research proof-of-concept, its results hint at a significant future for AI development. AI models learn internal algorithms, or "circuits," for different tasks. But because these models are opaque, we can't debug them like standard computer programs by tracing bugs to specific steps in the computation. Attribution graphs are the closest thing we have to an execution trace, showing how an output is derived from intermediate steps.

This research suggests that attribution graphs could be the foundation for a new class of AI model debuggers. Such tools would allow developers to understand the root cause of failures, whether it's insufficient training data or interference between competing tasks. This would enable precise mitigations, like targeted fine-tuning or even direct model editing, instead of c…

Content automatically shortened.

🔗 Source: venturebeat.com


📌 MAROKO133 Hot AI: Matrix collapses: Mathematics proves the universe cannot be a simulation

The idea that we might be living inside a vast computer simulation, much like in The Matrix, has fascinated philosophers and scientists for years. But a new study from researchers at the University of British Columbia’s Okanagan campus has delivered a decisive blow to that theory.

According to Dr. Mir Faizal, Adjunct Professor at UBC Okanagan’s Irving K. Barber Faculty of Science, and his international collaborators, the structure of reality itself makes simulation impossible.

Their work shows that no computer, no matter how advanced, could ever reproduce the fundamental workings of the universe.

Their research goes further than rejecting the simulation theory. It suggests that reality is built on a kind of understanding that cannot be reduced to computational rules or algorithms.

The researchers approached the simulation question through mathematics and physics rather than philosophy. They explored whether the laws governing the universe could, in theory, be recreated by a computer system.

“It has been suggested that the universe could be simulated,” says Dr. Faizal. “If such a simulation were possible, the simulated universe could itself give rise to life, which in turn might create its own simulation. This recursive possibility makes it seem highly unlikely that our universe is the original one, rather than a simulation nested within another simulation.”

He explains that this question was once thought to lie outside the reach of science. “However, our recent research has demonstrated that it can, in fact, be scientifically addressed.”

Modern physics already sees reality as more than solid matter. Quantum gravity, the latest framework uniting relativity and quantum mechanics, proposes that even space and time are not fundamental. They emerge from something deeper: pure information.

The limits of logic

To test whether this informational foundation could be simulated, the team relied on mathematical principles, including Gödel’s incompleteness theorem.

This theorem states that in any consistent formal system rich enough to express basic arithmetic, there will always be true statements that cannot be proven within that system.
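For readers who want the precise form being invoked, a standard statement of Gödel's first incompleteness theorem (in Rosser's strengthened form) is sketched below. Here Q denotes Robinson arithmetic, a minimal theory of arithmetic; the technical conditions are the usual ones and nothing specific to this study:

```latex
% Gödel's first incompleteness theorem (Rosser form), stated informally:
% any consistent, recursively axiomatizable theory T that interprets basic
% arithmetic leaves some sentence G_T undecided.
\[
  T \supseteq \mathsf{Q} \text{ consistent and recursively axiomatizable}
  \;\Longrightarrow\;
  \exists\, G_T \;\text{such that}\; T \nvdash G_T \;\text{and}\; T \nvdash \lnot G_T .
\]
```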

Dr. Faizal says the same limitation applies to physics. “We have demonstrated that it is impossible to describe all aspects of physical reality using a computational theory of quantum gravity,” he explains.

“Therefore, no physically complete and consistent theory of everything can be derived from computation alone.”

He adds that a full description of reality requires what they call “non-algorithmic understanding,” which cannot be captured by any computer process.

In other words, computation can only go so far before it hits the edge of what logic can express.

Why the universe can’t be simulated

Could the deeper “information” of the universe itself be simulated? The researchers say that even this is impossible.

“Drawing on mathematical theorems related to incompleteness and indefinability, we demonstrate that a fully consistent and complete description of reality cannot be achieved through computation alone,” says Dr. Faizal.

“It requires non-algorithmic understanding, which by definition is beyond algorithmic computation and therefore cannot be simulated. Hence, this universe cannot be a simulation.”

Co-author Dr. Lawrence M. Krauss says this discovery changes how we view the laws of physics. “The fundamental laws of physics cannot be contained within space and time, because they generate them,” he notes.

“A complete and consistent description of reality requires something deeper—a form of understanding known as non-algorithmic understanding.”

Dr. Faizal concludes that any simulated world must follow programmed rules. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation,” he says.

The study is published in the Journal of Holography Applications in Physics.

🔗 Source: interestingengineering.com


🤖 MAROKO133 Note

This article is an automatic summary of several trusted sources. We pick trending topics so you stay up to date without missing anything.

✅ Next update in 30 minutes: a random topic awaits!

Author: timuna