MAROKO133 Exclusive AI: Meta researchers open the LLM black box to repair flawed reasoning
Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal "reasoning circuits" and detect signs of computational errors as the model solves a problem.
Their findings show that CRV can detect reasoning errors in LLMs with high accuracy by building and observing a computational graph from the model's internal activations. In a key breakthrough, the researchers also demonstrated they can use this deep insight to apply targeted interventions that correct a model's faulty reasoning on the fly.
The technique could help solve one of the great challenges of AI: ensuring a model's reasoning is faithful and correct. This could be a critical step toward building more trustworthy AI applications for the enterprise, where reliability is paramount.
Investigating chain-of-thought reasoning
Chain-of-thought (CoT) reasoning has been a powerful method for boosting the performance of LLMs on complex tasks and has been one of the key ingredients in the success of reasoning models such as the OpenAI o-series and DeepSeek-R1.
However, despite the success of CoT, it is not fully reliable. The reasoning process itself is often flawed, and several studies have shown that the CoT tokens an LLM generates are not always a faithful representation of its internal reasoning process.
Current remedies for verifying CoT fall into two main categories. "Black-box" approaches analyze the final generated token or the confidence scores of different token options. "Gray-box" approaches go a step further, looking at the model's internal state by using simple probes on its raw neural activations.
But while these methods can detect that a model's internal state is correlated with an error, they can't explain why the underlying computation failed. For real-world applications where understanding the root cause of a failure is crucial, this is a significant gap.
A white-box approach to verification
CRV is based on the idea that models perform tasks using specialized subgraphs, or "circuits," of neurons that function like latent algorithms. If the model's reasoning fails, the failure stems from a flaw in the execution of one of these algorithms. This means that by inspecting the underlying computational process, we can diagnose the cause of the flaw, much as developers examine execution traces to debug traditional software.
To make this possible, the researchers first make the target LLM interpretable. They replace the standard dense layers of the transformer blocks with trained "transcoders." A transcoder is a specialized deep learning component that forces the model to represent its intermediate computations not as a dense, unreadable vector of numbers, but as a sparse and meaningful set of features. Transcoders are similar to the sparse autoencoders (SAEs) used in mechanistic interpretability research, with the difference that they also preserve the functionality of the network they emulate. This modification effectively installs a diagnostic port into the model, allowing researchers to observe its internal workings.
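To make the transcoder idea concrete, here is a minimal PyTorch sketch of one such layer: a frozen MLP block is imitated by an encoder-decoder with top-k sparsity, so the block's behavior is preserved while its intermediate features become individually inspectable. The dimensions, the top-k rule, and the imitation objective are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    """Sparse, inspectable stand-in for a transformer MLP block (sketch)."""
    def __init__(self, d_model: int, n_features: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)
        self.k = k  # how many features may stay active per token

    def forward(self, x: torch.Tensor):
        acts = F.relu(self.encoder(x))
        # Top-k sparsity: only a handful of features fire per token, which is
        # what makes the intermediate computation human-inspectable.
        topk = acts.topk(self.k, dim=-1)
        sparse = torch.zeros_like(acts).scatter(-1, topk.indices, topk.values)
        return self.decoder(sparse), sparse  # reconstruction + feature vector

# Train the transcoder to imitate the frozen MLP it replaces, so the
# network's overall behavior is preserved. Small dims for the sketch.
d_model = 512
frozen_mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model)).eval()
tc = Transcoder(d_model, n_features=4096, k=32)

x = torch.randn(8, d_model)
with torch.no_grad():
    target = frozen_mlp(x)          # what the original layer would compute
recon, feats = tc(x)
loss = F.mse_loss(recon, target)    # imitation objective
loss.backward()
```

Presumably the trained transcoder is then swapped in place of the original layer, after which the sparse feature vector it exposes at each step is what CRV reads.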
With this interpretable model in place, the CRV process unfolds in a few steps. For each reasoning step the model takes, CRV constructs an "attribution graph" that maps the causal flow of information between the interpretable features of the transcoder and the tokens it is processing. From this graph, it extracts a "structural fingerprint" containing a set of features that describe the graph's properties. Finally, a "diagnostic classifier" model is trained on these fingerprints to predict whether the reasoning step is correct.
At inference time, the classifier monitors the model's activations and provides feedback on whether its reasoning trace is on the right track.
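As a rough illustration of the fingerprint-and-classify stage, the sketch below summarizes toy attribution graphs with a handful of structural statistics and trains an off-the-shelf scikit-learn classifier on them; the final lines show the inference-time check described above. The graph statistics, the classifier family, and the randomly generated stand-in graphs are all assumptions made for clarity, not the paper's concrete choices.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def structural_fingerprint(g: nx.DiGraph) -> np.ndarray:
    # Summarize an attribution graph with simple structural statistics.
    out_degrees = [d for _, d in g.out_degree()]
    return np.array([
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        float(np.mean(out_degrees)),
        float(np.max(out_degrees)),
    ])

# Stand-ins: one attribution graph per reasoning step, with a label saying
# whether that step was correct (1) or flawed (0).
rng = np.random.default_rng(0)
graphs = [nx.gnp_random_graph(40, 0.08, seed=int(s), directed=True)
          for s in rng.integers(0, 10**6, size=200)]
labels = rng.integers(0, 2, size=200)

X = np.stack([structural_fingerprint(g) for g in graphs])
clf = GradientBoostingClassifier().fit(X, labels)

# Inference-time monitoring: score a new reasoning step's graph and flag it
# when the predicted probability of correctness drops too low.
step_graph = graphs[0]
p_correct = clf.predict_proba([structural_fingerprint(step_graph)])[0, 1]
if p_correct < 0.5:
    print("flag this reasoning step for inspection")
```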
Finding and fixing errors
The researchers tested their method on a Llama 3.1 8B Instruct model modified with the transcoders, evaluating it on a mix of synthetic (Boolean and Arithmetic) and real-world (GSM8K math problems) datasets. They compared CRV against a comprehensive suite of black-box and gray-box baselines.
The results provide strong empirical support for the central hypothesis: the structural signatures in a reasoning step's computational trace contain a verifiable signal of its correctness. CRV consistently outperformed all baseline methods across every dataset and metric, demonstrating that a deep, structural view of the model's computation is more powerful than surface-level analysis.
Interestingly, the analysis revealed that the signatures of error are highly domain-specific. This means failures in different reasoning tasks (formal logic versus arithmetic calculation) manifest as distinct computational patterns. A classifier trained to detect errors in one domain does not transfer well to another, highlighting that different types of reasoning rely on different internal circuits. In practice, this means that you might need to train a separate classifier for each task (though the transcoder remains unchanged).
The most significant finding, however, is that these error signatures are not just correlational but causal. Because CRV provides a transparent view of the computation, a predicted failure can be traced back to a specific component. In one case study, the model made an order-of-operations error. CRV flagged the step and identified that a "multiplication" feature was firing prematurely. The researchers intervened by manually suppressing that single feature, and the model immediately corrected its path and solved the problem correctly.
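Continuing the transcoder sketch from above, an intervention of this kind can be expressed as a PyTorch forward hook that zeroes the implicated feature and recomputes the layer's output. The feature index is hypothetical, and this is only a sketch of the mechanism, not the researchers' actual tooling.

```python
import torch

SUPPRESS_IDX = 1337  # hypothetical index of the prematurely firing feature

def suppress_feature(module, inputs, output):
    recon, feats = output
    feats = feats.clone()
    feats[..., SUPPRESS_IDX] = 0.0       # switch the implicated feature off
    return module.decoder(feats), feats  # recompute this layer's output

# tc is the Transcoder instance from the earlier sketch.
handle = tc.register_forward_hook(suppress_feature)
# ... rerun generation here; the model now computes without that feature ...
handle.remove()
```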
This work represents a step toward a more rigorous science of AI interpretability and control. As the paper concludes, "these findings establish CRV as a proof-of-concept for mechanistic analysis, showing that shifting from opaque activations to interpretable computational structure enables a causal understanding of how and why LLMs fail to reason correctly." To support further research, the team plans to release its datasets and trained transcoders to the public.
Why it's important
While CRV is a research proof-of-concept, its results hint at a significant future for AI development. AI models learn internal algorithms, or "circuits," for different tasks. But because these models are opaque, we can't debug them like standard computer programs by tracing bugs to specific steps in the computation. Attribution graphs are the closest thing we have to an execution trace, showing how an output is derived from intermediate steps.
This research suggests that attribution graphs could be the foundation for a new class of AI model debuggers. Such tools would allow developers to understand the root cause of failures, whether it's insufficient training data or interference between competing tasks. This would enable precise mitigations, like targeted fine-tuning or even direct model editing, instead of c…
Content automatically truncated.
Source: venturebeat.com
MAROKO133 Exclusive AI: China's "armor" coating promises 4,500 hours of stability
For years, engineers and automakers have looked to solid-state batteries as the key to unlocking safer, more powerful, and longer-lasting electric vehicles. Unlike conventional lithium-ion cells, they use solid electrolytes instead of flammable liquids, reducing the risk of overheating or fires.
However, despite their promise, these batteries have struggled to reach large-scale production due to their tendency to crack and degrade when exposed to rapid charging or extreme cold.
Now, researchers from Tsinghua University's Shenzhen International Graduate School and Tianjin University have developed a "flexible armor" layer that reinforces the battery's surface, helping it endure stress and maintain performance even in freezing conditions.
New silver-based coating boosts solid-state battery durability
Professors Feiyu Kang and Yanbing He from Tsinghua University's Shenzhen International Graduate School, together with Professor Quanhong Yang from Tianjin University, have taken an unconventional approach to improving solid-state batteries – making them more flexible instead of harder.
One of the key weaknesses in these batteries lies in the solid electrolyte interphase (SEI), a protective layer that is typically hard but brittle, and prone to cracking under stress. Once fractured, it causes uneven lithium buildup, leading to faster degradation and shorter battery life.
Using the silver-based materials Ag₂S and AgF, the researchers developed a flexible SEI layer that can bend slightly without breaking, maintaining a stable structure while still allowing lithium ions to move smoothly. During laboratory tests, the team's redesigned solid-state batteries showed remarkable endurance and stability, far surpassing conventional designs.
When equipped with the new flexible coating, the batteries operated continuously for more than 4,500 hours under high-stress conditions – a level of durability that demonstrates the coating's ability to resist cracking and degradation over time. Even more impressively, they maintained stable performance for over 7,000 hours at −30°C, temperatures that would normally cripple or destroy most solid-state cells.
Mimicking nature to strengthen solid-state batteries
The researchers achieved their breakthrough by engineering a gradual, layered structure that blends both soft and stiff materials – a design inspired by how nature balances flexibility and strength in biological systems such as shells or tendons.
This hybrid architecture helps distribute mechanical stress more evenly across the battery, preventing cracks and structural failure while keeping lithium ions uniformly spread during charging and discharging. The result is a more stable and durable solid-state battery that can endure demanding conditions without losing efficiency.
Still, the scientists emphasize that this innovation is not an immediate ticket to mass production. Manufacturing solid-state batteries at scale remains a complex challenge, requiring further advances in materials processing, cost reduction, and integration with existing battery systems.
Nevertheless, the study marks a promising step toward making solid-state batteries more reliable and practical for real-world use, from EVs to renewable energy storage, particularly in extreme environments.
Source: interestingengineering.com
MAROKO133 Note
This article is an automatic summary drawn from several trusted sources. We pick trending topics so you always stay up to date.
Next update in 30 minutes – a random topic awaits!