📌 MAROKO133 Exclusive ai: Meta researchers open the LLM black box to repair flawed reasoning
Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal “reasoning circuits” and detect signs of computational errors as the model solves a problem.
Their findings show that CRV can detect reasoning errors in LLMs with high accuracy by building and observing a computational graph from the model's internal activations. In a key breakthrough, the researchers also demonstrated they can use this deep insight to apply targeted interventions that correct a model’s faulty reasoning on the fly.
The technique could help solve one of the great challenges of AI: ensuring a model’s reasoning is faithful and correct. This could be a critical step toward building more trustworthy AI applications for the enterprise, where reliability is paramount.
Investigating chain-of-thought reasoning
Chain-of-thought (CoT) reasoning has been a powerful method for boosting the performance of LLMs on complex tasks and has been one of the key ingredients in the success of reasoning models such as the OpenAI o-series and DeepSeek-R1.
However, despite the success of CoT, it is not fully reliable. The reasoning process itself is often flawed, and several studies have shown that the CoT tokens an LLM generates are not always a faithful representation of its internal reasoning process.
Current remedies for verifying CoT fall into two main categories. “Black-box” approaches analyze the final generated token or the confidence scores of different token options. “Gray-box” approaches go a step further, looking at the model's internal state by using simple probes on its raw neural activations.
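For intuition, a gray-box probe can be as simple as a linear classifier fit on hidden-state vectors. The sketch below is a minimal, hypothetical illustration on synthetic stand-in data; the layer choice, dimensions, and classifier are assumptions, not details from the paper.

```python
# Minimal sketch of a "gray-box" probe: a logistic-regression classifier
# over raw hidden-state activations from one transformer layer, predicting
# whether a reasoning step is correct. All data here is a synthetic stand-in;
# d_model and the probe form are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 4096                                   # hidden size (illustrative)
acts = rng.normal(size=(1000, d_model))          # one activation vector per step
labels = rng.integers(0, 2, size=1000)           # 1 = correct step, 0 = flawed

probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print("probe accuracy:", probe.score(acts, labels))
```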
But while these methods can detect that a model’s internal state is correlated with an error, they can't explain why the underlying computation failed. For real-world applications where understanding the root cause of a failure is crucial, this is a significant gap.
A white-box approach to verification
CRV is based on the idea that models perform tasks using specialized subgraphs, or "circuits," of neurons that function like latent algorithms. If the model’s reasoning fails, the failure stems from a flaw in the execution of one of these algorithms. This means that by inspecting the underlying computational process, we can diagnose the cause of the flaw, similar to how developers examine execution traces to debug traditional software.
To make this possible, the researchers first make the target LLM interpretable. They replace the standard dense layers of the transformer blocks with trained "transcoders." A transcoder is a specialized deep learning component that forces the model to represent its intermediate computations not as a dense, unreadable vector of numbers, but as a sparse and meaningful set of features. Transcoders are similar to the sparse autoencoders (SAEs) used in mechanistic interpretability research, except that they also preserve the functionality of the network they emulate. This modification effectively installs a diagnostic port into the model, allowing researchers to observe its internal workings.
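As a rough mental model, a transcoder can be viewed as a sparse bottleneck trained to imitate the MLP block it replaces. The sketch below is an assumption about the general shape, with illustrative dimensions and a simple L1 sparsity penalty, not Meta’s exact recipe.

```python
# Hypothetical transcoder sketch (PyTorch): a sparse bottleneck trained to
# reproduce a transformer MLP block's input-to-output mapping while exposing
# intermediate computation as a small set of non-negative feature activations.
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # dense state -> sparse features
        self.decoder = nn.Linear(n_features, d_model)  # features -> MLP-like output

    def forward(self, x):
        features = torch.relu(self.encoder(x))  # sparse, interpretable activations
        return self.decoder(features), features

def transcoder_loss(out, features, mlp_out, l1_coeff=1e-3):
    # Match the original MLP's output (functionality is preserved) while
    # pushing feature activations toward sparsity (interpretability).
    return ((out - mlp_out) ** 2).mean() + l1_coeff * features.abs().mean()
```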
With this interpretable model in place, the CRV process unfolds in a few steps. For each reasoning step the model takes, CRV constructs an "attribution graph" that maps the causal flow of information between the interpretable features of the transcoder and the tokens it is processing. From this graph, it extracts a "structural fingerprint" that contains a set of features describing the graph's properties. Finally, a “diagnostic classifier” model is trained on these fingerprints to predict whether the reasoning step is correct or not.
At inference time, the classifier monitors the activations of the model and provides feedback on whether the model’s reasoning trace is on the right track.
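Assuming each step’s attribution graph is available as a weighted digraph, the fingerprinting and diagnosis stages might look like the sketch below; the graph library, feature set, and classifier are stand-ins for the paper’s actual choices.

```python
# Illustrative sketch of CRV's fingerprinting and diagnosis stages.
# Fingerprint features and classifier are assumptions, not the paper's exact set.
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fingerprint(g: nx.DiGraph) -> np.ndarray:
    """Summarize an attribution graph's structure as a fixed-length vector."""
    weights = [w for _, _, w in g.edges(data="weight", default=0.0)]
    return np.array([
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        float(np.mean(weights)) if weights else 0.0,
        float(np.max(weights)) if weights else 0.0,
    ])

def train_diagnostic_classifier(graphs, labels):
    # graphs: one attribution graph per reasoning step; labels: 1 = correct.
    X = np.stack([fingerprint(g) for g in graphs])
    return GradientBoostingClassifier().fit(X, labels)

# At inference time, each new step's graph is fingerprinted and scored:
# clf.predict(fingerprint(step_graph).reshape(1, -1))
```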
Finding and fixing errors
The researchers tested their method on a Llama 3.1 8B Instruct model modified with the transcoders, evaluating it on a mix of synthetic (Boolean and Arithmetic) and real-world (GSM8K math problems) datasets. They compared CRV against a comprehensive suite of black-box and gray-box baselines.
The results provide strong empirical support for the central hypothesis: the structural signatures in a reasoning step's computational trace contain a verifiable signal of its correctness. CRV consistently outperformed all baseline methods across every dataset and metric, demonstrating that a deep, structural view of the model's computation is more powerful than surface-level analysis.
Interestingly, the analysis revealed that the signatures of error are highly domain-specific. This means failures in different reasoning tasks (formal logic versus arithmetic calculation) manifest as distinct computational patterns. A classifier trained to detect errors in one domain does not transfer well to another, highlighting that different types of reasoning rely on different internal circuits. In practice, this means that you might need to train a separate classifier for each task (though the transcoder remains unchanged).
The most significant finding, however, is that these error signatures are not just correlational but causal. Because CRV provides a transparent view of the computation, a predicted failure can be traced back to a specific component. In one case study, the model made an order-of-operations error. CRV flagged the step and identified that a "multiplication" feature was firing prematurely. The researchers intervened by manually suppressing that single feature, and the model immediately corrected its path and solved the problem correctly.
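In the spirit of that case study, and reusing the hypothetical Transcoder sketched above, a targeted intervention could be implemented as a forward hook that ablates the single misfiring feature. The feature index and module layout here are assumptions for illustration, not the paper’s implementation.

```python
# Hypothetical targeted intervention: zero out one transcoder feature
# (e.g., a prematurely firing "multiplication" feature) during the forward
# pass, then let generation continue. The index below is illustrative.
import torch

FAULTY_FEATURE = 1234  # stand-in index for the misfiring feature

def suppress_feature(module, inputs, output):
    out, features = output                  # Transcoder returns (output, features)
    features = features.clone()
    features[..., FAULTY_FEATURE] = 0.0     # ablate just this one feature
    return module.decoder(features), features

# handle = transcoder.register_forward_hook(suppress_feature)
# ...rerun the reasoning step; downstream computation no longer receives
# the faulty feature's contribution...
# handle.remove()
```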
This work represents a step toward a more rigorous science of AI interpretability and control. As the paper concludes, “these findings establish CRV as a proof-of-concept for mechanistic analysis, showing that shifting from opaque activations to interpretable computational structure enables a causal understanding of how and why LLMs fail to reason correctly.” To support further research, the team plans to release its datasets and trained transcoders to the public.
Why it’s important
While CRV is a research proof-of-concept, its results hint at a significant future for AI development. AI models learn internal algorithms, or "circuits," for different tasks. But because these models are opaque, we can't debug them like standard computer programs by tracing bugs to specific steps in the computation. Attribution graphs are the closest thing we have to an execution trace, showing how an output is derived from intermediate steps.
This research suggests that attribution graphs could be the foundation for a new class of AI model debuggers. Such tools would allow developers to understand the root cause of failures, whether it's insufficient training data or interference between competing tasks. This would enable precise mitigations, like targeted fine-tuning or even direct model editing, instead of c…
Content automatically shortened.
🔗 Source: venturebeat.com
📌 MAROKO133 Update ai: Harvard’s new textile performs better in wind speeds by reducing drag
Researchers have developed a new type of textile that can adjust its aerodynamic properties while worn on the body.
Developed by researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), the innovation has the potential to change not only high-speed sports, but also industries like aerospace, maritime, and civil engineering.
The research team revealed that the new type of textile uses dimpling to adjust its aerodynamic properties.
Innovation can change industries like aerospace, maritime, and civil engineering
“By performing 3,000 simulations, we were able to explore thousands of dimpling patterns,” said SEAS mechanical engineering graduate student David Farrel.
“We were able to tune how big the dimple is, as well as its form. When we put these patterns back in the wind tunnel, we find that certain patterns and dimples are optimized for specific wind-speed regions.”
Unique textile forms dimples on its surface when stretched
The unique textile forms dimples on its surface when stretched, even when tightly fitted around a person’s body. The fabrics utilize the same aerodynamic principles as a golf ball, whose dimpled surface causes a ball to fly further by using turbulence to reduce drag. Because the fabric is soft and elastic, it can move and stretch to change the size and shape of the dimples on demand.
The team also underlined that adjusting dimple size can make the fabric perform better at specific wind speeds, reducing drag by up to 20% in the researchers’ wind-tunnel experiments.
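For a sense of scale, the standard drag equation F_d = 0.5 * rho * v^2 * Cd * A shows what a 20% reduction means in newtons. Every number below (speed, frontal area, drag coefficient) is an illustrative assumption, not a figure from the study.

```python
# Back-of-the-envelope: what a 20% drag reduction means in newtons, using
# the standard drag equation F_d = 0.5 * rho * v^2 * Cd * A. All inputs are
# illustrative assumptions, not measurements from the study.
RHO = 1.225        # air density at sea level, kg/m^3
v = 20.0           # wind speed, m/s (about 72 km/h)
area = 0.5         # frontal area of a crouched athlete, m^2 (assumed)
cd_smooth = 1.0    # assumed drag coefficient for the undimpled fabric

f_smooth = 0.5 * RHO * v**2 * cd_smooth * area   # 122.5 N
f_dimpled = f_smooth * (1 - 0.20)                # 98.0 N at a tuned wind speed

print(f"smooth fabric:  {f_smooth:.1f} N")
print(f"dimpled fabric: {f_dimpled:.1f} N")
```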
Researchers used a laser cutter and heat press to create a dual-toned fabric made of a stiffer black woven material, similar to a backpack strap, and a gray, softer knit that’s flexible and comfortable.
Using a two-step manufacturing process, they cut patterns into the woven fabric and sealed it together with the knit layer to form a textile composite. Experimenting with multiple flat samples patterned in lattices like squares and hexagons, they systematically explored how different tessellations affect the mechanical response of each textile material, according to a press release.
Textile composite’s on-demand dimpling
The textile composite’s on-demand dimpling is the result of its lattice pattern. Stretch a traditional textile onto the body and it will smooth out and tighten. This textile breaks that rule: its unique lattice pattern allows it to expand around the arm rather than clamp down, according to the researchers.
Published in Advanced Materials, the paper introduces a textile metamaterial capable of variable aerodynamic profiles through a stretch-induced dimpling mechanism, even when tightly conformed to a body or object.
“Wind-tunnel experiments are used to characterize the variable aerodynamic performance of the dimpling mechanism, while Finite Element (FE) simulations efficiently characterize the design space to identify optimal textile metamaterial architectures,” said researchers in the study.
“By controlling dimple size, the aerodynamic performance of the textile can be tailored for specific wind-speed ranges, resulting in an ability to modulate drag force at target wind-speeds by up to 20%.”
🔗 Source: interestingengineering.com
🤖 MAROKO133 Notes
This article is an automatic digest of several trusted sources. We pick trending topics so you always stay up to date.
✅ Next update in 30 minutes, with a random theme!