MAROKO133 Hot ai: US begins 6-year RHIC overhaul to build world’s only polarized electron-ion collider

Engineers in the US have recently started dismantling the Relativistic Heavy Ion Collider (RHIC) in order to repurpose its infrastructure and build the world’s only fully polarized Electron-Ion Collider (EIC).

In February 2026, the RHIC, which was the world’s second-highest-energy heavy-ion collider, completed its 25th and final run at the US Department of Energy’s (DOE) Brookhaven National Laboratory (BNL).

The milestone signaled the beginning of a six-year transformation to build what will become the world’s only fully polarized electron-ion collider (EIC), a type of particle accelerator designed to probe the deepest structure of matter. The EIC is being built in partnership with the Thomas Jefferson National Accelerator Facility.

Abhay Deshpande, PhD, BNL associate laboratory director for nuclear and particle physics and science director for the EIC, noted the EIC marks a major step forward in exploring matter and the strongest force in nature. “Even as one chapter ends, we are excited about what is to come,” Deshpande added.

From RHIC to EIC

The EIC will be built by reusing major components of RHIC, including its 2.4-mile-circumference tunnel, one of its superconducting-magnet ion storage rings, and other accelerator and detector equipment.

This approach is expected to deliver significant cost savings while shortening the construction timeline. The team will need a substantial amount of space to remove, repurpose, and store roughly 6,500 components.

“The start of RHIC equipment removal marks a major step toward building the EIC and continuing RHIC’s tradition of groundbreaking discovery,” said EIC project director Jim Yeck.

Some major components of the Relativistic Heavy Ion Collider (RHIC, left) will be reused for the Electron-Ion Collider (EIC, right).
Credit: Valerie A. Lentz / Brookhaven National Laboratory

To make space for a new electron storage ring, one of RHIC’s two rings of superconducting magnets will be dismantled entirely. The engineering team will also remove the cryogenic systems, radiofrequency equipment, and other accelerator components that are no longer needed.

The process will unfold sector by sector around the ring, allowing installation of new EIC systems to begin even as teardown continues. At the same time, RHIC’s massive detectors, some weighing tens of tons, will also be dismantled.

“We have an excellent group of highly skilled engineers, technicians, riggers, and safety personnel with the technical skills and experience needed to carry out these challenging tasks,” Raymond Fliller, PhD, head of the Environmental Safety, Security, Health and Quality Assurance Division within BNL, said.

Rebuilding a collider

The engineers will also take apart the long-running STAR detector, a tracker that captured RHIC’s first collisions in 2000, as well as the newer sPHENIX detector. A new detector, ePIC, will be built in their place to analyze collisions at the EIC.

“Also to be removed are 30 solid steel plates leftover from PHENIX, one of RHIC’s original detectors, which was located where sPHENIX now sits,” BNL said. “The 15 plates on each side of the interaction region are each made of five sections that are four or eight inches thick, weighing in at 40 or 80 tons, respectively.”

Meanwhile, parts of sPHENIX, a state-of-the-art particle detector, like its outer hadronic calorimeter, will be reused in the new detector. This reflects a broader effort to reuse high-value components wherever possible.

Group photo from STAR collaboration meeting held March 2-6.
Credit: Kevin Coughlin / Brookhaven National Laboratory

BNL said that smaller systems, like power supplies, housings, and electronics, are being cataloged for reuse, recycling, or redistribution to partner institutions.

“We are excited to be preparing for the future installation of the EIC,” David Chan, head of the Infrastructure and Technical Support Division within BNL’s Collider-Accelerator Department, said in a press release.

“The start of equipment removal is one of the first steps in building the EIC within the existing facility and marks an important milestone in this process,” he added.

🔗 Source: interestingengineering.com


📌 MAROKO133 Update ai: Researchers from PSU and Duke introduce “Automated Failure Attribution” for Multi-Agent Systems

Share My Research is Synced’s column that welcomes scholars to share their own research breakthroughs with over 2M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. 

Meet the author
Institutions: Penn State University, Duke University, Google DeepMind, University of Washington, Meta, Nanyang Technological University, and Oregon State University. The co-first authors are Shaokun Zhang of Penn State University and Ming Yin of Duke University.

In recent years, LLM Multi-Agent systems have garnered widespread attention for their collaborative approach to solving complex problems. However, it’s a common scenario for these systems to fail at a task despite a flurry of activity. This leaves developers with a critical question: which agent, at what point, was responsible for the failure? Sifting through vast interaction logs to pinpoint the root cause feels like finding a needle in a haystack—a time-consuming and labor-intensive effort.
 
This is a familiar frustration for developers. In increasingly complex Multi-Agent systems, failures are not only common but also incredibly difficult to diagnose due to the autonomous nature of agent collaboration and long information chains. Without a way to quickly identify the source of a failure, system iteration and optimization grind to a halt.
 
To address this challenge, researchers from Penn State University and Duke University, in collaboration with institutions including Google DeepMind, have introduced the novel research problem of “Automated Failure Attribution.” They have constructed the first benchmark dataset for this task, Who&When, and have developed and evaluated several automated attribution methods. This work not only highlights the complexity of the task but also paves a new path toward enhancing the reliability of LLM Multi-Agent systems.
The paper has been accepted as a Spotlight presentation at ICML 2025, a top-tier machine learning conference, and the code and dataset are fully open-source.

Paper: https://arxiv.org/pdf/2505.00212
Code: https://github.com/mingyin1/Agents_Failure_Attribution
Dataset: https://huggingface.co/datasets/Kevin355/Who_and_When
 
 
Research Background and Challenges
LLM-driven Multi-Agent systems have demonstrated immense potential across many domains. However, these systems are fragile; errors by a single agent, misunderstandings between agents, or mistakes in information transmission can lead to the failure of the entire task.

Currently, when a system fails, developers are often left with manual and inefficient methods for debugging:
  • Manual log archaeology: developers must manually review lengthy interaction logs to find the source of the problem.
  • Reliance on expertise: the debugging process is highly dependent on the developer’s deep understanding of the system and the task at hand.
 
This “needle in a haystack” approach to debugging is not only inefficient but also severely hinders rapid system iteration and the improvement of system reliability. There is an urgent need for an automated, systematic method to pinpoint the cause of failures, effectively bridging the gap between “evaluation results” and “system improvement.”

Core Contributions
This paper makes several groundbreaking contributions to address the challenges above:
1. Defining a New Problem: The paper is the first to formalize “automated failure attribution” as a specific research task, defined as identifying the failure-responsible agent and the decisive error step that led to the task’s failure.

2. Constructing the First Benchmark Dataset, Who&When: This dataset includes a wide range of failure logs collected from 127 LLM Multi-Agent systems, which were either algorithmically generated or hand-crafted by experts to ensure realism and diversity. Each failure log is accompanied by fine-grained human annotations for:
Who: The agent responsible for the failure.
When: The specific interaction step where the decisive error occurred.
Why: A natural language explanation of the cause of the failure.

3. Exploring Initial “Automated Attribution” Methods: Using the Who&When dataset, the paper designs and assesses three distinct methods for automated failure attribution:
All-at-Once: This method provides the LLM with the user query and the complete failure log, asking it to identify the responsible agent and the decisive error step in a single pass. While cost-effective, it may struggle to pinpoint precise errors in long contexts.
Step-by-Step: This approach mimics manual debugging by having the LLM review the interaction log sequentially, making a judgment at each step until the error is found. It is more precise at locating the error step but incurs higher costs and risks accumulating errors.
Binary Search: A compromise between the first two methods, this strategy repeatedly divides the log in half, using the LLM to determine which segment contains the error. It then recursively searches the identified segment, offering a balance of cost and performance.
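As a rough illustration of the Binary Search strategy (a sketch, not the authors' implementation: the `judge` callable stands in for an LLM query that decides which half of the log contains the decisive error):

```python
def binary_search_attribution(log, judge):
    """Recursively narrow a failure log down to the decisive error step.

    `log` is a list of (agent, message) interaction steps; `judge(first, second)`
    is a stand-in for an LLM call that returns True if the decisive error
    lies in the first segment. Sketch only, not the paper's implementation.
    """
    lo, hi = 0, len(log)              # current search window over step indices
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if judge(log[lo:mid], log[mid:hi]):
            hi = mid                  # error is in the first half
        else:
            lo = mid                  # error is in the second half
    agent, _ = log[lo]
    return agent, lo                  # "who" (responsible agent) and "when" (step)

# Toy usage with a deterministic stand-in judge: the "error" is at step 5.
toy_log = [(f"agent_{i % 3}", f"msg {i}") for i in range(8)]
fake_judge = lambda first, second: any(m == "msg 5" for _, m in first)
print(binary_search_attribution(toy_log, fake_judge))  # → ('agent_2', 5)
```

Each iteration halves the window, so only about log2(N) LLM judgments are needed for an N-step log, which is the cost/precision middle ground the paper describes.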
 
Experimental Results and Key Findings

Experiments were conducted in two settings: one where the LLM knows the ground truth answer to the problem the Multi-Agent system is trying to solve (With Ground Truth) and one where it does not (Without Ground Truth). The primary model used was GPT-4o, though other models were also tested. The systematic evaluation of these methods on the Who&When dataset yielded several important insights:

  • A Long Way to Go: Current methods are far from perfect. Even the best-performing single method achieved an accuracy of only about 53.5% in identifying the responsible agent and a mere 14.2% in pinpointing the exact error step. Some methods performed even worse than random guessing, underscoring the difficulty of the task.
  • No “All-in-One” Solution: Different methods excel at different aspects of the problem. The All-at-Once method is better at identifying “Who,” while the Step-by-Step method is more effective at determining “When.” The Binary Search method provides a middle-ground performance.
  • Hybrid Approaches Show Promise but at a High Cost: The researchers found that combining different methods, such as using the All-at-Once approach to identify a potential agent and then applying the Step-by-Step method to find the error, can improve overall performance. However, this comes with a significant increase in computational cost.
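The hybrid idea above can be sketched as a two-stage pipeline (the `pick_agent` and `pick_step` callables are hypothetical stand-ins for the two LLM passes, not the paper's code):

```python
def hybrid_attribution(log, pick_agent, pick_step):
    """Two-stage hybrid sketch: All-at-Once to name a suspect agent,
    then Step-by-Step over only that agent's steps to find the error.

    `pick_agent(log)` mimics a whole-log LLM pass returning an agent name;
    `pick_step(steps)` mimics a sequential LLM pass over (index, message)
    pairs returning the decisive step index. Both are stand-ins.
    """
    suspect = pick_agent(log)                            # stage 1: "who"
    suspect_steps = [(i, msg) for i, (agent, msg) in enumerate(log)
                     if agent == suspect]
    step = pick_step(suspect_steps)                      # stage 2: "when"
    return suspect, step

# Toy usage with deterministic stand-ins.
toy_log = [("planner", "plan"), ("coder", "good code"),
           ("coder", "buggy code"), ("reviewer", "approve")]
result = hybrid_attribution(
    toy_log,
    pick_agent=lambda log: "coder",                      # pretend pass 1 blames the coder
    pick_step=lambda steps: next(i for i, m in steps if "buggy" in m),
)
print(result)  # → ('coder', 2)
```

Narrowing the second pass to one agent's steps is what improves precision, but each stage is a separate round of LLM calls, which is where the extra computational cost comes from.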


Author: timuna