📌 MAROKO133 Update ai: Europe’s biggest vanadium battery goes live in Spain with 8 MWh of storage
Spain has wrapped up operational testing of Europe’s biggest vanadium flow battery for applied research, marking a turning point for sustainable, long-duration energy storage.
The testing, carried out at the technology center in Cubillos del Sil, northwestern Spain, was completed by the government-backed research institution Fundación Ciudad de la Energía (Ciuden).
Established in 2006, the foundation validated a 1 MW/8 MWh vanadium redox flow battery (VRFB) system, capable of delivering one megawatt of power and storing eight megawatt-hours of energy.
According to Ciuden, the installation is designed not just to store energy, but to serve as an experimental platform for advanced storage technologies. “This 1MW, 8MWh energy storage system includes a 100kW/800kWh experimental module that will allow for various R&D tests,” the foundation reported.
Inside the VRFB system
Vanadium redox flow batteries are long-duration, rechargeable energy storage systems that use vanadium ions in liquid electrolytes to store power in external tanks rather than solid electrodes.
Unlike traditional lithium-ion (Li-ion) systems, the new battery can deliver power for more than 15 hours (for an 8 MWh system, that implies discharging at roughly half its rated 1 MW output).
This reportedly makes it the longest-duration battery currently available in Spain for experimental research. According to Ciuden, the project is part of a broader effort to build a hybrid energy testbed that brings together several technologies.
Apart from the vanadium system, the site hosts a one-megawatt, 5.8 megawatt-hour (1 MW/5.8 MWh) sodium-sulfur battery and a 600-kilowatt, 1.3 megawatt-hour (600 kW/1.3 MWh) lithium-ion system.
Paired with a 2.2 megawatt (MW) solar installation, the setup offers a combined storage capacity of roughly 15 megawatt-hours (MWh), enough to store the plant’s full daily output during peak generation periods.
“The contract, worth EUR 6.4 million [USD 7.4 million], was awarded to the Spanish company CYMI and incorporates South Korean technology from H2 Inc,” Ciuden said in a press release.
Experimental storage hub
To store energy, the vanadium system uses liquid electrolytes with vanadium ions in different oxidation states. These liquids are stored in external tanks and allow the system’s energy capacity and power output to be scaled independently.
The design delivers greater durability than conventional systems. “This gives it a long lifespan of over 20 years and allows for power-energy decoupling, making it easy to increase storage capacity,” the foundation revealed.
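That decoupling can be illustrated with a back-of-the-envelope calculation. The sketch below uses assumed round figures (an illustrative electrolyte energy density, not Ciuden’s actual tank or stack specifications) to show how tank volume sets energy capacity while the cell stack sets power, and therefore discharge duration.

```python
# Illustrative sketch of power-energy decoupling in a flow battery.
# The energy density below is an assumed round figure, not Ciuden's specification.

ENERGY_DENSITY_WH_PER_L = 25.0  # assumed usable energy density of vanadium electrolyte

def energy_capacity_kwh(tank_volume_l: float) -> float:
    """Energy capacity scales with electrolyte volume (tank size)."""
    return tank_volume_l * ENERGY_DENSITY_WH_PER_L / 1000.0

def discharge_hours(energy_kwh: float, stack_power_kw: float) -> float:
    """Discharge duration = stored energy / power drawn; power is set by the cell stack."""
    return energy_kwh / stack_power_kw

# Doubling the tanks doubles energy and duration without touching the 1 MW stack.
for tank_l in (160_000, 320_000):
    e = energy_capacity_kwh(tank_l)
    print(f"{tank_l:>7,} L of electrolyte -> {e:,.0f} kWh, "
          f"{discharge_hours(e, 1_000):.1f} h at 1 MW")
```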
The facility is also exploring how storage technologies interact with emerging hydrogen systems. For this purpose, the team has integrated two electrolyzers, including a 300-kilowatt proton exchange membrane unit and a 250-kilowatt solid oxide electrolyzer (SOEC), into the site.
The combination is expected to deliver a unique testing environment where solar power, batteries, and hydrogen production can be studied together.
The project is funded under the NextGenerationEU recovery program, a temporary pandemic-recovery instrument designed to build a greener, more digital, and more resilient European Union. It is part of Spain’s broader plan to modernize its energy infrastructure.
“[The initiative] aims to obtain technical data for the industrial-scale development of the various technologies that allow the extrapolation of their optimal operating conditions, thus promoting the decarbonization of industry,” Ciuden concluded.
🔗 Source: interestingengineering.com
📌 MAROKO133 Exclusive ai: Which Agent Causes Task Failures and When? Researchers from Penn State and Duke introduce automated failure attribution
Share My Research is Synced’s column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Contact us: [email protected]
Meet the authors
Institutions: Penn State University, Duke University, Google DeepMind, University of Washington, Meta, Nanyang Technological University, and Oregon State University. The co-first authors are Shaokun Zhang of Penn State University and Ming Yin of Duke University.
In recent years, LLM Multi-Agent systems have garnered widespread attention for their collaborative approach to solving complex problems. However, it’s a common scenario for these systems to fail at a task despite a flurry of activity. This leaves developers with a critical question: which agent, at what point, was responsible for the failure? Sifting through vast interaction logs to pinpoint the root cause feels like finding a needle in a haystack—a time-consuming and labor-intensive effort.
This is a familiar frustration for developers. In increasingly complex Multi-Agent systems, failures are not only common but also incredibly difficult to diagnose due to the autonomous nature of agent collaboration and long information chains. Without a way to quickly identify the source of a failure, system iteration and optimization grind to a halt.
To address this challenge, researchers from Penn State University and Duke University, in collaboration with institutions including Google DeepMind, have introduced the novel research problem of “Automated Failure Attribution.” They have constructed the first benchmark dataset for this task, Who&When, and have developed and evaluated several automated attribution methods. This work not only highlights the complexity of the task but also paves a new path toward enhancing the reliability of LLM Multi-Agent systems.
The paper has been accepted as a Spotlight presentation at the top-tier machine learning conference, ICML 2025, and the code and dataset are now fully open-source.
Paper: https://arxiv.org/pdf/2505.00212
Code: https://github.com/mingyin1/Agents_Failure_Attribution
Dataset: https://huggingface.co/datasets/Kevin355/Who_and_When
Research Background and Challenges
LLM-driven Multi-Agent systems have demonstrated immense potential across many domains. However, these systems are fragile; errors by a single agent, misunderstandings between agents, or mistakes in information transmission can lead to the failure of the entire task.
Currently, when a system fails, developers are often left with manual and inefficient methods for debugging:
Manual Log Archaeology: Developers must manually review lengthy interaction logs to find the source of the problem.
Reliance on Expertise: The debugging process is highly dependent on the developer’s deep understanding of the system and the task at hand.
This “needle in a haystack” approach to debugging is not only inefficient but also severely hinders rapid system iteration and the improvement of system reliability. There is an urgent need for an automated, systematic method to pinpoint the cause of failures, effectively bridging the gap between “evaluation results” and “system improvement.”
Core Contributions
This paper makes several groundbreaking contributions to address the challenges above:
1. Defining a New Problem: The paper is the first to formalize “automated failure attribution” as a concrete research task: given a failure log, identify the failure-responsible agent and the decisive error step that led to the task’s failure.
2. Constructing the First Benchmark Dataset, Who&When: This dataset includes a wide range of failure logs collected from 127 LLM Multi-Agent systems, which were either algorithmically generated or hand-crafted by experts to ensure realism and diversity. Each failure log is accompanied by fine-grained human annotations for:
Who: The agent responsible for the failure.
When: The specific interaction step where the decisive error occurred.
Why: A natural language explanation of the cause of the failure.
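For intuition, each Who&When failure log can be pictured as a small record pairing the interaction trace with its annotation. The field names below are illustrative assumptions, not necessarily the released dataset’s exact schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FailureAttribution:
    """Illustrative shape of one Who&When annotation (field names are assumed)."""
    who: str   # agent responsible for the failure, e.g. "browser"
    when: int  # index of the interaction step containing the decisive error
    why: str   # free-text explanation of the failure cause

@dataclass
class FailureLog:
    query: str                      # user task the multi-agent system attempted
    steps: List[Tuple[str, str]]    # ordered (agent_name, message) pairs
    annotation: FailureAttribution  # human ground-truth label

# Hypothetical example record, for illustration only.
example = FailureLog(
    query="Book the cheapest flight from Madrid to Seoul next Friday",
    steps=[
        ("planner", "Delegate the search to the browsing agent"),
        ("browser", "Returned fares for the wrong departure date"),
        ("planner", "Accepted the result without checking the date"),
    ],
    annotation=FailureAttribution(
        who="browser", when=1, why="Searched the wrong departure date"
    ),
)
print(example.annotation.who, example.annotation.when)
```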
3. Exploring Initial “Automated Attribution” Methods: Using the Who&When dataset, the paper designs and assesses three distinct methods for automated failure attribution:
– All-at-Once: This method provides the LLM with the user query and the complete failure log, asking it to identify the responsible agent and the decisive error step in a single pass. While cost-effective, it may struggle to pinpoint precise errors in long contexts.
– Step-by-Step: This approach mimics manual debugging by having the LLM review the interaction log sequentially, making a judgment at each step until the error is found. It is more precise at locating the error step but incurs higher costs and risks accumulating errors.
– Binary Search: A compromise between the first two methods, this strategy repeatedly divides the log in half, using the LLM to determine which segment contains the error. It then recursively searches the identified segment, offering a balance of cost and performance.
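A minimal sketch of the three strategies, assuming a generic llm(prompt) -> str chat-completion wrapper (for example, around GPT-4o) and a failure log given as an ordered list of (agent, message) pairs; the prompts and answer parsing are simplified and are not the paper’s exact implementation.

```python
from typing import Callable, List, Tuple

Log = List[Tuple[str, str]]  # ordered (agent_name, message) pairs
LLM = Callable[[str], str]   # any chat-completion wrapper returning text

def fmt(log: Log, start: int = 0) -> str:
    """Render steps as numbered lines: '[i] agent: message'."""
    return "\n".join(f"[{start + i}] {agent}: {msg}"
                     for i, (agent, msg) in enumerate(log))

def all_at_once(query: str, log: Log, llm: LLM) -> str:
    """Single pass: ask for the responsible agent and decisive step together."""
    return llm(f"Task: {query}\n\nFull log:\n{fmt(log)}\n\n"
               "Which agent caused the failure, and at which step index? "
               "Answer as 'agent, step'.")

def step_by_step(query: str, log: Log, llm: LLM) -> Tuple[str, int]:
    """Sequential review: stop at the first step the judge flags as the decisive error."""
    for i, (agent, msg) in enumerate(log):
        verdict = llm(f"Task: {query}\nHistory so far:\n{fmt(log[: i + 1])}\n\n"
                      f"Is step [{i}] by {agent} the decisive error? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            return agent, i
    return log[-1][0], len(log) - 1  # fallback: blame the last step

def binary_search(query: str, log: Log, llm: LLM, offset: int = 0) -> Tuple[str, int]:
    """Recursively halve the log, asking which half contains the decisive error."""
    if len(log) == 1:
        return log[0][0], offset
    mid = len(log) // 2
    answer = llm(f"Task: {query}\n\nFirst half:\n{fmt(log[:mid], offset)}\n\n"
                 f"Second half:\n{fmt(log[mid:], offset + mid)}\n\n"
                 "Does the decisive error occur in the first or second half? "
                 "Answer 'first' or 'second'.")
    if answer.strip().lower().startswith("first"):
        return binary_search(query, log[:mid], llm, offset)
    return binary_search(query, log[mid:], llm, offset + mid)
```

The cost trade-off is visible in the call counts: all_at_once makes one LLM call, step_by_step up to one call per log step, and binary_search roughly log2(N) calls for a log of N steps.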
Experimental Results and Key Findings
Experiments were conducted in two settings: one where the LLM knows the ground truth answer to the problem the Multi-Agent system is trying to solve (With Ground Truth) and one where it does not (Without Ground Truth). The primary model used was GPT-4o, though other models were also tested. The systematic evaluation of these methods on the Who&When dataset yielded several important insights:
– A Long Way to Go: Current methods are far from perfect. Even the best-performing single method achieved an accuracy of only about 53.5% in identifying the responsible agent and a mere 14.2% in pinpointing the exact error step. Some methods performed even worse than random guessing, underscoring the difficulty of the task.
– No “All-in-One” Solution: Different methods excel at different aspects of the problem. The All-at-Once method is better at identifying “Who,” while the Step-by-Step method is more effective at determining “When.” The Binary Search method provides a middle-ground performance.
– Hybrid Approaches Show Promise but at a High Cost: The researchers found that combining different methods, such as using the All-at-Once approach to identify a potential agent and then applying the Step-by-Step method to find the error, can improve overall performance. However, this comes with a significant increase in computational cost.
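One possible shape of such a hybrid pipeline, again assuming a generic llm(prompt) -> str wrapper and an (agent, message) log; this is an illustrative composition of the two strategies, not the paper’s exact procedure.

```python
from typing import Callable, List, Tuple

Log = List[Tuple[str, str]]  # ordered (agent_name, message) pairs
LLM = Callable[[str], str]   # any chat-completion wrapper returning text

def hybrid_attribution(query: str, log: Log, llm: LLM) -> Tuple[str, int]:
    """Stage 1: one all-at-once call nominates a suspect agent.
    Stage 2: a step-by-step review of only that agent's turns locates the step.
    Each per-step check is an extra LLM call, which is where the cost grows."""
    transcript = "\n".join(f"[{i}] {a}: {m}" for i, (a, m) in enumerate(log))
    suspect = llm(
        f"Task: {query}\n\nFull log:\n{transcript}\n\n"
        "Which single agent is most responsible for the failure? "
        "Reply with the agent name only."
    ).strip()
    for i, (agent, _msg) in enumerate(log):
        if agent != suspect:
            continue
        verdict = llm(
            f"Task: {query}\n\nFull log:\n{transcript}\n\n"
            f"Is step [{i}] by {agent} the decisive error? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return suspect, i
    return suspect, -1  # -1: no single step confirmed as decisive
```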
– State-of-the-Art Models Struggle: Surprisingly, even the most advanced reasoning models, like OpenAI o1 and DeepSeek R1, find this task challenging.
Content automatically truncated.
🔗 Source: syncedreview.com
🤖 MAROKO133 Note
This article is an automated digest of several trusted sources. We pick trending topics so you always stay up to date.
✅ Next update in 30 minutes: a random theme awaits!
