📌 MAROKO133 Update AI: The Entire State of Maine Is Poised to Ban New Data Centers
Data centers built to power resource-intensive AI models have turned into a major point of contention. The sprawling facilities have proven immensely unpopular, particularly in rural areas, where they’ve been accused of causing electricity prices to spike.
Companies are spending billions of dollars on the computing infrastructure, despite ongoing concerns over environmental effects and power grid stability.
Now, new legislation in Maine is expected to freeze construction of new data centers in the state that would consume at least 20 megawatts, enough to power about 15,000 homes, until at least November 2027, pending environmental and grid assessments, as the Wall Street Journal reports.
The bill passed the Maine House of Representatives last month and is expected to pass the Senate as well, which would make Maine the first state in the country to ban new data centers. The unprecedented move highlights growing bipartisan backlash over the AI hype and the construction boom it has fueled.
The political momentum shouldn’t be too surprising. As Heatmap News points out, Maine has already seen electricity prices surge almost 60 percent between 2021 and 2026. Major data centers could place additional strain on the grid and compound the problem.
The bill will likely be closely watched by ten other states weighing similar policies, per the WSJ. New York, South Carolina, and Oklahoma have already introduced similar measures.
“I think Maine is the canary in the coal mine,” Associated Builders and Contractors economist Anirban Basu told the WSJ. “Maine will be the first of many states to have such moratoria.”
The news comes two weeks after Senator Bernie Sanders (I-VT) and Representative Alexandria Ocasio-Cortez (D-NY) introduced legislation aimed at curbing new AI data center construction, targeting a technology they say is “affecting everything from our economy and wellbeing to our democracy, warfare and our kids’ education.”
Over 200 environmental groups have urged Congress to do the same.
The growing backlash against the AI-fueled data center frenzy is likely to prove deeply polarizing during this year’s midterm elections. As the Financial Times reports, major tech companies are pouring hundreds of millions of dollars into lobbying groups to sway public opinion on AI regulation.
Meanwhile, legislators on both sides of the aisle are waking up to a new “political reality.”
“There is a very strong voter fear of data centers and AI,” Tony Buxton, a climate and energy attorney at the law firm Preti Flaherty, told the WSJ.
More on data centers: Groups Set Up to Shill AI and Data Centers Are Pouring Huge Sums of Money Into the Midterm Elections
🔗 Source: futurism.com
📌 MAROKO133 Update AI: Researchers from PSU and Duke Introduce “Multi-Agent System Failure Attribution”
Share My Research is Synced’s column that welcomes scholars to share their own research breakthroughs with over 2M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas.
Meet the author
Institutions: Penn State University, Duke University, Google DeepMind, University of Washington, Meta, Nanyang Technological University, and Oregon State University. The co-first authors are Shaokun Zhang of Penn State University and Ming Yin of Duke University.
In recent years, LLM Multi-Agent systems have garnered widespread attention for their collaborative approach to solving complex problems. However, it’s a common scenario for these systems to fail at a task despite a flurry of activity. This leaves developers with a critical question: which agent, at what point, was responsible for the failure? Sifting through vast interaction logs to pinpoint the root cause feels like finding a needle in a haystack—a time-consuming and labor-intensive effort.
This is a familiar frustration for developers. In increasingly complex Multi-Agent systems, failures are not only common but also incredibly difficult to diagnose due to the autonomous nature of agent collaboration and long information chains. Without a way to quickly identify the source of a failure, system iteration and optimization grind to a halt.
To address this challenge, researchers from Penn State University and Duke University, in collaboration with institutions including Google DeepMind, have introduced the novel research problem of “Automated Failure Attribution.” They have constructed the first benchmark dataset for this task, Who&When, and have developed and evaluated several automated attribution methods. This work not only highlights the complexity of the task but also paves a new path toward enhancing the reliability of LLM Multi-Agent systems.
The paper has been accepted as a Spotlight presentation at the top-tier machine learning conference, ICML 2025, and the code and dataset are now fully open-source.
Paper: https://arxiv.org/pdf/2505.00212
Code: https://github.com/mingyin1/Agents_Failure_Attribution
Dataset: https://huggingface.co/datasets/Kevin355/Who_and_When
Research Background and Challenges
LLM-driven Multi-Agent systems have demonstrated immense potential across many domains. However, these systems are fragile; errors by a single agent, misunderstandings between agents, or mistakes in information transmission can lead to the failure of the entire task.
Currently, when a system fails, developers are often left with manual and inefficient methods for debugging:
- Manual Log Archaeology: Developers must manually review lengthy interaction logs to find the source of the problem.
- Reliance on Expertise: The debugging process is highly dependent on the developer’s deep understanding of the system and the task at hand.
This “needle in a haystack” approach to debugging is not only inefficient but also severely hinders rapid system iteration and the improvement of system reliability. There is an urgent need for an automated, systematic method to pinpoint the cause of failures, effectively bridging the gap between “evaluation results” and “system improvement.”
Core Contributions
This paper makes several groundbreaking contributions to address the challenges above:
1. Defining a New Problem: The paper is the first to formalize “automated failure attribution” as a distinct research task: identifying the failure-responsible agent and the decisive error step that led to the task’s failure.
2. Constructing the First Benchmark Dataset, Who&When: This dataset includes a wide range of failure logs collected from 127 LLM Multi-Agent systems, either algorithmically generated or hand-crafted by experts to ensure realism and diversity. Each failure log is accompanied by fine-grained human annotations for:
Who: The agent responsible for the failure.
When: The specific interaction step where the decisive error occurred.
Why: A natural language explanation of the cause of the failure.
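To make the annotation scheme concrete, here is a minimal sketch of loading Who&When with the Hugging Face `datasets` library. The split and field names (`who`, `when`, `why`) are assumptions inferred from the annotation scheme above, not a confirmed schema; check the dataset card linked earlier for the actual layout.

```python
# Minimal sketch: load the Who&When benchmark and inspect one failure log.
# The split and field names below are assumptions based on the Who/When/Why
# annotation scheme described in the paper, not a verified schema.
from datasets import load_dataset

dataset = load_dataset("Kevin355/Who_and_When")  # dataset URL given above

example = dataset["train"][0]   # "train" is an assumed split name
print(example["who"])           # agent responsible for the failure
print(example["when"])          # index of the decisive error step
print(example["why"])           # natural-language explanation of the cause
```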
3. Exploring Initial “Automated Attribution” Methods: Using the Who&When dataset, the paper designs and assesses three distinct methods for automated failure attribution (a minimal code sketch follows this list):
All-at-Once: This method provides the LLM with the user query and the complete failure log, asking it to identify the responsible agent and the decisive error step in a single pass. While cost-effective, it may struggle to pinpoint precise errors in long contexts.
Step-by-Step: This approach mimics manual debugging by having the LLM review the interaction log sequentially, making a judgment at each step until the error is found. It is more precise at locating the error step but incurs higher costs and risks accumulating errors.
Binary Search: A compromise between the first two methods, this strategy repeatedly divides the log in half, using the LLM to determine which segment contains the error. It then recursively searches the identified segment, offering a balance of cost and performance.
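As a concrete illustration of these three schemes, here is a minimal Python sketch. It assumes a generic `llm(prompt) -> str` helper and a log represented as a list of `(agent, message)` steps; it illustrates the strategies as described above and is not the paper’s reference implementation.

```python
# Illustrative sketches of the three attribution strategies. `llm` is an
# assumed helper that sends a prompt to a language model and returns text;
# a log is a list of (agent, message) steps. Not the paper's reference code.
from typing import Callable, List, Tuple

Step = Tuple[str, str]          # (agent name, message content)
LLM = Callable[[str], str]      # prompt in, model response out

def render(log: List[Step], lo: int, hi: int) -> str:
    """Format steps lo..hi (inclusive) as numbered transcript lines."""
    return "\n".join(f"[{i}] {log[i][0]}: {log[i][1]}" for i in range(lo, hi + 1))

def all_at_once(llm: LLM, query: str, log: List[Step]) -> str:
    """Single pass: give the LLM the whole log, ask for agent and step."""
    return llm(
        f"Task: {query}\nFull log:\n{render(log, 0, len(log) - 1)}\n"
        "Which agent made the decisive error, and at which step index?"
    )

def step_by_step(llm: LLM, query: str, log: List[Step]) -> int:
    """Sequential review: return the first step the LLM judges to be the error."""
    for i in range(len(log)):
        verdict = llm(
            f"Task: {query}\nLog so far:\n{render(log, 0, i)}\n"
            f"Is step {i} the decisive error? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return i
    return len(log) - 1  # nothing flagged: fall back to the final step

def binary_search(llm: LLM, query: str, log: List[Step]) -> int:
    """Halve the log repeatedly, keeping the half said to contain the error."""
    lo, hi = 0, len(log) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        verdict = llm(
            f"Task: {query}\nLog segment (steps {lo}-{mid}):\n{render(log, lo, mid)}\n"
            "Does the decisive error occur within this segment? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The cost profiles follow directly from the structure: all_at_once issues one call over the full log, step_by_step issues up to one call per step with growing context, and binary_search issues roughly log2(n) calls over shrinking segments.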
Experimental Results and Key Findings
Experiments were conducted in two settings: one where the LLM knows the ground truth answer to the problem the Multi-Agent system is trying to solve (With Ground Truth) and one where it does not (Without Ground Truth). The primary model used was GPT-4o, though other models were also tested. The systematic evaluation of these methods on the Who&When dataset yielded several important insights:
- A Long Way to Go: Current methods are far from perfect. Even the best-performing single method achieved an accuracy of only about 53.5% in identifying the responsible agent and a mere 14.2% in pinpointing the exact error step. Some methods performed even worse than random guessing, underscoring the difficulty of the task.
- No “All-in-One” Solution: Different methods excel at different aspects of the problem. The All-at-Once method is better at identifying “Who,” while the Step-by-Step method is more effective at determining “When.” The Binary Search method provides a middle-ground performance.
- Hybrid Approaches Show Promise but at a High Cost: The researchers found that combining different methods, such as using the All-at-Once approach to identify a potential agent and then applying the Step-by-Step method to find the error, can improve overall performance. However, this comes with a significant increase in computational cost (a minimal sketch of this hybrid follows the list).
- State-of-the-Art Models Struggle: Surprisingly, even the most advanced reasoning m…
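A hedged sketch of that hybrid idea, reusing the assumed `llm(prompt)` helper and `(agent, message)` log format from the earlier snippet: one cheap whole-log call nominates the responsible agent, then a step-by-step pass inspects only that agent’s turns to localize the error.

```python
# Hybrid sketch: a coarse all-at-once pass to nominate a suspect agent,
# then a step-by-step pass restricted to that agent's turns. `llm` and the
# (agent, message) log format are the same assumptions as in the sketch above.
def hybrid(llm, query, log):
    transcript = "\n".join(f"[{i}] {a}: {m}" for i, (a, m) in enumerate(log))
    suspect = llm(
        f"Task: {query}\nFull log:\n{transcript}\n"
        "Name only the agent most responsible for the failure."
    ).strip()
    for i, (agent, msg) in enumerate(log):
        if agent != suspect:
            continue  # the fine pass only inspects the suspect's own steps
        verdict = llm(
            f"Task: {query}\nStep [{i}] by {agent}: {msg}\n"
            "Is this the decisive error? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return suspect, i
    return suspect, None  # agent nominated, but no step was flagged
```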
Content shortened automatically.
🔗 Source: syncedreview.com
🤖 MAROKO133 Note
This article is an automated roundup from several trusted sources. We pick trending topics so you always stay up to date.
✅ Next update in 30 minutes: a random theme awaits!