MAROKO133 Hot ai: New passive ultrasonic sensors track motion without electricity or batteries

📌 MAROKO133 Hot ai: New passive ultrasonic sensors track motion without electricity or batteries

Researchers at the Georgia Institute of Technology have built a motion and contact sensor that requires no external power source yet can detect movement by generating its own ultrasound signals.

The passive device works by converting mechanical energy from physical contact or nearby motion directly into ultrasonic pulses. When an object touches or moves near the sensor, the resulting mechanical deformation drives a piezoelectric element, which generates a high-frequency acoustic signal that can be captured and interpreted by a receiver. No battery, no wired power supply — the motion itself is the energy source.

How the unpowered mechanism works

Piezoelectric materials produce an electric charge when compressed or bent. The Georgia Tech design routes that charge into a transducer that emits ultrasound rather than simply logging a voltage spike. The ultrasonic output can travel through solid media, making the sensor readable even when embedded inside walls, packaging, or machinery without line-of-sight access.
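For a rough sense of scale, here is a back-of-envelope sketch (not a figure from the paper): the charge a piezoelectric element produces is approximately the charge coefficient times the applied force, and typical PZT ceramics have a d33 of a few hundred pC/N.

```python
# Back-of-envelope: charge from a piezoelectric element under load,
# Q = d33 * F. The d33 value is a typical figure for PZT ceramics,
# not a number reported by the Georgia Tech team.
d33 = 500e-12   # C/N, piezoelectric charge coefficient (assumed, illustrative)
force = 10.0    # N, roughly a light tap (assumed)

charge_nC = d33 * force * 1e9   # convert coulombs to nanocoulombs
print(f"Generated charge: {charge_nC:.1f} nC")  # -> 5.0 nC
```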

The frequency and amplitude of the emitted pulse carry information about the contact event — its force, duration, and, with array configurations, its location. This moves the device beyond a simple binary trigger into something closer to a characterization tool, according to the research team’s description of the work.
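To make the decoding side concrete, here is a minimal sketch of how a receiver could characterize an incoming pulse: bandpass around the expected ultrasonic band, extract the envelope, and read off peak amplitude and duration. The 40 kHz band, sample rate, and threshold are illustrative assumptions, not parameters from the Georgia Tech work.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def characterize_pulse(samples, fs=1_000_000, band=(30e3, 50e3), thresh_frac=0.1):
    """Estimate peak amplitude and duration of an ultrasonic pulse.
    fs, band, and thresh_frac are illustrative assumptions."""
    # Isolate the band the sensor is expected to emit in.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, samples)

    # Envelope via the analytic signal.
    envelope = np.abs(hilbert(filtered))

    peak = envelope.max()
    above = envelope > thresh_frac * peak   # samples inside the pulse
    duration_s = above.sum() / fs           # crude duration estimate
    return peak, duration_s

# Synthetic test: a 2 ms, 40 kHz burst buried in noise.
fs = 1_000_000
t = np.arange(int(0.01 * fs)) / fs
burst = np.where((t > 0.004) & (t < 0.006), np.sin(2 * np.pi * 40e3 * t), 0.0)
samples = burst + 0.02 * np.random.randn(t.size)

amp, dur = characterize_pulse(samples, fs=fs)
print(f"peak amplitude ~{amp:.2f}, duration ~{dur * 1e3:.2f} ms")
```

With an array of receivers, the same envelope timestamps could feed a time-difference-of-arrival estimate for location, which is presumably how the array configurations mentioned above localize events.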

Because the sensor harvests energy entirely from the event it is measuring, quiescent power draw is effectively zero. That property distinguishes it from conventional ultrasonic proximity sensors, which must continuously power a transmitter and wait for a reflected echo.

Where passively generated ultrasound has an edge

Battery-free operation has obvious appeal in environments where maintenance access is difficult or where large numbers of sensors need to be deployed economically. Structural health monitoring — embedding sensors inside concrete, composite panels, or pipeline walls to detect cracks or impacts — is a natural fit. Industrial machinery could carry arrays of these sensors to log contact events across thousands of surface points without the wiring overhead that active sensor networks demand.

Medical device packaging and pharmaceutical logistics are other plausible use cases, where tamper detection or impact logging must work across a product’s entire shelf life without a power source that could degrade or require regulatory clearance.

The approach also sidesteps radio-frequency interference concerns that affect wireless sensor nodes, since the communication medium is sound rather than an electromagnetic signal. In electromagnetically noisy industrial settings — motor rooms, MRI suites, high-voltage switchyards — that distinction matters practically, not just theoretically.

Constraints and open questions

Passive operation comes with trade-offs. The sensor cannot be polled on demand; it only reports when a mechanical event occurs. Applications that need continuous presence detection rather than event logging would still require powered alternatives. Read range for the ultrasonic signal is also constrained by material attenuation, meaning receiver placement demands careful engineering in thick or acoustically lossy substrates.
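A simple one-way loss model shows how quickly attenuation eats into read range. The numbers below are assumed placeholders, since attenuation varies enormously with material and frequency.

```python
def max_readable_depth(a0_db, sensitivity_db, alpha_db_per_cm):
    """Depth at which a pulse launched a0_db above the receiver's noise
    floor decays to the sensitivity limit, under a one-way linear-in-dB
    attenuation model: A(d) = a0_db - alpha * d."""
    return (a0_db - sensitivity_db) / alpha_db_per_cm  # cm

# Illustrative numbers only: a pulse launched 60 dB above the receiver
# floor, losing 1.5 dB/cm (an assumed figure, not a measurement).
depth_cm = max_readable_depth(a0_db=60.0, sensitivity_db=0.0, alpha_db_per_cm=1.5)
print(f"readable through ~{depth_cm:.0f} cm of material")  # -> ~40 cm
```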

Crosstalk between adjacent sensors in a dense array — where one contact event generates sound that reaches a neighboring transducer — is a known challenge in ultrasonic sensing architectures. The Georgia Tech team has not yet published a full characterization of array-level interference at production densities, according to available information on the research.

The work adds to a growing body of passive sensing research that spans acoustic and optical domains. Engineers developing novel transducer-based input devices have been exploring bone-conduction and acoustic coupling mechanisms across consumer and industrial applications. Separately, researchers studying biological systems have drawn comparisons to the way animals exploit mechanical energy for sensory tasks without metabolic overhead. The parallel is approximate but the engineering principle — extracting information from ambient mechanical energy without a dedicated power budget — runs through both.

The full details of the Georgia Tech sensor appear in the team’s published research. Further work is expected to address miniaturization and the signal-processing pipeline needed to decode ultrasonic pulses in noisy real-world environments.

The sensor’s next test is integration into systems where the engineering constraints of deployment — not laboratory conditions — determine whether passive ultrasound generation can move from a proof of concept into a production-viable monitoring technology. Roman builders once solved structural problems with material ingenuity rather than added complexity; the same minimalist logic drives this approach to sensing.

🔗 Source: interestingengineering.com


📌 MAROKO133 Breaking ai: Researchers from PSU and Duke introduce “Multi-Agent Systems Automated Failure Attribution”

Share My Research is Synced’s column that welcomes scholars to share their own research breakthroughs with over 2M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. 

Meet the authors
Institutions: Penn State University, Duke University, Google DeepMind, University of Washington, Meta, Nanyang Technological University, and Oregon State University. The co-first authors are Shaokun Zhang of Penn State University and Ming Yin of Duke University.

In recent years, LLM Multi-Agent systems have garnered widespread attention for their collaborative approach to solving complex problems. However, it’s a common scenario for these systems to fail at a task despite a flurry of activity. This leaves developers with a critical question: which agent, at what point, was responsible for the failure? Sifting through vast interaction logs to pinpoint the root cause feels like finding a needle in a haystack—a time-consuming and labor-intensive effort.
 
This is a familiar frustration for developers. In increasingly complex Multi-Agent systems, failures are not only common but also incredibly difficult to diagnose due to the autonomous nature of agent collaboration and long information chains. Without a way to quickly identify the source of a failure, system iteration and optimization grind to a halt.
 
To address this challenge, researchers from Penn State University and Duke University, in collaboration with institutions including Google DeepMind, have introduced the novel research problem of “Automated Failure Attribution.” They have constructed the first benchmark dataset for this task, Who&When, and have developed and evaluated several automated attribution methods. This work not only highlights the complexity of the task but also paves a new path toward enhancing the reliability of LLM Multi-Agent systems.
The paper has been accepted as a Spotlight presentation at the top-tier machine learning conference, ICML 2025, and the code and dataset are now fully open-source.

Paper: https://arxiv.org/pdf/2505.00212
Code: https://github.com/mingyin1/Agents_Failure_Attribution
Dataset: https://huggingface.co/datasets/Kevin355/Who_and_When
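Since the dataset is on the Hugging Face Hub, a minimal loading sketch with the `datasets` library looks like the following. The split and field names are assumptions mirroring the Who/When/Why annotation scheme described below, so check the dataset card for the actual schema.

```python
from datasets import load_dataset  # pip install datasets

# Repo id comes from the link above; split and field names are guesses,
# so inspect the loaded schema before relying on them.
ds = load_dataset("Kevin355/Who_and_When")
split = next(iter(ds.values()))   # first available split
example = split[0]
print(example.keys())             # see the actual fields

# Hypothetical accessors mirroring the Who / When / Why annotations:
# example["who"]   -> agent responsible for the failure
# example["when"]  -> index of the decisive error step
# example["why"]   -> natural-language explanation of the cause
```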
 
 
Research Background and Challenges
LLM-driven Multi-Agent systems have demonstrated immense potential across many domains. However, these systems are fragile; errors by a single agent, misunderstandings between agents, or mistakes in information transmission can lead to the failure of the entire task.

Currently, when a system fails, developers are often left with manual and inefficient methods for debugging:
• Manual Log Archaeology: Developers must manually review lengthy interaction logs to find the source of the problem.
• Reliance on Expertise: The debugging process is highly dependent on the developer’s deep understanding of the system and the task at hand.
 
This “needle in a haystack” approach to debugging is not only inefficient but also severely hinders rapid system iteration and the improvement of system reliability. There is an urgent need for an automated, systematic method to pinpoint the cause of failures, effectively bridging the gap between “evaluation results” and “system improvement.”

Core Contributions
This paper makes several groundbreaking contributions to address the challenges above:
1. Defining a New Problem: The paper is the first to formalize “automated failure attribution” as a specific research task, defined as identifying the failure-responsible agent and the decisive error step that led to the task’s failure.

2. Constructing the First Benchmark Dataset, Who&When: This dataset includes a wide range of failure logs collected from 127 LLM Multi-Agent systems, either algorithmically generated or hand-crafted by experts to ensure realism and diversity. Each failure log is accompanied by fine-grained human annotations for:
Who: The agent responsible for the failure.
When: The specific interaction step where the decisive error occurred.
Why: A natural language explanation of the cause of the failure.

3. Exploring Initial “Automated Attribution” Methods: Using the Who&When dataset, the paper designs and assesses three distinct methods for automated failure attribution (all three are sketched in code after this list):
All-at-Once: This method provides the LLM with the user query and the complete failure log, asking it to identify the responsible agent and the decisive error step in a single pass. While cost-effective, it may struggle to pinpoint precise errors in long contexts.
Step-by-Step: This approach mimics manual debugging by having the LLM review the interaction log sequentially, making a judgment at each step until the error is found. It is more precise at locating the error step but incurs higher costs and risks accumulating errors.
Binary Search: A compromise between the first two methods, this strategy repeatedly divides the log in half, using the LLM to determine which segment contains the error. It then recursively searches the identified segment, offering a balance of cost and performance.
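The sketch below illustrates the three strategies, with a generic `ask_llm` callable standing in for whatever chat-completion API is available. The prompts and answer parsing are simplified placeholders, not the paper’s actual prompts.

```python
from typing import Callable, List

# `ask_llm` takes a prompt string and returns the model's text reply.
AskLLM = Callable[[str], str]

def all_at_once(ask_llm: AskLLM, query: str, log: List[str]) -> str:
    """Single pass over the whole log: cheap, but precision can
    suffer in long contexts."""
    prompt = (f"Task: {query}\nFull log:\n" + "\n".join(log) +
              "\nName the responsible agent and the decisive error step.")
    return ask_llm(prompt)

def step_by_step(ask_llm: AskLLM, query: str, log: List[str]) -> int:
    """Sequential review: judge each step in order until one is
    flagged as the decisive error."""
    for i, step in enumerate(log):
        verdict = ask_llm(f"Task: {query}\nStep {i}: {step}\n"
                          "Is this the decisive error? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            return i
    return len(log) - 1  # fall back to the last step

def binary_search(ask_llm: AskLLM, query: str, log: List[str]) -> int:
    """Repeatedly halve the log, recursing into the half the LLM blames."""
    lo, hi = 0, len(log)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        segment = "\n".join(log[lo:mid])
        verdict = ask_llm(f"Task: {query}\nSegment:\n{segment}\n"
                          "Does the decisive error occur here? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            hi = mid
        else:
            lo = mid
    return lo
```

The hybrid variant discussed in the findings below simply composes these pieces: use `all_at_once` to nominate a suspect agent, then run `step_by_step` over only that agent’s steps.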
 
Experimental Results and Key Findings

Experiments were conducted in two settings: one where the LLM knows the ground truth answer to the problem the Multi-Agent system is trying to solve (With Ground Truth) and one where it does not (Without Ground Truth). The primary model used was GPT-4o, though other models were also tested. The systematic evaluation of these methods on the Who&When dataset yielded several important insights:

  • A Long Way to Go: Current methods are far from perfect. Even the best-performing single method achieved an accuracy of only about 53.5% in identifying the responsible agent and a mere 14.2% in pinpointing the exact error step. Some methods performed even worse than random guessing, underscoring the difficulty of the task.
  • No “All-in-One” Solution: Different methods excel at different aspects of the problem. The All-at-Once method is better at identifying “Who,” while the Step-by-Step method is more effective at determining “When.” The Binary Search method provides a middle-ground performance.
  • Hybrid Approaches Show Promise but at a High Cost: The researchers found that combining different methods, such as using the All-at-Once approach to identify a potential agent and then applying the Step-by-Step method to find the error, can improve overall performance. However, this comes with a significant increase in computational cost.


Author: timuna