MAROKO133 Breaking ai: How Hud's runtime sensor cut triage time from 3 hours to 10 minutes

Engineering teams are generating more code with AI agents than ever before. But they're hitting a wall when that code reaches production.

The problem isn't necessarily the AI-generated code itself. It's that traditional monitoring tools generally struggle to provide the granular, function-level data AI agents need to understand how code actually behaves in complex production environments. Without that context, agents can't detect issues or generate fixes that account for production reality.

It's a challenge that startup Hud is looking to help solve with the launch of its runtime code sensor on Wednesday. The company's eponymous sensor runs alongside production code, automatically tracking how every function behaves, giving developers a heads-up on what's actually occurring in deployment.

"Every software team building at scale faces the same fundamental challenge: building high-quality products that work well in the real world," Roee Adler, CEO and founder of Hud, told VentureBeat in an exclusive interview. "In the new era of AI-accelerated development, not knowing how code behaves in production becomes an even bigger part of that challenge."

What software developers are struggling with 

The pain points that developers are facing are fairly consistent across engineering organizations. Moshik Eilon, group tech lead at Monday.com, oversees 130 engineers and describes a familiar frustration with traditional monitoring tools.

"When you get an alert, you usually end up checking an endpoint that has an error rate or high latency, and you want to drill down to see the downstream dependencies," Eilon told VentureBeat. "A lot of times it's the actual application, and then it's a black box. You just get 80% downstream latency on the application."

The next step typically involves manual detective work across multiple tools. Check the logs. Correlate timestamps. Try to reconstruct what the application was doing. For novel issues deep in a large codebase, teams often lack the exact data they need.

Daniel Marashlian, CTO and co-founder at Drata, saw his engineers spending hours on what he referred to as an "investigation tax." "They were mapping a generic alert to a specific code owner, then digging through logs to reconstruct the state of the application," Marashlian told VentureBeat. "We wanted to eliminate that so our team could focus entirely on the fix rather than the discovery."

Drata's architecture compounds the challenge. The company integrates with numerous external services to deliver automated compliance, which creates sophisticated investigations when issues arise. Engineers trace behavior across a very large codebase spanning risk, compliance, integrations, and reporting modules.

Marashlian identified three specific problems that drove Drata toward investing in runtime sensors. The first issue was the cost of context switching. 

"Our data was scattered, so our engineers had to act as human bridges between disconnected tools," he said.

The second issue, he noted, is alert fatigue. "When you have a complex distributed system, general alert channels become a constant stream of background noise, what our team describes as a 'ding, ding, ding' effect that eventually gets ignored," Marashlian said.

The third key driver was a need to integrate with the company's AI strategy.

"An AI agent can write code, but it cannot fix a production bug if it can't see the runtime variables or the root cause," Marashlian said.

Why traditional APMs can't solve the problem easily

Enterprises have long relied on a class of tools and services known as Application Performance Monitoring (APM). 

Given the current pace of agentic AI development and modern development workflows, neither Monday.com nor Drata could get the visibility they needed from existing APM tools.

"If I would want to get this information from Datadog or from Coralogix, I would just have to ingest tons of logs or tons of spans, and I would pay a lot of money," Eilon said. 

Eilon noted that Monday.com used very low sampling rates because of cost constraints. That meant they often missed the exact data needed to debug issues.

Traditional application performance monitoring tools also require developers to predict in advance what to instrument, which is a problem because a developer often doesn't know what they don't know.

"Traditional observability requires you to anticipate what you'll need to debug," Marashlian said. "But when a novel issue surfaces, especially deep within a large, complex codebase, you're often missing the exact data you need."

Drata evaluated several solutions in the AI site reliability engineering and automated incident response categories and didn't find what was needed. 

"Most tools we evaluated were excellent at managing the incident process, routing tickets, summarizing Slack threads, or correlating graphs," he said. "But they often stopped short of the code itself. They could tell us 'Service A is down,' but they couldn't tell us why specifically."

Some tools, including error monitors like Sentry, can capture exceptions. The challenge, according to Adler, is that being made aware of exceptions is useful but doesn't connect them to business impact or provide the execution context AI agents need to propose fixes.

How runtime sensors work differently

Runtime sensors push intelligence to the edge where code executes. Hud's sensor runs as an SDK that integrates with a single line of code. It sees every function execution but only sends lightweight aggregate data unless something goes wrong.

When errors or slowdowns occur, the sensor automatically gathers deep forensic data including HTTP parameters, database queries and responses, and full execution context. The system establishes performance baselines within a day and can alert on both dramatic slowdowns and outliers that percentile-based monitoring misses.
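As a rough illustration of catching outliers that percentile dashboards can smooth over, a baseline of mean and standard deviation can flag a single anomalous execution. This is a simplified sketch under stated assumptions, not Hud's actual baselining logic:

```python
import statistics

def build_baseline(durations_ms):
    """Summarize a period of per-function timings into a simple baseline.
    Assumes durations_ms contains at least two samples."""
    return {
        "mean": statistics.fmean(durations_ms),
        "stdev": statistics.stdev(durations_ms),
    }

def is_outlier(baseline, duration_ms, sigmas=3.0):
    """Flag a single slow execution that a p95 view can average away:
    anything beyond `sigmas` standard deviations from the mean."""
    return duration_ms > baseline["mean"] + sigmas * baseline["stdev"]
```

A p95 alert on a function that is fast 99% of the time would stay quiet; the sigma test above fires on the one pathological call.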

"Now we just get all of this information for all of the functions regardless of what level they are, even for underlying packages," Eilon said. "Sometimes you might have an issue that is very deep, and we still see it pretty fast."

The platform delivers data through four channels:

  • Web application for centralized monitoring and analysis

  • IDE extensions for VS Code, JetBrains and Cursor that surface production metrics directly where code is written

  • MCP server that feeds structured data to AI coding agents

  • Alerting system that identifies issues without manual configuration

The MCP server integration is critical for AI-assisted development. Monday.com engineers now query production behavior directly within Cursor. 

"I can just ask Cursor a question: Hey, why is this endpoint slow?" Eilon said. "When it uses the Hud MCP, I get all of the granular metrics, and this function is 30% slower since this deployment. Then I can also find the root cause."

This changes the incident response workflow. Instead of starting in Datadog and drilling down through layers, engineers start by asking an AI agent to diagnose the issue. The agent has immediate access to function-level production data.
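A toy version of the comparison an agent can make once it has function-level metrics, such as spotting the function that is 30% slower since a deployment, might look like this (function names and the threshold are invented for illustration):

```python
def find_regressions(before_ms, after_ms, threshold=0.30):
    """Compare per-function mean latency across two deployments and
    return the functions that slowed by more than `threshold`."""
    regressions = {}
    for name, baseline in before_ms.items():
        current = after_ms.get(name)
        if current is not None and current > baseline * (1 + threshold):
            regressions[name] = (current - baseline) / baseline
    return regressions

# Hypothetical per-function mean latencies before and after a deploy.
before = {"render_board": 40.0, "load_items": 120.0}
after = {"render_board": 41.0, "load_items": 168.0}
slow = find_regressions(before, after)  # only load_items regressed
```

Without function-level data, this comparison would require manually correlating logs across deployments; with it, the diff is a dictionary lookup.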

From voodoo incidents to minutes-long fixes

The shift from theoretical capability to practical impact becomes clear in how engineering teams actually use runtime sensors. What used to take hours or days of detective work now resolves in min…

Content automatically truncated.

🔗 Source: venturebeat.com


📌 MAROKO133 Breaking ai: King Gizzard Responds to Being Impersonated by AI on Spotify

Acclaimed Australian prog rock band King Gizzard & the Lizard Wizard made headlines earlier this year when it quit Spotify, protesting the platform’s CEO, Daniel Ek, who heavily invested in an AI weapons company.

The band was one of several music acts to pull their music from Spotify over ethical concerns. Many have taken issue with how little artists earn per stream on the platform, or with the company donating a sizable sum to President Donald Trump's inauguration ceremony.

Then something darker happened: an impostor created a Spotify act under the near-identical name "King Lizard Wizard" and used AI to generate songs with the same titles as actual King Gizzard tracks, ripping off their lyrics and sound. The knockoffs accumulated tens of thousands of streams and sat on the streaming service for weeks without detection.

Outspoken King Gizzard & The Lizard Wizard frontman Stu Mackenzie has now excoriated the platform after finding out about the ruse.

“[I’m] trying to see the irony in this situation,” he said in a statement quoted by The Music. “But seriously wtf we are truly doomed.”

Spotify has since pulled down the offending material, with a spokesperson telling Futurism in a statement that it “strictly prohibits any form of artist impersonation.”

“The content in question was removed for violating our platform policies, and no royalties were paid out for any streams generated,” the spokesperson added.

But the company’s reactive cat-and-mouse game isn’t exactly assuring artists, given Mackenzie’s reaction.

The incident highlights how Spotify is seriously struggling to keep AI slop at bay on its platform. While the company announced new policies to protect artists against “spam, impersonation, and deception” in September, we continue to see offending AI impersonations landing in users’ Release Radar and Discover Weekly playlists, which the company prominently recommends to them.

Worse yet, as Platformer reported last month, a separate King Gizzard impersonator had previously attempted to cash in on the band's royalties using AI — meaning that if there was one band that Spotify should have been manually screening for impostors, it should have been King Gizzard.

In short, Spotify has a major PR headache to clean up as it reels from an onslaught of AI slop.

And a growing number of artists, including King Gizzard, have finally had enough and are looking for greener pastures. Who could blame them?

More on the incident: King Gizzard Pulled Their Music From Spotify in Protest, and Now Spotify Is Hosting AI Knockoffs of Their Songs

The post King Gizzard Responds to Being Impersonated by AI on Spotify appeared first on Futurism.

🔗 Source: futurism.com


🤖 MAROKO133 Note

This article is an automatic summary drawn from several trusted sources. We pick trending topics so you stay up to date without falling behind.

✅ Next update in 30 minutes — a random theme awaits!

Author: timuna