MAROKO133 Update ai: New robot AI predicts physical motion from video to guide machines
Robotics startup Rhoda AI has emerged from stealth with a new approach to robot intelligence that it says can help machines operate reliably outside controlled laboratory settings.
The company also announced it has raised $450 million in Series A funding to scale its technology and expand industrial deployments.
The system, called FutureVision, is built on a model architecture that predicts how the physical world will change and then converts those predictions into robot actions.
Rhoda says the system can continuously observe its surroundings, forecast future states as video, act on those predictions, and repeat the process every few hundred milliseconds.
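The observe-forecast-act cycle described above can be sketched as a simple closed-loop controller. This is an illustrative toy, not Rhoda's actual API: the function names, the `Observation` type, and the stand-in predictor are all assumptions made for the sketch.

```python
# Hypothetical sketch of the closed-loop cycle: observe -> predict future
# frames -> derive actions -> execute the first action -> re-observe.
# All names here are illustrative, not Rhoda's implementation.
from dataclasses import dataclass

@dataclass
class Observation:
    frame: list  # stand-in for a camera frame

def predict_future(obs: Observation, horizon: int) -> list:
    """Stand-in for the video-prediction model: forecast `horizon` frames."""
    return [obs.frame for _ in range(horizon)]

def frames_to_actions(frames: list) -> list:
    """Stand-in for the policy head that maps predicted frames to actions."""
    return [f"move_{i}" for i, _ in enumerate(frames)]

def control_loop(get_observation, execute, cycles: int, horizon: int = 4):
    executed = []
    for _ in range(cycles):                    # one cycle every few hundred ms in practice
        obs = get_observation()                # observe current surroundings
        frames = predict_future(obs, horizon)  # forecast future states as video
        actions = frames_to_actions(frames)    # convert predictions into actions
        execute(actions[0])                    # act on the first step only, then re-plan
        executed.append(actions[0])
    return executed

# Usage: a fixed dummy camera and a no-op actuator.
log = control_loop(lambda: Observation(frame=[0]), lambda a: None, cycles=3)
print(log)
```

The key design point is that only the first predicted action is executed before the loop re-observes, which is what distinguishes this from an open-loop planner that commits to a whole trajectory up front.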
Industrial robots today typically rely on pre-programmed paths and perform best in tightly structured environments. Even newer AI approaches such as vision-language-action models often struggle when conditions change.
Unexpected objects, shifting layouts, or irregular workflows can cause robots to fail or require human intervention.
Rhoda says its approach addresses this limitation by training robot models on internet-scale video data before refining them with robot-specific learning.
Training robots on video
Instead of relying primarily on teleoperated robot demonstrations, Rhoda pre-trains its system using hundreds of millions of online videos.
According to the company, this allows the model to learn patterns of motion, physics, and physical interactions before it ever controls a robot.
The system is then fine-tuned with smaller amounts of real robot data so it can translate visual predictions into physical actions.
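The two-stage recipe (large-scale passive pre-training, then fine-tuning an action mapping on scarce robot data) can be illustrated with a deliberately tiny toy. A 1-D linear model stands in for the video model, and all data and numbers here are synthetic assumptions, not Rhoda's training setup.

```python
# Toy illustration of pre-train-then-fine-tune, with a 1-D linear model
# standing in for the video model. Everything here is synthetic.
import random

random.seed(0)

def fit_linear(xs, ys, steps=2000, lr=0.01):
    """Fit y = w*x + b by gradient descent (stand-in for model training)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Stage 1: "pre-train" dynamics on abundant passive data
# (here the true rule is: next state = 2 * state).
video_x = [random.uniform(-1, 1) for _ in range(500)]
video_y = [2.0 * x for x in video_x]
w_dyn, b_dyn = fit_linear(video_x, video_y)

# Stage 2: fine-tune a small action head on scarce robot data, keeping the
# dynamics model frozen (here: action = -(predicted next state)).
robot_x = [random.uniform(-1, 1) for _ in range(10)]   # far less data
robot_a = [-(2.0 * x) for x in robot_x]
preds = [w_dyn * x + b_dyn for x in robot_x]           # frozen predictor output
w_act, b_act = fit_linear(preds, robot_a)

print(round(w_dyn, 2), round(w_act, 2))  # dynamics ~2.0, action head ~-1.0
```

The point of the toy is the data asymmetry: the dynamics stage uses 500 samples while the action stage recovers its mapping from only 10, because most of the structure was already learned passively.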
Rhoda says the resulting architecture, which it calls a Direct Video Action model, allows robots to adapt to changing conditions while they are working.
Unlike open-loop systems that generate a plan once and execute it without feedback, Rhoda's model updates its actions continuously based on what it observes in the environment.
The company says this closed-loop process helps robots maintain accuracy when conditions shift.
The approach also reduces the amount of robot training data required. Rhoda says new tasks can often be learned with as little as ten hours of teleoperation data.
Robots beyond lab environments
The company says its technology has already been tested in production environments where robots must deal with constantly changing materials and workflows.
In one high-volume manufacturing evaluation, Rhoda reported that a robot system completed a component-processing workflow in under two minutes per cycle without human intervention, exceeding customer performance targets.
“We believe the next era of robotics requires models that understand how the world moves – not just what it looks like or how it's described in language,” said Jagdeep Singh, cofounder and CEO of Rhoda.
“By learning from internet-scale video and operating in closed loop, our systems are designed to adapt to real-world variability in ways conventional approaches struggle to achieve. The goal is simple: robots that work in the real world, not just controlled lab settings.”
Investors say the technology could expand automation into areas that have historically been difficult for machines.
“In manufacturing, tasks with high variability have historically resisted automation. The real challenge isn't solving it once, it's delivering consistent, reliable output under real-world production conditions,” said Jens Wiese, managing partner at VC firm Leitmotif and former Volkswagen Group executive.
Rhoda says the new funding will support further research and engineering work, industrial pilots, and expansion of its robotics team.
The company says FutureVision will eventually serve as a foundation model that can be licensed to partners building robotic hardware and software platforms.
Source: interestingengineering.com
MAROKO133 Hot ai: Amazon Admits Extensive AI Use Is Wreaking Havoc on Its Core Business
Businesses are learning the hard way that rapidly deploying AI tools, and forcing or strongly encouraging their employees to use them, can backfire severely.
The latest appears to be Amazon, though one can debate whether it's taking away the right lessons. On Tuesday, the Financial Times reports, the ecommerce giant summoned a large group of engineers to a meeting addressing recent outages plaguing its online retail business, some of them related to AI coding tools.
In a meeting briefing note, the company described the “trend of incidents” as characterized by a “high blast radius” and “Gen-AI assisted changes.” As a “contributing factor,” the note listed “novel GenAI usage for which best practices and safeguards are not yet fully established.”
“Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at Amazon’s eCommerce Services, told employees in an email, per the FT.
The meeting follows a nearly six-hour outage last week that took down Amazon's shopping website and app, leaving customers unable to place orders. In the aftermath, the company blamed a botched “software code deployment.”
In another series of incidents at its cloud computing division, Amazon Web Services, two separate outages were caused after engineers allowed the company’s in-house AI coding tool to make disastrous changes, additional FT reporting revealed last month. In one case, the AI tool deleted and recreated the entire coding environment.
In response to the earlier reporting, Amazon framed these blunders as an issue related to its protocols around AI usage and “user access control,” rather than an AI autonomy issue, and it appears to be sticking to its guns. The company will not be backing away from deploying AI but is instead insisting on stronger guardrails and more oversight on how it’s used.
Junior and mid-level engineers will now require more senior engineers to sign off on any AI-assisted changes, Treadwell said at the meeting, per the FT's reporting. Treadwell asked staff to attend the typically optional meeting.
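Amazon has not published how this sign-off is enforced, but a gate of this kind is easy to picture: a change flagged as AI-assisted is blocked unless a designated senior engineer has approved it. The trailer names and the reviewer roster below are hypothetical, chosen only to illustrate the policy.

```python
# Hypothetical sketch of a sign-off gate for AI-assisted changes.
# Trailer keys ("AI-Assisted", "Approved-By") and the roster are
# illustrative assumptions, not Amazon's actual tooling.

SENIOR_ENGINEERS = {"alice", "bob"}  # assumed roster of approvers

def change_allowed(commit_trailers: dict) -> bool:
    """Allow a change unless it is AI-assisted and lacks senior approval."""
    ai_assisted = commit_trailers.get("AI-Assisted", "no").lower() == "yes"
    approver = commit_trailers.get("Approved-By", "")
    return (not ai_assisted) or (approver in SENIOR_ENGINEERS)

# Usage examples:
print(change_allowed({"AI-Assisted": "yes"}))                          # blocked: no sign-off
print(change_allowed({"AI-Assisted": "yes", "Approved-By": "alice"}))  # allowed
print(change_allowed({}))                                              # allowed: not AI-assisted
```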
There's no question that AI tools, if they should be used at all, should be closely supervised, especially in programming roles. Like any generative AI model, AI coding tools frequently introduce errors and sometimes struggle to follow instructions, meaning they can take actions that a user never intended.
But Amazon’s renewed focus on implementing more human oversight comes as it’s fired hundreds of workers from its cloud computing division and as it targets laying off 30,000 employees across its corporate workforce overall. Meanwhile, management leans on programmers to heavily use AI tools, with employees previously telling the FT that the company set a target for 80 percent of developers to use AI for coding tasks at least once a week.
In sum: more coding with more AI with more human oversight, but fewer humans. We’ll see how that works out.
More on AI: Insiders Afraid the Government Will Nationalize the AI Industry
The post Amazon Admits Extensive AI Use Is Wreaking Havoc on Its Core Business appeared first on Futurism.
Source: futurism.com
MAROKO133 Note
This article is an automated summary compiled from several trusted sources. We select trending topics so you always stay up to date.
