📌 MAROKO133 Update ai: 7 ‘secret’ systems that make humanoid robots think, walk an
Humanoid robots have captured the public imagination for decades, from Honda’s ASIMO to Tesla’s Optimus and Agility Robotics’ Digit.
However, what makes these machines deeply human is what lies under the hood—an intricate combination of actuation, control, sensing, and system integration that’s as much biology-inspired as it is mechanical engineering.
In reality, these are not secrets per se but engineering challenges that have been widely studied in labs and R&D departments. Still, beyond the headline technologies usually credited with running these robots, certain advances often go underreported.
Let’s explore these seven engineering secrets—the tech working behind the scenes to make humanoids behave like humans.
1. Advanced actuator technology
Think of actuators as the robot’s muscles: the technology that enables humanoids to move. Actuators come in three types: electric, hydraulic, or hybrid. Electric ones are compact and precise, while hydraulic ones deliver larger, more powerful movements.
However, scientists are now trying musculoskeletal systems, which work more like real human bodies. Instead of using one motor per joint, they utilize tendons and cables, much like our muscles and ligaments.
These tendons pull and release to enable smooth movement, distribute force evenly, and make the robot more flexible and natural. Imagine how your arm feels springy when you throw a ball. That’s what these systems let robots do.
In simple words, older robots move like machines, but new robots behave more human-like.
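The tendon idea above can be illustrated with the basic torque relation: each tendon contributes its tension times its moment arm, so several "muscles" can share the load on one joint. This is a minimal sketch with made-up tensions and moment arms, not any specific robot's design.

```python
# Illustrative sketch of tendon-driven actuation (all values hypothetical):
# net joint torque is the sum of each tendon's tension times its moment arm.

def joint_torque(tensions, moment_arms):
    """Net torque (N*m) from tendon tensions (N) and moment arms (m)."""
    return sum(t * r for t, r in zip(tensions, moment_arms))

# Two antagonistic tendons pulling in opposite directions, like a
# biceps/triceps pair: positive arm flexes, negative arm extends.
flexor, extensor = 40.0, 25.0   # tendon tensions in newtons
arms = [0.02, -0.02]            # moment arms in meters

tau = joint_torque([flexor, extensor], arms)
print(f"net joint torque: {tau:.2f} N*m")  # 40*0.02 - 25*0.02 = 0.30
```

Because both tendons stay under tension, the joint can be stiffened (co-contraction) or relaxed without changing the net torque, which is exactly the springy behavior the text describes.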
2. Balance control systems
Walking on two legs is a mammoth challenge for robots. The Zero Moment Point (ZMP) method is the foundational technique: keep the point where the net ground-reaction force acts within the support polygon of the robot’s feet, and the robot will not tip over.
Modern humanoids now combine ZMP with Center-of-Mass (CoM) control, whole-body optimization, and reactive balance strategies. For instance, the Atlas robot continuously adjusts limb trajectories to maintain balance under external disturbances.
The technology involves using other parts of the body, such as the shoulders, arms, or back, to maintain balance. This concept is known as multi-contact balance, which enables a robot to lean, roll, or brace itself like a human.
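The ZMP criterion described above is often derived from a simplified "cart-table" model of the robot's center of mass. The sketch below uses that textbook model with illustrative numbers; real controllers work in 3D and are far more elaborate.

```python
# Minimal ZMP stability check using the planar cart-table model
# (hypothetical numbers): the robot stays upright as long as the
# zero moment point lies inside the foot's support interval.

def zmp(com_x, com_z, com_acc_x, g=9.81):
    """Zero Moment Point along x for a point-mass (cart-table) model."""
    return com_x - (com_z / g) * com_acc_x

def is_stable(zmp_x, foot_min_x, foot_max_x):
    """True if the ZMP lies within the support polygon (1D interval here)."""
    return foot_min_x <= zmp_x <= foot_max_x

# CoM 2 cm ahead of the ankle, 0.9 m high, accelerating forward at 1 m/s^2.
z = zmp(com_x=0.02, com_z=0.9, com_acc_x=1.0)
print(f"ZMP at {z:.3f} m, stable: {is_stable(z, -0.10, 0.15)}")
```

Note how forward acceleration pushes the ZMP backward: accelerate too hard and the ZMP leaves the heel edge of the foot, which is why robots must plan accelerations, not just positions.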
3. Sophisticated sensor integration
About 80% of what humans sense comes through their eyes – and robots aren’t far behind in catching up with that ability. For instance, Tesla’s Optimus robot utilizes eight cameras, just like Tesla cars, to perceive the world from different angles.
However, robots aren’t dependent merely on vision. They utilize sensors such as IMUs, which measure movement and balance, and joint encoders that track the movements of the joints.
Researchers are now trying out sensor simplification through learning – the idea of having robots estimate external forces using only IMUs and motion data. This strategy enables humanoids to be cheaper and lighter without compromising balance or awareness.
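The article does not specify which estimators these robots run, but a classic building block for IMU-based state estimation is the complementary filter, which blends a fast-but-drifting gyroscope with a noisy-but-drift-free accelerometer. The gain and sensor values below are illustrative only.

```python
# Sketch of a complementary filter fusing gyro and accelerometer
# readings into one tilt-angle estimate (all numbers illustrative).

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer's angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(100):  # 1 second of samples at 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.1,
                                 accel_angle=0.1, dt=0.01)
print(f"estimated tilt: {angle:.4f} rad")
```

The high `alpha` trusts the gyro over short horizons while the small accelerometer term slowly corrects drift, which is why this cheap filter remains a staple on lightweight humanoid hardware.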
4. Gripper dexterity and hand design
Human hands have 27 degrees of freedom, 130,000 sensors, and unparalleled adaptability. Replicating that complexity is no easy task. Tesla’s Optimus now features 22 degrees of freedom per hand, while the TESOLLO DG-5F gripper uses tendon routing and compact reduction gears.
A lesser-known innovation is tendon length-based self-estimation, where robots sense joint angles solely through tendon stretch, rather than relying on motor encoders. This biologically inspired sensing mirrors how human muscles detect tension, paving the way for simpler yet smarter hands.
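For a tendon routed over a circular pulley, the geometry behind this kind of sensing is simple: tendon excursion equals pulley radius times joint angle, so the angle can be recovered as excursion divided by radius. The pulley model and all numbers below are illustrative assumptions, not the cited hands' actual kinematics.

```python
# Hypothetical sketch of tendon length-based joint sensing: for a tendon
# wrapped over a pulley of radius r, excursion delta_l maps back to the
# joint angle as theta = delta_l / r.

import math

def joint_angle_from_tendon(delta_l, pulley_radius):
    """Estimate the joint angle (rad) from measured tendon excursion (m)."""
    return delta_l / pulley_radius

# 6.28 mm of tendon travel over a 10 mm pulley is about 0.628 rad (~36 deg).
theta = joint_angle_from_tendon(delta_l=0.00628, pulley_radius=0.010)
print(f"{theta:.3f} rad = {math.degrees(theta):.1f} deg")
```

In a real hand the tendon also stretches elastically under load, so practical estimators must model or learn that compliance rather than use pure geometry.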
5. Real-time motion planning
Robots that work with people need to know where to step or reach without bumping into anything or losing balance. Earlier robots calculated movements for all their joints at once, which took too long for real-time reactions.
Now, researchers are using joint-decoupling optimization, which breaks the movement problem into smaller, faster calculations of each part of the body. These calculations help robots think in layers.
Hierarchical systems separate high-level decisions (“walk around that obstacle”) from low-level control (“move left ankle 3°”). This innovation enables humanoids like Digit to replan movements in milliseconds, allowing for responsive collaboration with humans.
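The split between high-level decisions and low-level control can be shown with a toy planner: a slow layer inserts a detour waypoint around an obstacle, while a fast layer issues small bounded steps toward the current waypoint. This is a deliberately simplified 2D sketch, not Digit's actual planning stack.

```python
# Toy hierarchical planner: coarse waypoints on top, fine steps below.
# All geometry and numbers are illustrative.

def high_level_plan(start, goal, obstacle, clearance=0.5):
    """Coarse layer: if the straight path crosses the obstacle's x
    position, route through a waypoint offset sideways by `clearance`."""
    (sx, _), (gx, _), (ox, oy) = start, goal, obstacle
    if min(sx, gx) < ox < max(sx, gx):
        return [start, (ox, oy + clearance), goal]
    return [start, goal]

def low_level_step(pos, waypoint, max_step=0.1):
    """Fine layer: one small bounded step toward the current waypoint."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= max_step:
        return waypoint  # close enough: snap to the waypoint
    return (pos[0] + max_step * dx / dist, pos[1] + max_step * dy / dist)

path = high_level_plan((0, 0), (2, 0), obstacle=(1, 0))
pos = path[0]
for waypoint in path[1:]:
    while pos != waypoint:
        pos = low_level_step(pos, waypoint)
print("waypoints:", path, "final position:", pos)
```

The point of the layering is that only the cheap low-level loop runs at every control tick; the expensive high-level replan happens far less often, which is what makes millisecond-scale reactions feasible.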
6. Compliance control
Robots require compliance: the ability to bend or yield when pushed rather than staying rigid. Older robots used control schemes that simply adjusted how stiffly the joints held their positions.
Current humanoid robots use Series Elastic Actuators (SEAs): motors with a spring placed in series between the motor and the joint. Acting like shock absorbers, the springs help the robot safely handle sudden hits or pressure.
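The reason an SEA is compliant is that the force reaching the joint is just Hooke's law on the spring's deflection, and reading that deflection doubles as a cheap force sensor. The stiffness value below is made up for illustration.

```python
# Sketch of series elastic actuation: the motor drives one end of a
# spring, the joint sits on the other end, and the transmitted force
# is Hooke's law on the deflection. Stiffness is a made-up value.

def sea_force(motor_pos, joint_pos, stiffness=5000.0):
    """Force (N) transmitted through the series spring (positions in m)."""
    return stiffness * (motor_pos - joint_pos)

# An impact shoves the joint 2 mm past where the motor commanded it:
# the spring absorbs the hit as a modest restoring force instead of a
# rigid shock through the gearbox.
force = sea_force(motor_pos=0.000, joint_pos=0.002)
print(f"spring force: {force:.1f} N")  # 5000 * -0.002 = -10.0 N
```

The design trade-off is bandwidth: the spring filters out impacts but also slows how fast the motor's force reaches the joint, which is why stiffness tuning matters.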
The next aim is to achieve active compliance learning, where the humanoid robot becomes capable of recognizing the difference between its own movements and external forces.
Researchers are also creating soft robots, integrating compliant skins and deformable joints directly into robotic structures.
7. Energy management
Despite breathtaking demonstrations, most humanoids can only run for one or two hours on a single charge. The likes of Tesla Optimus use a…
[Content automatically truncated.]
🔗 Source: interestingengineering.com
📌 MAROKO133 Update ai: Vibe coding platform Cursor releases first in-house LLM, Composer
The vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM) as part of its Cursor 2.0 platform update.
Composer is designed to execute coding tasks quickly and accurately in production-scale environments, representing a new step in AI-assisted programming. It's already being used by Cursor’s own engineering staff in day-to-day development — indicating maturity and stability.
According to Cursor, Composer completes most interactions in less than 30 seconds while maintaining a high level of reasoning ability across large and complex codebases.
The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows—where autonomous coding agents plan, write, test, and review code collaboratively.
Previously, Cursor supported "vibe coding" — using AI to write or complete code based on natural language instructions from a user, even someone untrained in development — atop other leading proprietary LLMs from the likes of OpenAI, Anthropic, Google, and xAI. These options are still available to users.
Benchmark Results
Composer’s capabilities are benchmarked using "Cursor Bench," an internal evaluation suite derived from real developer agent requests. The benchmark measures not just correctness, but also the model’s adherence to existing abstractions, style conventions, and engineering practices.
On this benchmark, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second — about twice as fast as leading fast-inference models and four times faster than comparable frontier systems.
Cursor’s published comparison groups models into several categories: “Best Open” (e.g., Qwen Coder, GLM 4.6), “Fast Frontier” (Haiku 4.5, Gemini Flash 2.5), “Frontier 7/2025” (the strongest model available midyear), and “Best Frontier” (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes.
A Model Built with Reinforcement Learning and Mixture-of-Experts Architecture
Research scientist Sasha Rush of Cursor provided insight into the model’s development in posts on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model:
“We used RL to train a big MoE model to be really good at real-world coding, and also very fast.”
Rush explained that the team co-designed both Composer and the Cursor environment to allow the model to operate efficiently at production scale:
“Unlike other ML systems, you can’t abstract much from the full-scale system. We co-designed this project and Cursor together in order to allow running the agent at the necessary scale.”
Composer was trained on real software engineering tasks rather than static datasets. During training, the model operated inside full codebases using a suite of production tools—including file editing, semantic search, and terminal commands—to solve complex engineering problems. Each training iteration involved solving a concrete challenge, such as producing a code edit, drafting a plan, or generating a targeted explanation.
The reinforcement loop optimized both correctness and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously.
This design enables Composer to work within the same runtime context as the end-user, making it more aligned with real-world coding conditions—handling version control, dependency management, and iterative testing.
From Prototype to Production
Composer’s development followed an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks.
“Cheetah was the v0 of this model primarily to test speed,” Rush said on X. “Our metrics say it [Composer] is the same speed, but much, much smarter.”
Cheetah’s success at reducing latency helped Cursor identify speed as a key factor in developer trust and usability.
Composer maintains that responsiveness while significantly improving reasoning and task generalization.
Developers who used Cheetah during early testing noted that its speed changed how they worked. One user commented that it was “so fast that I can stay in the loop when working with it.”
Composer retains that speed but extends capability to multi-step coding, refactoring, and testing tasks.
Integration with Cursor 2.0
Composer is fully integrated into Cursor 2.0, a major update to the company’s agentic development environment.
The platform introduces a multi-agent interface, allowing up to eight agents to run in parallel, each in an isolated workspace using git worktrees or remote machines.
Within this system, Composer can serve as one or more of those agents, performing tasks independently or collaboratively. Developers can compare multiple results from concurrent agent runs and select the best output.
Cursor 2.0 also includes supporting features that enhance Composer’s effectiveness:
- In-Editor Browser (GA) – enables agents to run and test their code directly inside the IDE, forwarding DOM information to the model.
- Improved Code Review – aggregates diffs across multiple files for faster inspection of model-generated changes.
- Sandboxed Terminals (GA) – isolate agent-run shell commands for secure local execution.
- Voice Mode – adds speech-to-text controls for initiating or managing agent sessions.
While these platform updates expand the overall Cursor experience, Composer is positioned as the technical core enabling fast, reliable agentic coding.
Infrastructure and Training Systems
To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs.
The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead.
This configuration allows Cursor to train models natively at low precision without requiring post-training quantization, improving both inference speed and efficiency.
Composer’s training relied on hundreds of thousands of concurrent sandboxed environments—each a self-contained coding workspace—running in the cloud. The company adapted its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.
Enterprise Use
Composer’s performance improvements are supported by infrastructure-level changes across Cursor’s code intelligence stack.
The company has optimized its Language Server Protocols (LSPs) for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tier…
[Content automatically truncated.]
🔗 Source: venturebeat.com
🤖 MAROKO133 Notes
This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date.
✅ Next update in 30 minutes: a random topic awaits!
