MAROKO133 Update ai: ByteDance Introduces Astra: A Dual-Model Architecture for A…
The increasing integration of robots across various sectors, from industrial manufacturing to daily life, highlights a growing need for advanced navigation systems. However, contemporary robot navigation systems face significant challenges in diverse and complex indoor environments, exposing the limitations of traditional approaches. Addressing the fundamental questions of “Where am I?”, “Where am I going?”, and “How do I get there?”, ByteDance has developed Astra, an innovative dual-model architecture designed to overcome these traditional navigation bottlenecks and enable general-purpose mobile robots.
Traditional navigation systems typically consist of multiple, smaller, and often rule-based modules to handle the core challenges of target localization, self-localization, and path planning. Target localization involves understanding natural language or image cues to pinpoint a destination on a map. Self-localization requires a robot to determine its precise position within a map, especially challenging in repetitive environments like warehouses where traditional methods often rely on artificial landmarks (e.g., QR codes). Path planning further divides into global planning for rough route generation and local planning for real-time obstacle avoidance and reaching intermediate waypoints.
While foundation models have shown promise in integrating smaller models to tackle broader tasks, the optimal number of models and their effective integration for comprehensive navigation remained an open question.
ByteDance’s Astra, detailed in their paper “Astra: Toward General-Purpose Mobile Robots via Hierarchical Multimodal Learning” (website: https://astra-mobility.github.io/), addresses these limitations. Following the System 1/System 2 paradigm, Astra features two primary sub-models: Astra-Global and Astra-Local. Astra-Global handles low-frequency tasks like target and self-localization, while Astra-Local manages high-frequency tasks such as local path planning and odometry estimation. This architecture promises to revolutionize how robots navigate complex indoor spaces.
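To make the division of labor concrete, here is a purely illustrative sketch of how such a dual-frequency System 1/System 2 loop could be orchestrated. The method names (locate_target, localize, update_odometry, plan), the sensor interface, and the relocalization interval are assumptions for illustration, not details from the paper.

```python
# Schematic dual-frequency navigation loop (illustrative only, not Astra's actual API).
def navigation_loop(astra_global, astra_local, sensors, goal_text, relocalize_every=100):
    # System 2 (Astra-Global), low frequency: "Where am I going?" and "Where am I?"
    goal_pose = astra_global.locate_target(goal_text)
    robot_pose = astra_global.localize(sensors.images())

    for tick in range(100_000):
        if tick % relocalize_every == 0:
            # Periodic global correction to counter odometry drift.
            robot_pose = astra_global.localize(sensors.images())
        # System 1 (Astra-Local), high frequency: odometry update and local planning.
        robot_pose = astra_local.update_odometry(sensors.imu(), sensors.wheels(), robot_pose)
        action = astra_local.plan(sensors.images(), robot_pose, goal_pose)
        sensors.execute(action)
```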
Astra-Global: The Intelligent Brain for Global Localization
Astra-Global serves as the intelligent core of the Astra architecture, responsible for critical low-frequency tasks: self-localization and target localization. It functions as a Multimodal Large Language Model (MLLM), adept at processing both visual and linguistic inputs to achieve precise global positioning within a map. Its strength lies in utilizing a hybrid topological-semantic graph as contextual input, allowing the model to accurately locate positions based on query images or text prompts.
The construction of this robust localization system begins with offline mapping. The research team developed an offline method to build a hybrid topological-semantic graph G = (V, E, L), whose three components are listed below (a minimal data-structure sketch follows the list):
- V (Nodes): Keyframes, obtained by temporally downsampling the input video, act as nodes; each encodes an SfM-estimated 6-Degrees-of-Freedom (DoF) camera pose and references to the landmarks observed there.
- E (Edges): Undirected edges establish connectivity based on relative node poses, crucial for global path planning.
- L (Landmarks): Semantic landmark information is extracted by Astra-Global from visual data at each node, enriching the map’s semantic understanding. These landmarks store semantic attributes and are connected to multiple nodes via co-visibility relationships.
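As a rough illustration of how such a map might be represented in code, the sketch below defines nodes, edges, and landmarks as plain Python data classes. All class and field names are hypothetical and simplified relative to whatever structures Astra actually uses.

```python
# Minimal sketch of a hybrid topological-semantic graph G = (V, E, L).
# Class and field names are illustrative, not from the Astra codebase.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    pose: tuple          # SfM-estimated 6-DoF camera pose (x, y, z, roll, pitch, yaw)
    keyframe_path: str   # downsampled video keyframe associated with this node
    landmark_ids: list = field(default_factory=list)  # landmarks observed from this node

@dataclass
class Landmark:
    landmark_id: int
    name: str            # e.g. "reception desk"
    attributes: dict     # semantic attributes extracted by Astra-Global
    node_ids: set = field(default_factory=set)  # co-visibility: nodes that see this landmark

class TopoSemanticMap:
    def __init__(self):
        self.nodes = {}       # V: keyframe nodes
        self.edges = set()    # E: undirected connectivity, stored as frozenset({a, b})
        self.landmarks = {}   # L: semantic landmarks

    def add_edge(self, a: int, b: int):
        self.edges.add(frozenset((a, b)))

    def nodes_for_landmark(self, landmark_id: int):
        """Landmark-to-node association used for target localization."""
        return [self.nodes[n] for n in self.landmarks[landmark_id].node_ids]
```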
At run time, Astra-Global performs visual-language localization, covering both self-localization and target localization, through a coarse-to-fine two-stage process. The coarse stage analyzes the input image and localization prompt, detects landmarks, establishes correspondences with the pre-built landmark map, and filters candidates based on visual consistency. The fine stage then uses the query image and the coarse output to sample reference map nodes from the offline map, comparing their visual and positional information to directly output the predicted pose.
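The sketch below, which reuses the hypothetical TopoSemanticMap from the earlier snippet, outlines this coarse-to-fine flow. The mllm.detect_landmarks, mllm.matches, mllm.visually_consistent, and mllm.predict_pose calls stand in for prompts to Astra-Global and are not real APIs.

```python
# Illustrative two-stage (coarse-to-fine) visual-language localization flow.
def localize(query_image, prompt, topo_map, mllm):
    # Coarse stage: detect landmarks in the query image and match them against the
    # pre-built landmark map, keeping only visually consistent candidates.
    detected = mllm.detect_landmarks(query_image, prompt)
    candidates = [
        lm for lm in topo_map.landmarks.values()
        if mllm.matches(detected, lm) and mllm.visually_consistent(query_image, lm)
    ]

    # Fine stage: sample reference map nodes connected to the surviving landmarks,
    # compare their visual/positional information with the query, and predict a pose.
    reference_nodes = {
        n.node_id: n
        for lm in candidates
        for n in topo_map.nodes_for_landmark(lm.landmark_id)
    }
    return mllm.predict_pose(query_image, list(reference_nodes.values()))
```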
For language-based target localization, the model interprets natural language instructions, identifies relevant landmarks using their functional descriptions within the map, and then leverages landmark-to-node association mechanisms to locate relevant nodes, retrieving target images and 6-DoF poses.
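A similarly hypothetical sketch of the language-only path: rank stored landmarks against the instruction by their functional descriptions, then follow landmark-to-node associations to recover candidate keyframes and 6-DoF poses.

```python
# Hypothetical language-based target localization (function names are placeholders).
def locate_target(instruction, topo_map, mllm):
    # Ask the model which stored landmarks best match the instruction's intent,
    # e.g. "go somewhere I can heat up my lunch" -> landmark "microwave".
    matched = mllm.rank_landmarks(instruction, list(topo_map.landmarks.values()))
    best = matched[0]
    # Retrieve the associated map nodes; each carries a keyframe and a 6-DoF pose.
    nodes = topo_map.nodes_for_landmark(best.landmark_id)
    return [(n.keyframe_path, n.pose) for n in nodes]
```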
To empower Astra-Global with robust localization abilities, the team employed a meticulous training methodology. Using Qwen2.5-VL as the backbone, they combined Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO). SFT involved diverse datasets for various tasks, including coarse and fine localization, co-visibility detection, and motion trend estimation. In the GRPO phase, a rule-based reward function (including format, landmark extraction, map matching, and extra landmark rewards) was used to train for visual-language localization. Experiments showed GRPO significantly improved Astra-Global’s zero-shot generalization, achieving 99.9% localization accuracy in unseen home environments, surpassing SFT-only methods.
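A hedged sketch of what such a rule-based reward could look like is given below. The component weights, the follows_output_format check, and the penalty for hallucinated landmarks are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative rule-based GRPO reward combining format, landmark extraction,
# map matching, and extra-landmark terms. Weights and checks are assumptions.
def follows_output_format(response):
    # Placeholder format check: the real reward would parse the structured output.
    return response.landmarks is not None and response.matched_map_landmark is not None

def localization_reward(response, ground_truth):
    reward = 0.0
    # Format reward: the model must emit the expected structured output.
    if follows_output_format(response):
        reward += 0.1
    # Landmark extraction reward: credit for landmarks correctly read from the image.
    extracted = set(response.landmarks)
    expected = set(ground_truth.landmarks)
    reward += 0.4 * len(extracted & expected) / max(len(expected), 1)
    # Map matching reward: credit when the extracted landmarks are matched to the right map entry.
    if response.matched_map_landmark == ground_truth.map_landmark:
        reward += 0.4
    # Extra-landmark term: discourage hallucinated landmarks absent from the map.
    reward -= 0.1 * len(extracted - expected)
    return reward
```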
Astra-Local: The Intelligent Assistant for Local Planning
Astra-Local is the intelligent assistant for Astra’s high-frequency tasks: a multi-task network that efficiently generates local paths and accurately estimates odometry from sensor data. Its architecture comprises three core components: a 4D spatio-temporal encoder, a planning head, and an odometry head.
The 4D spatio-temporal encoder replaces traditional mobile stack perception and prediction modules. It begins with a 3D spatial encoder that processes N omnidirectional images through a Vision Transformer (ViT) and Lift-Splat-Shoot to convert 2D image features into 3D voxel features. This 3D encoder is trained using self-supervised learning via 3D volumetric differentiable neural rendering. The 4D spatio-temporal encoder then builds upon the 3D encoder, taking past voxel features and future timestamps as input to predict future voxel features through ResNet and DiT modules, providing current and future environmental representations for planning and odometry.
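The 2D-to-3D lifting at the heart of the Lift-Splat-Shoot step can be sketched in a few lines of PyTorch: predict a depth distribution per pixel, take its outer product with the image features, and splat the resulting frustum features into a shared voxel grid (the splatting geometry is omitted here). Tensor shapes are illustrative, not Astra-Local's actual configuration.

```python
# Conceptual PyTorch sketch of the Lift-Splat-style 2D-to-3D lifting step.
import torch

def lift_image_features(feats, depth_logits):
    """feats: (N, C, H, W) per-camera ViT features; depth_logits: (N, D, H, W)."""
    depth_dist = depth_logits.softmax(dim=1)               # per-pixel depth distribution
    # Outer product: each pixel feature weighted by its probability at each depth bin.
    lifted = depth_dist.unsqueeze(1) * feats.unsqueeze(2)   # (N, C, D, H, W)
    return lifted  # frustum features, to be "splatted" into a shared voxel grid
```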
The planning head, based on pre-trained 4D features, robot speed, and task information, generates executable trajectories using Transformer-based flow matching. To prevent collisions, the planning head incorporates a masked ESDF loss (Euclidean Signed Distance Field). This loss calculates the ESDF of a 3D occupancy map and applies a 2D ground truth trajectory mask, significantly reducing collision rates. Experiments demonstrate its superior performance in collision rate and overall score on out-of-distribution (OOD) datasets compared to other methods.
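Below is a minimal, assumption-laden sketch of such a masked ESDF loss: look up the signed distance at each predicted waypoint with a differentiable bilinear sample, penalize clearances below a safety margin, and weight the penalty by the ground-truth trajectory mask. The margin, cell size, and lookup scheme are guesses for illustration, not the paper's values.

```python
# Illustrative masked ESDF collision loss (formulation simplified).
import torch
import torch.nn.functional as F

def masked_esdf_loss(traj_xy, esdf_grid, gt_mask, safety_margin=0.3, cell_size=0.1):
    """
    traj_xy:   (T, 2) predicted waypoints in map coordinates (meters)
    esdf_grid: (H, W) signed distance to the nearest obstacle, derived from the 3D occupancy map
    gt_mask:   (T,)  2D ground-truth trajectory mask selecting waypoints to supervise
    """
    H, W = esdf_grid.shape
    # Normalize metric waypoints to the [-1, 1] coordinates expected by grid_sample.
    norm = torch.stack([
        traj_xy[:, 0] / (cell_size * W) * 2 - 1,
        traj_xy[:, 1] / (cell_size * H) * 2 - 1,
    ], dim=-1).view(1, 1, -1, 2)
    # Differentiable bilinear lookup of the signed distance at each waypoint.
    dist = F.grid_sample(esdf_grid.view(1, 1, H, W), norm, align_corners=False).view(-1)
    # Penalize waypoints whose clearance falls below the safety margin, masked by gt_mask.
    penalty = F.relu(safety_margin - dist)
    return (penalty * gt_mask).sum() / gt_mask.sum().clamp(min=1)
```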
The odometry head predicts the robot’s relative pose using current and past 4D features and additional sensor data (e.g., IMU, wheel data). It trains a Transformer model to fuse information from the different sensors: each sensor modality is processed by a dedicated tokenizer, combined with modality embeddings and temporal positional embeddings, and fed to the Transformer for fusion.
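A schematic version of that fusion might look like the module below, with one tokenizer per modality, learned modality and temporal embeddings, and a Transformer encoder regressing a relative 6-DoF pose. Every dimension, layer count, and input shape here is a placeholder rather than Astra-Local's actual design.

```python
# Illustrative multi-sensor odometry head (dimensions are placeholders).
import torch
import torch.nn as nn

class OdometryHead(nn.Module):
    def __init__(self, d_model=256, n_modalities=3, max_steps=16):
        super().__init__()
        self.voxel_proj = nn.Linear(512, d_model)   # tokenizer for 4D voxel features
        self.imu_proj = nn.Linear(6, d_model)       # tokenizer for IMU (accel + gyro)
        self.wheel_proj = nn.Linear(2, d_model)     # tokenizer for wheel odometry
        self.modality_emb = nn.Embedding(n_modalities, d_model)
        self.time_emb = nn.Embedding(max_steps, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.pose_out = nn.Linear(d_model, 6)       # relative 6-DoF pose (translation + rotation)

    def forward(self, voxel_feats, imu, wheel):
        """voxel_feats: (B, T, 512), imu: (B, T, 6), wheel: (B, T, 2)"""
        B, T, _ = imu.shape
        t = self.time_emb(torch.arange(T, device=imu.device))   # temporal positional embeddings
        tokens = torch.cat([
            self.voxel_proj(voxel_feats) + self.modality_emb.weight[0] + t,
            self.imu_proj(imu) + self.modality_emb.weight[1] + t,
            self.wheel_proj(wheel) + self.modality_emb.weight[2] + t,
        ], dim=1)                                                # (B, 3T, d_model)
        fused = self.encoder(tokens)
        return self.pose_out(fused.mean(dim=1))                  # predicted relative pose
```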
Content shortened automatically.
Source: syncedreview.com
MAROKO133 Update ai: Under Threat of Perjury, OpenAI’s Former CTO Is Admitting Some Very Interesting Stuff About Sam Altman
The bizarre and messy court battle between OpenAI CEO Sam Altman and former OpenAI investor Elon Musk trudges on. And this week, as revealed in court, OpenAI’s former Chief Technology Officer had some extremely interesting, and at points alarming, things to say about her time working under Altman.
Appearing in a video deposition on Wednesday, former OpenAI CTO and current Thinking Machines Lab CEO Mira Murati, while under threat of perjury, had much to say about Altman’s long-alleged perfidy, a rumored trait so widely discussed that it was the subject of a remarkable investigation by The New Yorker just last month.
Perhaps most strikingly, Murati reportedly told lawyers during her deposition that Altman once incorrectly told her that OpenAI’s legal team had cleared a new AI model to bypass an internal safety board tasked with reviewing new models before release. Asked whether she believed Altman “was telling the truth when he made that statement” to Murati, the former CTO hit back with a simple: “no.”
In other words: under oath, the former CTO (and briefly interim CEO) of the company behind the world’s most popular chatbot, ChatGPT, said that OpenAI’s still-reigning head exec falsely told her that lawyers had greenlit the company to leapfrog over certain safety protocols when, to the then-CTO’s understanding, that wasn’t true. Yikes!
Altman is the defendant in the case brought by Musk, who’s claiming that OpenAI illegally betrayed the company’s non-profit founding by transfiguring into a for-profit company last year. (Musk runs his own for-profit AI company, xAI, so make of his motivations what you will.)
Much of the court battle has centered on the event that insiders have referred to as the “Blip,” or the dizzying multi-day spell in November 2023 when OpenAI’s board suddenly pushed Altman out, alleging he “was not consistently candid in his communications with the board” as its reason for the shock firing. After pushback from staff and intervention from key OpenAI investor Microsoft, however, Altman was rehired just days later, a triumphant return that kicked off a domino-like chain of other departures, Murati’s included.
Murati was grilled about her experience of Altman in the lead-up to the “Blip,” with lawyers reportedly asking her if “by fall of 2023, did you perceive Altman was not candid with you? Truthful? Honest?”
“Not always,” Murati responded.
Lawyers went on to ask if Altman “undermined” Murati in her role as CTO and whether Altman pitted “other execs against one another,” both questions to which Murati straightforwardly responded: “yes.”
More on Musk v Altman: Elon Musk Just Got Badly Humiliated in Court
Source: futurism.com
MAROKO133 Note
This article is an automatic summary compiled from several trusted sources. We pick trending topics so you always stay up to date.
Next update in 30 minutes; a random theme awaits!
