MAROKO133 Exclusive ai: Eligible iPhone users could get up to $95 in Apple’s $250M Siri AI settlement

📌 MAROKO133 Hot ai: Eligible iPhone users could get up to $95 in Apple’s $250M Siri AI settlement

Apple’s AI ambitions may soon put a little money back into customers’ pockets. Some iPhone owners in the United States could receive payouts of up to $95 after Apple agreed to settle a $250 million class-action lawsuit tied to its heavily promoted Siri AI features.

The lawsuit accused Apple of advertising advanced artificial intelligence capabilities that were not available when certain iPhones reached consumers. Plaintiffs argued the company sold buyers on a smarter Siri experience that remained unavailable long after launch.

The proposed settlement still requires approval from a federal judge before payments can go out.

Siri features delayed

Apple unveiled its Apple Intelligence platform during the iPhone 16 launch cycle in 2024. The company promoted a new generation of Siri features alongside the iPhone 16 lineup and select iPhone 15 Pro models.

Apple positioned the AI upgrades as a major selling point. The company promised a more personalized Siri assistant with stronger contextual awareness and deeper app integration. But consumers alleged those features failed to appear when the devices launched.

The lawsuit, initially filed by California resident Peter Landsheft in March 2025, claimed Apple misled buyers through aggressive AI-focused marketing campaigns. Additional plaintiffs later joined the case in federal court in San Francisco. According to court filings, the complaint said Apple “deceived millions of consumers into spending hundreds of dollars on a phone they did not need, based on features that do not exist.”

The filing also stated Apple was caught off-guard by consumer demand for the Siri AI tools. Buyers reportedly became frustrated after learning the features would arrive later than expected. Apple still has not fully delivered the Siri overhaul nearly two years after first promoting the upgrades.

Apple settles lawsuit

Apple denied the allegations in the lawsuit and maintained it acted properly. In a statement reported by USA TODAY, the company said it settled the case so it could keep “delivering the most innovative products and services to our users.”

In a fuller statement cited by the Associated Press, Apple said, “Apple has reached a settlement to resolve claims related to the availability of two additional features. We resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users.”

Court documents showed Apple defended its broader AI rollout during settlement discussions. The company said it already launched more than 20 Apple Intelligence features and plans to release more Siri-related AI tools through future software updates. Both parties filed the proposed settlement agreement on May 5. A federal judge will review the deal during a hearing scheduled for June.

Who can get paid?

If approved, the settlement will cover consumers in the United States who purchased eligible devices between June 10, 2024, and March 29, 2025. Eligible devices include the iPhone 16, iPhone 16e, iPhone 16 Plus, iPhone 16 Pro, iPhone 16 Pro Max, iPhone 15 Pro, and iPhone 15 Pro Max.

Consumers could receive at least $25 for each eligible device. The payout may increase to as much as $95 depending on the number of approved claims and other factors. Court filings said eligible customers will receive notifications by email or standard mail with instructions for filing claims through a settlement website.

🔗 Source: interestingengineering.com


📌 MAROKO133 Update ai: ByteDance Introduces Astra: A Dual-Model Architecture for Autonomous Robot Navigation

The increasing integration of robots across various sectors, from industrial manufacturing to daily life, highlights a growing need for advanced navigation systems. However, contemporary robot navigation systems face significant challenges in diverse and complex indoor environments, exposing the limitations of traditional approaches. Addressing the fundamental questions of “Where am I?”, “Where am I going?”, and “How do I get there?”, ByteDance has developed Astra, an innovative dual-model architecture designed to overcome these traditional navigation bottlenecks and enable general-purpose mobile robots.

Traditional navigation systems typically consist of multiple, smaller, and often rule-based modules to handle the core challenges of target localization, self-localization, and path planning. Target localization involves understanding natural language or image cues to pinpoint a destination on a map. Self-localization requires a robot to determine its precise position within a map, especially challenging in repetitive environments like warehouses where traditional methods often rely on artificial landmarks (e.g., QR codes). Path planning further divides into global planning for rough route generation and local planning for real-time obstacle avoidance and reaching intermediate waypoints.
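As a rough illustration of that decomposition, here is a minimal Python sketch of the classical modular stack; the class and method names are hypothetical, not taken from any particular system.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    yaw: float  # heading in radians


class TargetLocalizer:
    def resolve(self, instruction: str) -> Pose:
        """Map a goal cue ("go to the charging dock") to a pose on the map."""
        raise NotImplementedError

class SelfLocalizer:
    def locate(self, image) -> Pose:
        """Estimate the robot's own pose, e.g. by matching QR-code landmarks."""
        raise NotImplementedError

class GlobalPlanner:
    def plan(self, start: Pose, goal: Pose) -> list[Pose]:
        """Generate a rough route of waypoints over the map graph."""
        raise NotImplementedError

class LocalPlanner:
    def step(self, next_waypoint: Pose, obstacles) -> Pose:
        """Avoid obstacles in real time while heading to the next waypoint."""
        raise NotImplementedError
```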

While foundation models have shown promise in integrating smaller models to tackle broader tasks, the optimal number of models and their effective integration for comprehensive navigation remained an open question.

ByteDance’s Astra, detailed in their paper “Astra: Toward General-Purpose Mobile Robots via Hierarchical Multimodal Learning” (website: https://astra-mobility.github.io/), addresses these limitations. Following the System 1/System 2 paradigm, Astra features two primary sub-models: Astra-Global and Astra-Local. Astra-Global handles low-frequency tasks like target and self-localization, while Astra-Local manages high-frequency tasks such as local path planning and odometry estimation. This architecture promises to revolutionize how robots navigate complex indoor spaces.

Astra-Global: The Intelligent Brain for Global Localization

Astra-Global serves as the intelligent core of the Astra architecture, responsible for critical low-frequency tasks: self-localization and target localization. It functions as a Multimodal Large Language Model (MLLM), adept at processing both visual and linguistic inputs to achieve precise global positioning within a map. Its strength lies in utilizing a hybrid topological-semantic graph as contextual input, allowing the model to accurately locate positions based on query images or text prompts.

The construction of this robust localization system begins with offline mapping. The research team developed an offline method to build a hybrid topological-semantic graph G = (V, E, L), summarized in the list below (a minimal data-structure sketch follows it):

  • V (Nodes): Keyframes, obtained by temporally downsampling the input video, act as nodes; each encodes an SfM-estimated 6-Degrees-of-Freedom (DoF) camera pose and landmark references.
  • E (Edges): Undirected edges establish connectivity based on relative node poses, crucial for global path planning.
  • L (Landmarks): Semantic landmark information is extracted by Astra-Global from visual data at each node, enriching the map’s semantic understanding. These landmarks store semantic attributes and are connected to multiple nodes via co-visibility relationships.
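The following Python sketch shows one plausible in-memory layout for G = (V, E, L); all field names and types are assumptions for illustration, since the paper does not publish a concrete schema.

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class Node:  # V: a keyframe node
    node_id: int
    pose: np.ndarray                                       # SfM-estimated 6-DoF pose (4x4 matrix)
    landmark_ids: set[int] = field(default_factory=set)    # landmark references

@dataclass
class Landmark:  # L: a semantic landmark
    landmark_id: int
    name: str                                              # e.g. "reception desk"
    attributes: dict[str, str] = field(default_factory=dict)  # semantic attributes
    visible_from: set[int] = field(default_factory=set)    # co-visible node ids

@dataclass
class HybridMap:  # G = (V, E, L)
    nodes: dict[int, Node] = field(default_factory=dict)
    edges: set[frozenset] = field(default_factory=set)     # E: undirected node connectivity
    landmarks: dict[int, Landmark] = field(default_factory=dict)

    def connect(self, a: int, b: int) -> None:
        """Add an undirected edge derived from the relative pose of two nodes."""
        self.edges.add(frozenset((a, b)))

    def observe(self, node_id: int, landmark_id: int) -> None:
        """Record a co-visibility link between a keyframe and a landmark."""
        self.nodes[node_id].landmark_ids.add(landmark_id)
        self.landmarks[landmark_id].visible_from.add(node_id)
```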

In practical localization, Astra-Global’s self-localization and target localization capabilities leverage a coarse-to-fine two-stage process for visual-language localization. The coarse stage analyzes input images and localization prompts, detects landmarks, establishes correspondence with a pre-built landmark map, and filters candidates based on visual consistency. The fine stage then uses the query image and coarse output to sample reference map nodes from the offline map, comparing their visual and positional information to directly output the predicted pose.
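The two-stage flow can be summarized roughly as below; every function name is a hypothetical stand-in for an MLLM call or a map query, and only the coarse-to-fine control flow is taken from the description above.

```python
def localize(query_image, prompt, hybrid_map, mllm):
    """Hedged sketch of Astra-Global's coarse-to-fine visual-language localization."""
    # Coarse stage: detect landmarks in the query, match them against the
    # pre-built landmark map, and keep only visually consistent candidates.
    detected = mllm.detect_landmarks(query_image, prompt)
    candidates = hybrid_map.match_landmarks(detected)
    candidates = [c for c in candidates if mllm.visually_consistent(query_image, c)]

    # Fine stage: sample reference map nodes near the coarse candidates and
    # let the model compare visual/positional cues to output the final pose.
    reference_nodes = hybrid_map.sample_nodes(candidates)
    return mllm.predict_pose(query_image, reference_nodes)  # predicted 6-DoF pose
```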

For language-based target localization, the model interprets natural language instructions, identifies relevant landmarks using their functional descriptions within the map, and then leverages landmark-to-node association mechanisms to locate relevant nodes, retrieving target images and 6-DoF poses.
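Reusing the HybridMap sketch above, the landmark-to-node association might look like this; ground_instruction and the return format are illustrative assumptions.

```python
def locate_target(instruction: str, hybrid_map, mllm):
    """Hypothetical language-based target localization over the hybrid map."""
    # Let the MLLM pick the landmark whose functional description best matches
    # the instruction, e.g. "take me somewhere I can heat up my lunch".
    landmark = mllm.ground_instruction(instruction, hybrid_map.landmarks)

    # Landmark-to-node association: every keyframe that co-observes it.
    nodes = [hybrid_map.nodes[i] for i in landmark.visible_from]

    # Return the associated target images and 6-DoF poses.
    return [(n.node_id, n.pose) for n in nodes]
```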

To empower Astra-Global with robust localization abilities, the team employed a meticulous training methodology. Using Qwen2.5-VL as the backbone, they combined Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO). SFT involved diverse datasets for various tasks, including coarse and fine localization, co-visibility detection, and motion trend estimation. In the GRPO phase, a rule-based reward function (including format, landmark extraction, map matching, and extra landmark rewards) was used to train for visual-language localization. Experiments showed GRPO significantly improved Astra-Global’s zero-shot generalization, achieving 99.9% localization accuracy in unseen home environments, surpassing SFT-only methods.
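A rule-based reward of the kind described might be composed along these lines; the four terms mirror the ones named above (format, landmark extraction, map matching, extra landmarks), but the weights and exact definitions are illustrative guesses, not the paper’s.

```python
def localization_reward(response, ground_truth) -> float:
    """Hypothetical composite reward for the GRPO phase."""
    reward = 0.0
    if response.well_formatted:                      # format reward: parseable output
        reward += 0.25
    # Landmark-extraction reward: overlap with the ground-truth landmarks.
    hits = set(response.landmarks) & set(ground_truth.landmarks)
    reward += 0.25 * len(hits) / max(len(ground_truth.landmarks), 1)
    if response.matched_node == ground_truth.node:   # map-matching reward
        reward += 0.4
    # Extra-landmark term: discourage hallucinated landmarks.
    extras = set(response.landmarks) - set(ground_truth.landmarks)
    reward -= 0.1 * len(extras)
    return reward
```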

Astra-Local: The Intelligent Assistant for Local Planning

Astra-Local handles Astra’s high-frequency tasks: it is a multi-task network that efficiently generates local paths and accurately estimates odometry from sensor data. Its architecture comprises three core components: a 4D spatio-temporal encoder, a planning head, and an odometry head.

The 4D spatio-temporal encoder replaces traditional mobile stack perception and prediction modules. It begins with a 3D spatial encoder that processes N omnidirectional images through a Vision Transformer (ViT) and Lift-Splat-Shoot to convert 2D image features into 3D voxel features. This 3D encoder is trained using self-supervised learning via 3D volumetric differentiable neural rendering. The 4D spatio-temporal encoder then builds upon the 3D encoder, taking past voxel features and future timestamps as input to predict future voxel features through ResNet and DiT modules, providing current and future environmental representations for planning and odometry.
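Structurally, the encoder might be skeletonized as in the PyTorch sketch below. It only mirrors the data flow (multi-view images, 2D-to-3D lifting, temporal prediction conditioned on a future timestamp); simple stand-ins replace the ViT, Lift-Splat-Shoot, ResNet, and DiT components, and all shapes are assumptions.

```python
import torch
import torch.nn as nn

class SpatioTemporalEncoder4D(nn.Module):
    def __init__(self, feat_dim=64, voxel_grid=(8, 32, 32)):
        super().__init__()
        self.feat_dim = feat_dim
        self.voxel_grid = voxel_grid
        # Stand-in for the ViT backbone: per-view patch features.
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=8, stride=8)
        # Stand-in for the ResNet + DiT temporal modules.
        self.temporal = nn.Conv3d(feat_dim + 1, feat_dim, kernel_size=3, padding=1)

    def lift_splat(self, feats, batch):
        # Placeholder for Lift-Splat-Shoot: pool multi-view 2D features into
        # a fixed 3D voxel grid (the real method uses predicted depth).
        d, h, w = self.voxel_grid
        pooled = feats.mean(dim=(2, 3))                              # (B*N, feat_dim)
        pooled = pooled.view(batch, -1, self.feat_dim).mean(dim=1)   # fuse views
        return pooled.view(batch, self.feat_dim, 1, 1, 1).expand(
            batch, self.feat_dim, d, h, w).contiguous()

    def forward(self, images, future_t):
        # images: (B, N, 3, H, W) omnidirectional views; future_t: (B,)
        b, n = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1))       # (B*N, feat_dim, h, w)
        voxels = self.lift_splat(feats, b)                # (B, feat_dim, D, H, W)
        t = future_t.view(b, 1, 1, 1, 1).expand(b, 1, *self.voxel_grid)
        return self.temporal(torch.cat([voxels, t], dim=1))  # predicted future voxels
```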

The planning head, conditioned on the pre-trained 4D features, robot speed, and task information, generates executable trajectories using Transformer-based flow matching. To prevent collisions, it incorporates a masked Euclidean Signed Distance Field (ESDF) loss: the loss computes the ESDF of a 3D occupancy map and applies a 2D ground-truth trajectory mask, significantly reducing collision rates. Experiments demonstrate superior collision rate and overall score on out-of-distribution (OOD) datasets compared to other methods.
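On a 2D occupancy grid, a masked distance-field penalty of this flavor could look as follows: compute a signed distance field, hinge each predicted waypoint’s clearance against a safety margin, and supervise only where the ground-truth trajectory mask is active. This is an assumed general shape, not Astra’s exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def masked_esdf_loss(occupancy, traj_mask, waypoints, margin=0.3, cell=0.1):
    """Illustrative masked-ESDF collision penalty on a 2D grid.

    occupancy: (H, W) bool, True = obstacle (a 2D slice of the 3D map)
    traj_mask: (H, W) bool, True where the GT trajectory gives supervision
    waypoints: (T, 2) integer grid indices of the predicted local path
    """
    # Signed distance: positive in free space, negative inside obstacles.
    esdf = (distance_transform_edt(~occupancy) - distance_transform_edt(occupancy)) * cell
    rows, cols = waypoints[:, 0], waypoints[:, 1]
    clearance = esdf[rows, cols]
    supervised = traj_mask[rows, cols]                 # apply the 2D trajectory mask
    penalty = np.clip(margin - clearance, 0.0, None)   # hinge on the safety margin
    return float((penalty * supervised).mean())
```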

The odometry head predicts the robot’s relative pose using current and past 4D features plus additional sensor data (e.g., IMU and wheel readings). It trains a Transformer model to fuse information from the different sensors: each sensor modality is processed by a dedicated tokenizer and combined with modality embeddings and temporal positional embeddings.
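The fusion pattern described, per-sensor tokenizers plus modality and temporal embeddings feeding a shared Transformer, might look like the minimal PyTorch sketch below; the dimensions, sensor set, and pose parameterization are assumptions.

```python
import torch
import torch.nn as nn

class OdometryHead(nn.Module):
    def __init__(self, dim=128, voxel_feat=64, imu_dim=6, wheel_dim=2):
        super().__init__()
        # One tokenizer per sensor modality.
        self.tok_voxel = nn.Linear(voxel_feat, dim)
        self.tok_imu = nn.Linear(imu_dim, dim)
        self.tok_wheel = nn.Linear(wheel_dim, dim)
        self.modality_emb = nn.Parameter(torch.zeros(3, dim))   # per-modality embedding
        self.time_emb = nn.Parameter(torch.zeros(64, dim))      # temporal positions
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.pose = nn.Linear(dim, 6)  # relative pose: translation + rotation (e.g. axis-angle)

    def forward(self, voxel_tokens, imu, wheel):
        # voxel_tokens: (B, Tv, voxel_feat); imu: (B, Ti, 6); wheel: (B, Tw, 2)
        seqs = [
            self.tok_voxel(voxel_tokens) + self.modality_emb[0],
            self.tok_imu(imu) + self.modality_emb[1],
            self.tok_wheel(wheel) + self.modality_emb[2],
        ]
        # Add temporal positional embeddings per modality, then concatenate.
        seqs = [s + self.time_emb[: s.shape[1]] for s in seqs]
        x = self.encoder(torch.cat(seqs, dim=1))
        return self.pose(x.mean(dim=1))   # pooled tokens -> relative pose
```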

Content automatically shortened.

🔗 Source: syncedreview.com


🤖 MAROKO133 Note

This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes, with a random topic to follow!

Author: timuna