MAROKO133 Breaking AI: Google Alarmed by Formidable AI-Powered Zero-Day Cyberattack
Google was rattled by a cyberattack that used AI to unearth a major flaw in its software that its own developers had no idea about.
The attack, which the New York Times reports was ultimately thwarted, was revealed by researchers at the tech giant on Monday. Their report didn’t specify who the actors behind it might be or when it occurred, but it was clear about what cutting-edge technology was at the heart of it.
“We have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” reads the report.
Google said the hackers used AI to identify what’s known as a zero-day vulnerability: a flaw in a piece of software that wasn’t previously known to its developers. When exploited, such flaws leave developers on the back foot, as the hackers are free to wreak havoc until the white hats figure out how to plug the hole.
In this case, the zero-day bug would’ve allowed the hackers to bypass two-factor authentication on an unspecified “popular open-source, web-based system administration tool,” but only if the attackers already knew a victim’s username and password. Given that two-factor authentication is the last meaningful line of defense for most users, and that passwords are often weak or already leaked online, the ability to sidestep it could’ve been catastrophic for any account whose credentials were compromised.
“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use,” the report stated.
The researchers said this was the first known example of hackers using AI to discover and weaponize a zero-day vulnerability.
“It’s a taste of what’s to come,” John Hultquist, the chief analyst at Google Threat Intelligence Group, which published the report, told the NYT. “We believe this is the tip of the iceberg. This problem is probably much bigger; this is just the first tangible evidence that we can see.”
The attack will add to the atmosphere of unease around AI’s implications for cybersecurity, particularly with the release of Anthropic’s Claude Mythos model last month. Anthropic claimed that the AI system could find zero-day vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” a capability so potentially devastating that the company made a show of only sharing the model with a select group of companies and government agencies. Its rollout has drawn alarm from government leaders and security experts alike.
AI’s cybersecurity threat derives from its much-touted and ever-improving ability to write and parse code, a capability being rapidly embraced by businesses across the tech and financial sectors. Like AI prose, AI code bears its own hallmarks, albeit subtler ones. The Google researchers found that the hackers’ malware contained an abundance of docstrings, the annotations that explain what code does, along with some hallucinated text and “a structured, textbook Pythonic format highly characteristic of LLMs training data.”
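To illustrate what that style looks like, here is a generic, hypothetical snippet, not code from the report: an exhaustive docstring on a trivial function, in tidy textbook-Pythonic form.

```python
# Hypothetical illustration of the stylistic hallmarks described above;
# not code from the incident.
def normalize_hostnames(hostnames: list[str]) -> list[str]:
    """Return a sorted, deduplicated list of lowercase hostnames.

    Args:
        hostnames: Raw hostname strings, possibly mixed-case or padded.

    Returns:
        Sorted unique hostnames, lowercased and stripped.
    """
    return sorted({h.strip().lower() for h in hostnames})
```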
More on AI: Vibe Coded Apps Are Spilling Users’ Personal Information Directly Into the Maw of Greedy Hackers
Source: futurism.com
MAROKO133 Breaking AI: ByteDance Introduces Astra: A Dual-Model Architecture for Mobile Robot Navigation
The increasing integration of robots across various sectors, from industrial manufacturing to daily life, highlights a growing need for advanced navigation systems. However, contemporary robot navigation systems face significant challenges in diverse and complex indoor environments, exposing the limitations of traditional approaches. Addressing the fundamental questions of “Where am I?”, “Where am I going?”, and “How do I get there?”, ByteDance has developed Astra, an innovative dual-model architecture designed to overcome these traditional navigation bottlenecks and enable general-purpose mobile robots.
Traditional navigation systems typically consist of multiple, smaller, and often rule-based modules to handle the core challenges of target localization, self-localization, and path planning. Target localization involves understanding natural language or image cues to pinpoint a destination on a map. Self-localization requires a robot to determine its precise position within a map, especially challenging in repetitive environments like warehouses where traditional methods often rely on artificial landmarks (e.g., QR codes). Path planning further divides into global planning for rough route generation and local planning for real-time obstacle avoidance and reaching intermediate waypoints.
While foundation models have shown promise in integrating smaller models to tackle broader tasks, the optimal number of models and their effective integration for comprehensive navigation remained an open question.
ByteDance’s Astra, detailed in their paper “Astra: Toward General-Purpose Mobile Robots via Hierarchical Multimodal Learning” (website: https://astra-mobility.github.io/), addresses these limitations. Following the System 1/System 2 paradigm, Astra features two primary sub-models: Astra-Global and Astra-Local. Astra-Global handles low-frequency tasks like target and self-localization, while Astra-Local manages high-frequency tasks such as local path planning and odometry estimation. This architecture promises to revolutionize how robots navigate complex indoor spaces.
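To make the dual-frequency split concrete, here is a minimal control-loop sketch: a slow “System 2” localization loop feeding goals to a fast “System 1” planning loop. The class names, rates, and interfaces are assumptions for illustration, not from the paper.

```python
# Minimal sketch of the System 1/System 2 split: a slow global loop
# (localization) feeds goals to a fast local loop (planning/odometry).
# Class names, rates, and interfaces are assumptions, not from the paper.
import time

class AstraGlobalStub:
    def localize(self, image, prompt):
        """Low-frequency: resolve self- and target localization to 6-DoF poses."""
        return {"robot_pose": (0.0, 0.0, 0.0), "goal_pose": (5.0, 2.0, 0.0)}

class AstraLocalStub:
    def step(self, sensor_frame, goal_pose):
        """High-frequency: local path planning toward the current goal."""
        return (0.2, 0.0)  # (linear, angular) velocity command

def run(camera, sensors, max_steps=1000, global_period_s=1.0, local_period_s=0.05):
    sys2, sys1 = AstraGlobalStub(), AstraLocalStub()
    state, last_global = None, float("-inf")
    for _ in range(max_steps):
        now = time.monotonic()
        if now - last_global >= global_period_s:        # slow loop: relocalize
            state = sys2.localize(camera(), prompt="go to the kitchen")
            last_global = now
        cmd = sys1.step(sensors(), state["goal_pose"])  # fast loop: plan
        # cmd would be sent to the robot base here; sleep stands in for actuation.
        time.sleep(local_period_s)
```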
Astra-Global: The Intelligent Brain for Global Localization
Astra-Global serves as the intelligent core of the Astra architecture, responsible for critical low-frequency tasks: self-localization and target localization. It functions as a Multimodal Large Language Model (MLLM), adept at processing both visual and linguistic inputs to achieve precise global positioning within a map. Its strength lies in utilizing a hybrid topological-semantic graph as contextual input, allowing the model to accurately locate positions based on query images or text prompts.
The construction of this robust localization system begins with offline mapping. The research team developed an offline method to build a hybrid topological-semantic graph G = (V, E, L), with the following components (a minimal data-structure sketch follows the list):
- V (Nodes): Keyframes, obtained by temporally downsampling the input video, act as nodes; each encodes an SfM-estimated 6-Degrees-of-Freedom (DoF) camera pose and landmark references.
- E (Edges): Undirected edges establish connectivity based on relative node poses, crucial for global path planning.
- L (Landmarks): Semantic landmark information is extracted by Astra-Global from visual data at each node, enriching the map’s semantic understanding. These landmarks store semantic attributes and are connected to multiple nodes via co-visibility relationships.
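The sketch below captures this structure as plain Python dataclasses; the field names are assumptions, since the paper defines the components only conceptually.

```python
# Minimal sketch of the hybrid topological-semantic graph G = (V, E, L).
# Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:                      # V: keyframe with an SfM-estimated 6-DoF pose
    node_id: int
    pose_6dof: tuple             # (x, y, z, roll, pitch, yaw)
    landmark_ids: list = field(default_factory=list)

@dataclass
class Landmark:                  # L: semantic landmark seen from several nodes
    landmark_id: int
    attributes: dict             # e.g. {"type": "sofa", "function": "seating"}
    covisible_nodes: list = field(default_factory=list)

@dataclass
class TopoSemanticMap:
    nodes: dict = field(default_factory=dict)      # node_id -> Node
    edges: set = field(default_factory=set)        # E: undirected node pairs
    landmarks: dict = field(default_factory=dict)  # landmark_id -> Landmark

    def connect(self, a: int, b: int):
        """Add an undirected connectivity edge used for global path planning."""
        self.edges.add(frozenset((a, b)))
```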
In practical localization, Astra-Global’s self-localization and target localization capabilities leverage a coarse-to-fine two-stage process for visual-language localization. The coarse stage analyzes input images and localization prompts, detects landmarks, establishes correspondence with a pre-built landmark map, and filters candidates based on visual consistency. The fine stage then uses the query image and coarse output to sample reference map nodes from the offline map, comparing their visual and positional information to directly output the predicted pose.
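A control-flow sketch of this coarse-to-fine process, building on the map structure above; every helper here is a hypothetical stand-in for one of Astra-Global’s MLLM or retrieval steps, stubbed only so the flow runs.

```python
# Coarse-to-fine localization control flow. All helpers are hypothetical
# stand-ins for MLLM calls, stubbed to make the sketch runnable.

def mllm_detect_landmarks(image, prompt):          # coarse: landmark detection
    return ["sofa", "tv"]

def match_to_map(detected, landmarks):             # coarse: map correspondence
    return [lm for lm in landmarks.values()
            if lm.attributes.get("type") in detected]

def visually_consistent(candidate, image):         # coarse: consistency filter
    return True

def sample_reference_nodes(map_, candidates):      # fine: nearby keyframes
    return [map_.nodes[n] for lm in candidates for n in lm.covisible_nodes]

def mllm_predict_pose(image, references):          # fine: direct pose output
    return references[0].pose_6dof if references else None

def localize(query_image, prompt, map_):
    """Coarse stage filters landmark candidates; fine stage predicts the pose."""
    detected = mllm_detect_landmarks(query_image, prompt)
    candidates = [c for c in match_to_map(detected, map_.landmarks)
                  if visually_consistent(c, query_image)]
    references = sample_reference_nodes(map_, candidates)
    return mllm_predict_pose(query_image, references)
```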
For language-based target localization, the model interprets natural language instructions, identifies relevant landmarks using their functional descriptions within the map, and then leverages landmark-to-node association mechanisms to locate relevant nodes, retrieving target images and 6-DoF poses.
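For the language-only case, the same map supports a landmark-to-node lookup; the substring matching rule below is a deliberately crude assumption for illustration.

```python
# Language-based target localization: match instruction text against landmark
# functional descriptions, then follow co-visibility links to nodes.
def locate_target(instruction: str, map_):
    matches = [lm for lm in map_.landmarks.values()
               if (f := str(lm.attributes.get("function", "")))  # skip empty
               and f.lower() in instruction.lower()]
    node_ids = {n for lm in matches for n in lm.covisible_nodes}
    # Real nodes would also carry keyframe images; this returns poses only.
    return [map_.nodes[n].pose_6dof for n in node_ids]
```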
To empower Astra-Global with robust localization abilities, the team employed a meticulous training methodology. Using Qwen2.5-VL as the backbone, they combined Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO). SFT involved diverse datasets for various tasks, including coarse and fine localization, co-visibility detection, and motion trend estimation. In the GRPO phase, a rule-based reward function (including format, landmark extraction, map matching, and extra landmark rewards) was used to train for visual-language localization. Experiments showed GRPO significantly improved Astra-Global’s zero-shot generalization, achieving 99.9% localization accuracy in unseen home environments, surpassing SFT-only methods.
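The paper names four reward components (format, landmark extraction, map matching, extra landmarks) but this summary does not give their weights. The sketch below wires them together with made-up weights and treats the extra-landmark term as a penalty, which is an assumption.

```python
# Sketch of a rule-based GRPO reward over the four named components.
# Weights, scoring rules, and the sign of the extra-landmark term are assumed.
def localization_reward(response: str, parsed_landmarks, gt_landmarks, matched_node):
    reward = 0.0
    reward += 0.1 if response.strip().startswith("<answer>") else 0.0  # format
    hits = len(set(parsed_landmarks) & set(gt_landmarks))
    reward += 0.4 * hits / max(len(gt_landmarks), 1)       # landmark extraction
    reward += 0.4 if matched_node is not None else 0.0     # map matching
    extras = len(set(parsed_landmarks) - set(gt_landmarks))
    reward -= 0.1 * extras                                 # extra-landmark term
    return reward
```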
Astra-Local: The Intelligent Assistant for Local Planning
Astra-Local acts as the intelligent assistant for Astra’s high-frequency tasks, a multi-task network capable of efficiently generating local paths and accurately estimating odometry from sensor data. Its architecture comprises three core components: a 4D spatio-temporal encoder, a planning head, and an odometry head.
The 4D spatio-temporal encoder replaces traditional mobile stack perception and prediction modules. It begins with a 3D spatial encoder that processes N omnidirectional images through a Vision Transformer (ViT) and Lift-Splat-Shoot to convert 2D image features into 3D voxel features. This 3D encoder is trained using self-supervised learning via 3D volumetric differentiable neural rendering. The 4D spatio-temporal encoder then builds upon the 3D encoder, taking past voxel features and future timestamps as input to predict future voxel features through ResNet and DiT modules, providing current and future environmental representations for planning and odometry.
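A PyTorch skeleton of that staging (2D features, lifted to 3D voxels, then future-voxel prediction) follows. The module internals are stand-ins: a strided convolution for the ViT, mean-pooling for Lift-Splat-Shoot, and a single Conv3d for the ResNet/DiT predictor. Only the data flow mirrors the description.

```python
# Skeleton of the 4D encoder's data flow; internals are placeholders.
import torch.nn as nn

class SpatialEncoder3D(nn.Module):
    """Per-camera 2D features lifted into a voxel grid (ViT/LSS stand-ins)."""
    def __init__(self, feat_dim=64, grid=(16, 64, 64)):
        super().__init__()
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=16, stride=16)
        self.grid = grid

    def forward(self, images):                      # images: (B, N_cams, 3, H, W)
        b, n = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1))             # 2D features
        pooled = feats.mean(dim=(2, 3)).view(b, n, -1).mean(1)  # pool to (B, C)
        return pooled[:, :, None, None, None].expand(-1, -1, *self.grid)

class TemporalPredictor(nn.Module):
    """Predicts future voxel features from past ones (ResNet/DiT stand-in)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1)

    def forward(self, past_voxels, future_timestamp=None):
        # A real implementation would condition on future_timestamp (DiT-style).
        return self.net(past_voxels)
```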
The planning head, based on pre-trained 4D features, robot speed, and task information, generates executable trajectories using Transformer-based flow matching. To prevent collisions, the planning head incorporates a masked ESDF loss (Euclidean Signed Distance Field). This loss calculates the ESDF of a 3D occupancy map and applies a 2D ground truth trajectory mask, significantly reducing collision rates. Experiments demonstrate its superior performance in collision rate and overall score on out-of-distribution (OOD) datasets compared to other methods.
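A minimal sketch of such a masked ESDF loss, assuming a precomputed ESDF grid (distance to the nearest obstacle per cell) and a 0/1 ground-truth trajectory mask; the margin, lookup scheme, and how the mask gates the penalty are all assumptions.

```python
# Sketch of a masked ESDF collision loss over planned waypoints.
import torch

def masked_esdf_loss(traj_xy, esdf, mask, margin=0.3, cell_size=0.1):
    """Penalize waypoints whose obstacle distance falls below the margin.

    traj_xy: (T, 2) planned waypoints in meters.
    esdf, mask: (H, W) float grids; mask is 0/1 (gating assumed).
    """
    idx = (traj_xy / cell_size).long()               # meters -> grid cells
    idx[:, 0] = idx[:, 0].clamp(0, esdf.shape[0] - 1)
    idx[:, 1] = idx[:, 1].clamp(0, esdf.shape[1] - 1)
    dist = esdf[idx[:, 0], idx[:, 1]]                # distance at waypoints
    keep = mask[idx[:, 0], idx[:, 1]]                # GT-trajectory gating
    return (torch.relu(margin - dist) * keep).mean() # hinge below the margin
```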
The odometry head predicts the robot’s relative pose using current and past 4D features and additional sensor data (e.g., IMU, wheel data). It trains a Transformer model to fuse information from different sensors: each sensor modality is processed by a specific tokenizer, then combined with modality embeddings and temporal positional embeddings.
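A compact sketch of that fusion pattern: each modality gets its own tokenizer and a learned modality embedding, and the fused sequence is regressed to a relative pose. All dimensions are assumptions, and temporal positional embeddings are omitted for brevity.

```python
# Sketch of transformer-based sensor fusion for odometry; dims are assumed.
import torch
import torch.nn as nn

class OdometryHead(nn.Module):
    def __init__(self, d=128, voxel_dim=64, imu_dim=6, wheel_dim=2):
        super().__init__()
        self.tok_voxel = nn.Linear(voxel_dim, d)   # per-modality tokenizers
        self.tok_imu = nn.Linear(imu_dim, d)
        self.tok_wheel = nn.Linear(wheel_dim, d)
        self.mod_emb = nn.Embedding(3, d)          # learned modality embeddings
        enc = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.pose = nn.Linear(d, 6)                # relative 6-DoF pose

    def forward(self, voxel_feat, imu, wheel):     # each: (B, T_i, dim_i)
        # Temporal positional embeddings omitted for brevity.
        toks = torch.cat([
            self.tok_voxel(voxel_feat) + self.mod_emb.weight[0],
            self.tok_imu(imu) + self.mod_emb.weight[1],
            self.tok_wheel(wheel) + self.mod_emb.weight[2],
        ], dim=1)
        return self.pose(self.encoder(toks).mean(dim=1))
```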
Content truncated automatically.
Source: syncedreview.com
MAROKO133 Note
This article is an automated summary drawn from several trusted sources. We pick trending topics so you stay up to date without missing anything.
Next update in 30 minutes: a random topic awaits!