MAROKO133 Update ai: AI has redefined the talent game. Here’s how leaders are responding.

Presented by Indeed


As AI continues to reshape how we work, organizations are rethinking what skills they need, how they hire, and how they retain talent. According to Indeed’s 2025 Tech Talent report, tech job postings are still down more than 30% from pre-pandemic highs, yet demand for AI expertise has never been greater. New roles are emerging almost overnight, from prompt engineers to AI operations managers, and leaders are under growing pressure to close skill gaps while supporting their teams through change.

Shibani Ahuja, SVP of enterprise IT strategy at Salesforce; Matt Candy, global managing partner of generative AI strategy and transformation at IBM; and Jessica Hardeman, global head of attraction and engagement at Indeed, came together for a recent roundtable conversation about the future of tech talent strategy, from hiring and reskilling to how AI is reshaping the workforce.

Strategies for sourcing talent

To find the right candidates, organizations need to be certain their communication is clear from the get-go, and that means beginning with a well-thought-out job description, Hardeman said.

"How clearly are you outlining the skills that are actually required for the role, versus using very high-level or ambiguous language," she said. "Something that I also highly recommend is skill-cluster sourcing. We use that to identify candidates that might be adjacent to these harder-to-find niche skills. That’s something we can upskill people into. For example, skills that are in distributed computing or machine learning frameworks also share other high-value capabilities. Using these clusters can help recruiters identify candidates that may not have that exact skill set you’re looking for, but can quickly upskill into it."

Recruiters themselves should also be upskilled so they can spot that potential in candidates. And once candidates are hired, companies have to be intentional about how they grow that talent from the day new hires step in the door.

"What that means in the near term is focusing on the mentorship, embedding that AI fluency into their onboarding experience, into their growth, into their development," she said. "That means offering upskilling that teaches not just the tools they’ll need, but how to think with those tools and alongside those. The new early career sweet spot is where technical skills meet our human strengths. Curiosity. Communication. Data judgment. Workflow design. Those are the things that AI cannot replicate or replace. We have to create mentorship and sponsorship opportunities. Well-being and culture are critical components to ensuring that we’re creating good places for that early-in-career talent to land."

How work will evolve alongside AI

As AI becomes embedded into daily technical work, organizations are rethinking what it means to be a developer, designer, or engineer. Instead of automating roles end to end, companies are increasingly building AI agents that act as teammates, supporting workers across the entire software development lifecycle.

Candy explained that IBM is already seeing this shift in action through its Consulting Advantage platform, which serves as a unified AI experience layer for consultants and technical teams.

“This is a platform that every one of our consultants works with,” he said. “It’s supported by every piece of AI technology and model out there. It’s the place where our consultants can access thousands of agents that help them in each job role and activity they’re doing.”

These aren’t just prebuilt tools — teams can create and publish their own agents into an internal marketplace. That has sparked a systematic effort to map every task across traditional tech roles and build agents to enhance them.

“If I think about your traditional designer, DevOps engineer, AI Ops engineer — what are all the different agents that are supporting them in those activities?” Candy said. “It’s far more than just coding. Tools like Cursor, Windsurf, and GitHub Copilot accelerate coding, but that’s only one part of delivering software end to end. We’re building agents to support people at every stage of that journey.”

Candy said this shift leads toward a workplace where AI becomes a collaborative partner rather than a replacement, something that enables tech workers to spend more time on creative, strategic, and human-centered tasks.

"This future where employees have agents working alongside them, taking care of some of these repetitive activities, focusing on higher-value strategic work where human skills are innately important, I think becomes right at the heart of that,” he explained. “You have to unleash the organization to be able to think and rethink in that way."

A lot of that depends on the mindset of company leaders, Ahuja said.

"I can see the difference between leaders that look at AI as cost-cutting, reduction — it’s a bottom-line activity,” she said. “And then there are organizations that are starting to shift their mindset to say, no, the goal is not about replacing people. It’s about reimagining the work to make us humans more human, ironically. For some leaders that’s the story their PR teams have told them to say. But for those that actually believe that AI is about helping us become more human, it’s interesting how they’re bringing that to life and bridging this gap between humanity and digital labor."

Shifting the culture toward AI

The companies most successful at navigating the obstacles of AI implementation and culture change make employees their first priority, Ahuja added. They prioritize use cases that solve the tedious problems burdening their teams, demonstrating how AI will help, rather than asking how many jobs automation can replace.

"They’re thinking of it as preserving human accountability, so in high-stakes moments, people will still make that final call," she said. "Looking at where AI is going to excel at scale and speed with pattern recognition, leaving that space for humans to bring their judgement, their ethics, and their emotional intelligence. It seems like a very subtle shift, but it’s pretty big in terms of where it starts at the beginning of an organization and how it trickles down."

It's also important to build a level of comfort in using AI in employees’ day-to-day work. Salesforce created a Slack channel called Bite-Sized AI that encourages every colleague, including company leaders, to share where they're using AI, why, and what hacks they've found.

"That’s creating a safe space," Ahuja explained. "It’s creating that psychological safety — that this isn’t just a buzzword. We’re trying to encourage it through behavior."

"This is all about how you ignite, especially in big enterprises, the kind of passion and fire inside everyone’s belly," Candy added. "Storytelling, showing examples of what great looks like. The expression is 'demos, not memos'. Stop writing PowerPoint slides explaining what we're going to do and actually getting into the tools to show it in real life.”

AI makes continuous learning non-negotiable, Hardeman added. Training employees to understand the AI tools they're provided goes a long way toward building that AI culture.

"We view upskilling as a retention lever and a performance driver," she said. "It creates that confidence, it reduces the fear around AI adoption. It helps people see a future for themselves as the technology evolves. AI didn’t just raise the bar on skills. It raised the bar on how we’re trying to support…

Content automatically truncated.

🔗 Source: venturebeat.com


📌 MAROKO133 Hot ai: ByteDance Introduces Astra: A Dual-Model Architecture for Autonomous Robot Navigation

The increasing integration of robots across various sectors, from industrial manufacturing to daily life, highlights a growing need for advanced navigation systems. However, contemporary robot navigation systems face significant challenges in diverse and complex indoor environments, exposing the limitations of traditional approaches. Addressing the fundamental questions of “Where am I?”, “Where am I going?”, and “How do I get there?”, ByteDance has developed Astra, an innovative dual-model architecture designed to overcome these traditional navigation bottlenecks and enable general-purpose mobile robots.

Traditional navigation systems typically consist of multiple, smaller, and often rule-based modules to handle the core challenges of target localization, self-localization, and path planning. Target localization involves understanding natural language or image cues to pinpoint a destination on a map. Self-localization requires a robot to determine its precise position within a map, especially challenging in repetitive environments like warehouses where traditional methods often rely on artificial landmarks (e.g., QR codes). Path planning further divides into global planning for rough route generation and local planning for real-time obstacle avoidance and reaching intermediate waypoints.
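The modular decomposition described above can be sketched as a single control tick. The four callables here are hypothetical stand-ins for the rule-based modules, not any real system's interfaces:

```python
def navigate_step(instruction, observation,
                  target_localizer, self_localizer,
                  global_planner, local_planner):
    """One tick of the classic modular pipeline: answer 'Where am I
    going?', 'Where am I?', then 'How do I get there?'."""
    goal = target_localizer(instruction)      # target localization
    pose = self_localizer(observation)        # self-localization
    route = global_planner(pose, goal)        # global: rough route
    return local_planner(route, observation)  # local: next safe waypoint

# Toy usage with stub modules:
cmd = navigate_step(
    "go to the dock", None,
    target_localizer=lambda text: (5.0, 5.0),
    self_localizer=lambda obs: (0.0, 0.0),
    global_planner=lambda start, goal: [start, goal],
    local_planner=lambda route, obs: route[1])
```

Each stub is small and independently replaceable, which is precisely the property that makes such stacks brittle in diverse environments: every module carries its own failure modes and hand-tuned rules.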

While foundation models have shown promise in integrating smaller models to tackle broader tasks, the optimal number of models and their effective integration for comprehensive navigation remained an open question.

ByteDance’s Astra, detailed in their paper “Astra: Toward General-Purpose Mobile Robots via Hierarchical Multimodal Learning” (website: https://astra-mobility.github.io/), addresses these limitations. Following the System 1/System 2 paradigm, Astra features two primary sub-models: Astra-Global and Astra-Local. Astra-Global handles low-frequency tasks like target and self-localization, while Astra-Local manages high-frequency tasks such as local path planning and odometry estimation. This architecture promises to revolutionize how robots navigate complex indoor spaces.

Astra-Global: The Intelligent Brain for Global Localization

Astra-Global serves as the intelligent core of the Astra architecture, responsible for critical low-frequency tasks: self-localization and target localization. It functions as a Multimodal Large Language Model (MLLM), adept at processing both visual and linguistic inputs to achieve precise global positioning within a map. Its strength lies in utilizing a hybrid topological-semantic graph as contextual input, allowing the model to accurately locate positions based on query images or text prompts.

The construction of this robust localization system begins with offline mapping. The research team developed an offline method to build a hybrid topological-semantic graph G=(V,E,L):

  • V (Nodes): Keyframes, obtained by temporal downsampling of input video and SfM-estimated 6-Degrees-of-Freedom (DoF) camera poses, act as nodes encoding camera poses and landmark references.
  • E (Edges): Undirected edges establish connectivity based on relative node poses, crucial for global path planning.
  • L (Landmarks): Semantic landmark information is extracted by Astra-Global from visual data at each node, enriching the map’s semantic understanding. These landmarks store semantic attributes and are connected to multiple nodes via co-visibility relationships.
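As a rough illustration, the G=(V,E,L) structure might look like the following sketch. The class names and fields are assumptions for illustration, not the paper's actual data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Landmark:
    """Semantic landmark: attributes plus the nodes that co-observe it."""
    name: str
    attributes: dict
    covisible_nodes: set = field(default_factory=set)

@dataclass
class Node:
    """Keyframe node: a 6-DoF camera pose and referenced landmark names."""
    node_id: int
    pose_6dof: tuple          # (x, y, z, roll, pitch, yaw)
    landmark_refs: list

class TopoSemanticMap:
    """Minimal sketch of the hybrid topological-semantic graph G=(V,E,L)."""
    def __init__(self):
        self.nodes = {}       # V: node_id -> Node
        self.edges = set()    # E: undirected, stored as sorted id pairs
        self.landmarks = {}   # L: name -> Landmark

    def add_node(self, node):
        self.nodes[node.node_id] = node
        for name in node.landmark_refs:   # maintain co-visibility links
            if name in self.landmarks:
                self.landmarks[name].covisible_nodes.add(node.node_id)

    def connect(self, a, b):
        self.edges.add(tuple(sorted((a, b))))

    def nodes_for_landmark(self, name):
        """Landmark-to-node association used for target localization."""
        return sorted(self.landmarks[name].covisible_nodes)
```

The co-visibility bookkeeping in `add_node` is what later lets a language query resolve a landmark to concrete map nodes and their poses.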

In practical localization, Astra-Global’s self-localization and target localization capabilities leverage a coarse-to-fine two-stage process for visual-language localization. The coarse stage analyzes input images and localization prompts, detects landmarks, establishes correspondence with a pre-built landmark map, and filters candidates based on visual consistency. The fine stage then uses the query image and coarse output to sample reference map nodes from the offline map, comparing their visual and positional information to directly output the predicted pose.
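A minimal sketch of that coarse-to-fine flow, with `detect_landmarks` and `score_node` as hypothetical stand-ins for the MLLM calls:

```python
def coarse_to_fine_localize(query_image, prompt, landmark_map,
                            detect_landmarks, score_node):
    """Sketch of Astra-Global's two-stage visual-language localization.

    landmark_map: dict landmark_name -> list of (node_id, pose_6dof)
    detect_landmarks / score_node: stand-ins for calls to the MLLM.
    """
    # Coarse stage: detect landmarks in the query and keep only those
    # that exist in the pre-built landmark map (consistency filter).
    detected = detect_landmarks(query_image, prompt)
    candidate_nodes = []
    for name in detected:
        candidate_nodes.extend(landmark_map.get(name, []))
    if not candidate_nodes:
        return None  # localization failed

    # Fine stage: compare the query against each sampled reference node
    # and output the pose of the best-scoring one.
    best = max(candidate_nodes, key=lambda node: score_node(query_image, node))
    return best[1]  # predicted 6-DoF pose
```

In the real system both stages are handled by the fine-tuned MLLM itself; the helper functions here only make the two-stage data flow explicit.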

For language-based target localization, the model interprets natural language instructions, identifies relevant landmarks using their functional descriptions within the map, and then leverages landmark-to-node association mechanisms to locate relevant nodes, retrieving target images and 6-DoF poses.

To empower Astra-Global with robust localization abilities, the team employed a meticulous training methodology. Using Qwen2.5-VL as the backbone, they combined Supervised Fine-Tuning (SFT) with Group Relative Policy Optimization (GRPO). SFT involved diverse datasets for various tasks, including coarse and fine localization, co-visibility detection, and motion trend estimation. In the GRPO phase, a rule-based reward function (including format, landmark extraction, map matching, and extra landmark rewards) was used to train for visual-language localization. Experiments showed GRPO significantly improved Astra-Global’s zero-shot generalization, achieving 99.9% localization accuracy in unseen home environments, surpassing SFT-only methods.
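A rule-based reward of this kind could be sketched roughly as follows. Only the four component names come from the paper; the weights and per-component scoring rules here are illustrative assumptions:

```python
def localization_reward(response, gold):
    """Illustrative GRPO reward combining format, landmark-extraction,
    map-matching, and extra-landmark components (weights are assumed)."""
    r = 0.0
    # Format reward: the response must carry the expected fields.
    if "landmarks" in response and "node_id" in response:
        r += 0.1
    else:
        return r  # malformed output earns no further reward

    pred = set(response["landmarks"])
    true = set(gold["landmarks"])

    # Landmark-extraction reward: fraction of gold landmarks recovered.
    r += 0.4 * (len(pred & true) / max(len(true), 1))

    # Map-matching reward: predicted node agrees with the gold map node.
    if response["node_id"] == gold["node_id"]:
        r += 0.4

    # Extra-landmark term: discourage hallucinated landmarks.
    if not (pred - true):
        r += 0.1
    return r
```

Under GRPO, several sampled responses per query would be scored this way and their group-normalized rewards used as the policy-gradient signal.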

Astra-Local: The Intelligent Assistant for Local Planning

Astra-Local acts as the intelligent assistant for Astra’s high-frequency tasks, a multi-task network capable of efficiently generating local paths and accurately estimating odometry from sensor data. Its architecture comprises three core components: a 4D spatio-temporal encoder, a planning head, and an odometry head.

The 4D spatio-temporal encoder replaces traditional mobile stack perception and prediction modules. It begins with a 3D spatial encoder that processes N omnidirectional images through a Vision Transformer (ViT) and Lift-Splat-Shoot to convert 2D image features into 3D voxel features. This 3D encoder is trained using self-supervised learning via 3D volumetric differentiable neural rendering. The 4D spatio-temporal encoder then builds upon the 3D encoder, taking past voxel features and future timestamps as input to predict future voxel features through ResNet and DiT modules, providing current and future environmental representations for planning and odometry.

The planning head, based on pre-trained 4D features, robot speed, and task information, generates executable trajectories using Transformer-based flow matching. To prevent collisions, it incorporates a masked ESDF (Euclidean Signed Distance Field) loss: it computes the ESDF of a 3D occupancy map and applies a 2D ground-truth trajectory mask, significantly reducing collision rates. Experiments demonstrate superior collision rate and overall score on out-of-distribution (OOD) datasets compared to other methods.
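A toy version of a masked ESDF collision term, assuming a 2D occupancy grid and a trajectory given as grid cells. The brute-force distance transform and hinge form are illustrative, not the paper's implementation:

```python
import numpy as np

def esdf_2d(occupancy):
    """Brute-force Euclidean distance to the nearest occupied cell.
    (A real stack would use a fast distance-transform algorithm.)"""
    obs = np.argwhere(occupancy > 0)                    # obstacle cells
    h, w = occupancy.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys, xs], axis=-1).reshape(-1, 1, 2)
    d = np.sqrt(((pts - obs.reshape(1, -1, 2)) ** 2).sum(-1)).min(1)
    return d.reshape(h, w)

def masked_esdf_loss(trajectory, occupancy, margin=1.0):
    """Penalize trajectory cells whose clearance from the nearest
    obstacle falls below `margin`; the trajectory cells themselves act
    as the 2D mask. Margin and hinge form are assumed for illustration."""
    dist = esdf_2d(occupancy)
    loss = 0.0
    for (y, x) in trajectory:                  # masked cells only
        loss += max(0.0, margin - dist[y, x])  # hinge on clearance
    return loss / len(trajectory)
```

Points well clear of obstacles contribute zero loss, so the gradient pressure falls entirely on the trajectory segments that actually risk collision.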

The odometry head predicts the robot’s relative pose using current and past 4D features and additional sensor data (e.g., IMU, wheel data). It trains a Transformer model to fuse information from different sensors. Each sensor modality is processed by a specific tokenizer, combined with modality embeddings and temporal positional embeddi…

Content automatically truncated.

🔗 Source: syncedreview.com


🤖 MAROKO133 Note

This article is an automated summary drawn from several trusted sources, covering trending topics to keep you up to date.


Author: timuna