MAROKO133 Hot ai: AWS launches Kiro powers with Stripe, Figma, and Datadog integrations

πŸ“Œ MAROKO133 Update ai: AWS launches Kiro powers with Stripe, Figma, and Datadog integrations

Amazon Web Services (AWS) has introduced Kiro powers, a system that allows software developers to give their AI coding assistants instant, specialized expertise in specific tools and workflows β€” addressing what the company calls a fundamental bottleneck in how AI agents operate today.

AWS announced Kiro powers at its annual re:Invent conference in Las Vegas. The capability marks a departure from how most AI coding tools work today. Typically, these tools load every possible capability into memory upfront β€” a process that burns through computational resources and can overwhelm the AI with irrelevant information. Kiro powers takes the opposite approach, activating specialized knowledge only at the moment a developer actually needs it.

"Our goal is to give the agent specialized context so it can reach the right outcome faster β€” and in a way that also reduces cost," Deepak Singh, VP of developer agents and experiences at Amazon, told VentureBeat in an exclusive interview.

The launch includes partnerships with nine technology providers: Datadog, Dynatrace, Figma, Neon, Netlify, Postman, Stripe, Supabase, and AWS's own services. Developers can also create and share their own powers with the community.

Why AI coding assistants choke when developers connect too many tools

Kiro powers comes amidst growing tension in the AI development tool market.

Modern AI coding assistants rely on Model Context Protocol (MCP) to connect with external tools and services. When a developer wants their AI assistant to work with Stripe for payments, Figma for design and Supabase for databases, they connect MCP servers for each service.

The problem: Each connection loads dozens of tool definitions into the AI's working memory before it writes a single line of code. According to AWS documentation, connecting just five MCP servers can consume more than 50,000 tokens β€” roughly 40% of an AI model's context window β€” before the developer even types their first request.
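As a rough sanity check on those numbers: if 50,000 tokens represent about 40% of a context window, the implied window is roughly 50,000 / 0.40 = 125,000 tokens, which is in line with the ~128K-token windows common among current frontier models.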

Developers have grown increasingly vocal about this issue. Many complain that they don't want to burn through their token allocations just to have an AI agent figure out which tools are relevant to a specific task. They want to get to their workflow instantly β€” not watch an overloaded agent struggle to sort through irrelevant context.

This phenomenon, which some in the industry call "context rot," leads to slower responses, lower-quality outputs and significantly higher costs β€” since AI services typically charge by the token.

Inside the technology that loads AI expertise on demand

Kiro powers addresses this by packaging three components into a single, dynamically loaded bundle.

The first is a steering file, POWER.md, which functions as an onboarding manual. It tells the AI agent what tools are available and, crucially, when to use them. The second component is the MCP server configuration itself β€” the actual connection to external services. The third includes optional hooks and automation that trigger specific actions.

When a developer mentions "payment" or "checkout" in their conversation with Kiro, the system automatically activates the Stripe power, loading its tools and best practices into context. When the developer shifts to database work, Supabase activates while Stripe deactivates. The baseline context usage when no powers are active approaches zero.
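AWS has not published the mechanism's internals in this announcement, but the on-demand pattern is easy to picture. The following Python sketch is purely illustrative: the `Power` structure, registry, trigger keywords, and file paths are hypothetical stand-ins, not Kiro's actual implementation.

```python
# Minimal illustrative sketch of on-demand "power" activation.
# All names, paths, and keywords here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Power:
    name: str
    steering_file: str            # e.g. path to a POWER.md onboarding manual
    mcp_config: dict              # connection details for the MCP server
    triggers: set[str]            # keywords that activate this power
    hooks: list[str] = field(default_factory=list)  # optional automations

REGISTRY = [
    Power("stripe", "powers/stripe/POWER.md",
          {"server": "stripe-mcp"}, {"payment", "checkout", "invoice"}),
    Power("supabase", "powers/supabase/POWER.md",
          {"server": "supabase-mcp"}, {"database", "table", "postgres"}),
]

def active_powers(user_message: str) -> list[Power]:
    """Load only the powers whose trigger keywords appear in the request,
    so baseline context usage stays near zero when none match."""
    words = set(user_message.lower().split())
    return [p for p in REGISTRY if p.triggers & words]

# Example: only the Stripe power is loaded into context for this request.
print([p.name for p in active_powers("add a checkout flow to the app")])
```

The point of the pattern is that the expensive part, the steering file and MCP tool definitions, only enters the model's context when one of its triggers actually appears in the conversation.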

"You click a button and it automatically loads," Singh said. "Once a power has been created, developers just select 'open in Kiro' and it launches the IDE with everything ready to go."

How AWS is bringing elite developer techniques to the masses

Singh framed Kiro powers as a democratization of advanced development practices. Before this capability, only the most sophisticated developers knew how to properly configure their AI agents with specialized context β€” writing custom steering files, crafting precise prompts and manually managing which tools were active at any given time.

"We've found that our developers were adding in capabilities to make their agents more specialized," Singh said. "They wanted to give the agent some special powers for a specific problem. For example, they wanted … the agent to become an expert at backend-as-a-service."

This observation led to a key insight: If Supabase or Stripe could build the optimal context configuration once, every developer using those services could benefit.

"Kiro powers formalizes things that only the most advanced people were doing, and allows anyone to get those kinds of skills," Singh said.

Why dynamic loading beats fine-tuning for most AI coding use cases

The announcement also positions Kiro powers as a more economical alternative to fine-tuning, or the process of training an AI model on specialized data to improve its performance in specific domains.

"It's much cheaper" compared to fine-tuning, Singh. "Fine-tuning is very expensive, and you can't fine-tune most frontier models."

This is a significant point. The most capable AI models from Anthropic, OpenAI and Google are typically "closed source," meaning developers cannot modify their underlying training. They can only influence the models' behavior through the prompts and context they provide.

"Most people are already using powerful models like Sonnet 4.5 or Opus 4.5," Singh said. "Those models need to be pointed in the right direction."

The dynamic loading mechanism also reduces ongoing costs. Because powers only activate when relevant, developers aren't paying for token usage on tools they're not currently using.
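To put that in illustrative terms (the pricing here is an assumption for the sake of arithmetic, not an AWS figure): at, say, $3 per million input tokens, an extra 50,000 tokens of tool definitions attached to every request costs about 50,000 Γ— $3 / 1,000,000 β‰ˆ $0.15 per request before any useful work happens. Loading powers only when their trigger keywords come up avoids most of that recurring overhead.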

Where Kiro powers fits into Amazon's bigger bet on autonomous AI agents

Kiro powers arrives as part of a broader push by AWS into what the company calls "agentic AI" β€” AI systems that can operate autonomously over extended periods.

At re:Invent, AWS also announced three "frontier agents" designed to work for hours or days without human intervention: Kiro autonomous agent for software development, AWS security agent and AWS DevOps agent. These represent a different approach from Kiro powers β€” tackling large, ambiguous problems rather than providing specialized expertise for specific tasks.

The two approaches are complementary. Frontier agents handle complex, multi-day projects that require autonomous decision-making across multiple codebases. Kiro powers, by contrast, gives developers precise, efficient tools for everyday development tasks where speed and token efficiency matter most.

The company is betting that developers need both ends of this spectrum to be productive.

What Kiro powers reveals about the future of AI-assisted software development

The launch reflects a maturing market for AI development tools. GitHub Copilot, which Microsoft launched in 2021, introduced millions of developers to AI-assisted coding. Since…

Content automatically truncated.

πŸ”— Source: venturebeat.com


πŸ“Œ MAROKO133 Exclusive ai: Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models

Video world models, which predict future frames conditioned on actions, hold immense promise for artificial intelligence, enabling agents to plan and reason in dynamic environments. Recent advancements, particularly with video diffusion models, have shown impressive capabilities in generating realistic future sequences. However, a significant bottleneck remains: maintaining long-term memory. Current models struggle to remember events and states from far in the past due to the high computational cost associated with processing extended sequences using traditional attention layers. This limits their ability to perform complex tasks requiring sustained understanding of a scene.

A new paper, “Long-Context State-Space Video World Models” by researchers from Stanford University, Princeton University, and Adobe Research, proposes an innovative solution to this challenge. They introduce a novel architecture that leverages State-Space Models (SSMs) to extend temporal memory without sacrificing computational efficiency.

The core problem lies in the quadratic computational complexity of attention mechanisms with respect to sequence length. As the video context grows, the resources required for attention layers explode, making long-term memory impractical for real-world applications. This means that after a certain number of frames, the model effectively “forgets” earlier events, hindering its performance on tasks that demand long-range coherence or reasoning over extended periods.
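To make the scaling concrete (these are standard complexity facts rather than figures from the paper): with a context of $T$ tokens, self-attention compares every token with every other token, so compute and memory grow on the order of $T^2$. A state-space layer instead carries a fixed-size state forward,

$$h_t = A\,h_{t-1} + B\,x_t, \qquad y_t = C\,h_t,$$

so the per-step cost is constant and the total cost grows only linearly in $T$.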

The authors’ key insight is to leverage the inherent strengths of State-Space Models (SSMs) for causal sequence modeling. Unlike previous attempts that retrofitted SSMs for non-causal vision tasks, this work fully exploits their advantages in processing sequences efficiently.

The proposed Long-Context State-Space Video World Model (LSSVWM) incorporates several crucial design choices:

  1. Block-wise SSM Scanning Scheme: This is central to their design. Instead of processing the entire video sequence with a single SSM scan, they employ a block-wise scheme. This strategically trades off some spatial consistency (within a block) for significantly extended temporal memory. By breaking down the long sequence into manageable blocks, they can maintain a compressed “state” that carries information across blocks, effectively extending the model’s memory horizon (a minimal sketch of this block-wise scan follows the list).
  2. Dense Local Attention: To compensate for the potential loss of spatial coherence introduced by the block-wise SSM scanning, the model incorporates dense local attention. This ensures that consecutive frames within and across blocks maintain strong relationships, preserving the fine-grained details and consistency necessary for realistic video generation. This dual approach of global (SSM) and local (attention) processing allows them to achieve both long-term memory and local fidelity.
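For intuition, here is a minimal numpy sketch of the block-wise scanning idea from point 1: a toy diagonal linear SSM whose compressed state `h` is the only thing carried across block boundaries. The dimensions, block length, and parameterization are illustrative assumptions, and the dense local attention of point 2 is omitted for brevity.

```python
# Toy block-wise SSM scan: only the fixed-size state h crosses block
# boundaries, so memory stays constant regardless of sequence length.
import numpy as np

def ssm_scan_block(x_block, h0, A, B, C):
    """Scan one block of frames, returning outputs and the final state."""
    h = h0
    outputs = []
    for x_t in x_block:          # x_block: (block_len, d_in)
        h = A * h + B @ x_t      # diagonal transition + input projection
        outputs.append(C @ h)    # read out features for this frame
    return np.stack(outputs), h

def blockwise_scan(frames, block_len, A, B, C):
    """Split a long sequence into blocks and carry h across them."""
    h = np.zeros(A.shape[0])
    outs = []
    for start in range(0, len(frames), block_len):
        y, h = ssm_scan_block(frames[start:start + block_len], h, A, B, C)
        outs.append(y)
    return np.concatenate(outs)

# Toy usage: 64 frames, 16-dim features, 8-dim state, blocks of 8 frames.
rng = np.random.default_rng(0)
frames = rng.normal(size=(64, 16))
A = np.full(8, 0.9)                      # diagonal transition, as a vector
B = rng.normal(size=(8, 16)) * 0.1
C = rng.normal(size=(4, 8)) * 0.1
y = blockwise_scan(frames, block_len=8, A=A, B=B, C=C)
print(y.shape)                           # (64, 4)
```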

The paper also introduces two key training strategies to further improve long-context performance:

  • Diffusion Forcing: This technique encourages the model to generate frames conditioned on a prefix of the input, effectively forcing it to learn to maintain consistency over longer durations. By sometimes not sampling a prefix and keeping all tokens noised, the training becomes equivalent to diffusion forcing, which is highlighted as a special case of long-context training where the prefix length is zero. This pushes the model to generate coherent sequences even from minimal initial context.
  • Frame Local Attention: For faster training and sampling, the authors implemented a “frame local attention” mechanism. This utilizes FlexAttention to achieve significant speedups compared to a fully causal mask. By grouping frames into chunks (e.g., chunks of 5 with a frame window size of 10), frames within a chunk maintain bidirectionality while also attending to frames in the previous chunk. This allows for an effective receptive field while optimizing computational load (a rough sketch of such a mask follows the list).
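Below is a rough sketch of how such a frame-local mask could be expressed with PyTorch's FlexAttention (available from PyTorch 2.5). The chunking follows the description above: chunks of 5 frames, with each token also attending to the previous chunk for an effective 10-frame window. The `tokens_per_frame` value, tensor shapes, and device handling are assumptions, not details from the paper.

```python
# Illustrative frame-local attention mask with FlexAttention (assumed details).
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

device = "cuda" if torch.cuda.is_available() else "cpu"
tokens_per_frame = 256   # assumed; depends on the video tokenizer
chunk_size = 5           # frames per chunk, as described above

def frame_local_mask(b, h, q_idx, kv_idx):
    # Map token index -> frame index -> chunk index.
    q_chunk = (q_idx // tokens_per_frame) // chunk_size
    kv_chunk = (kv_idx // tokens_per_frame) // chunk_size
    # Bidirectional within a chunk, plus attention to the previous chunk.
    return (kv_chunk == q_chunk) | (kv_chunk == q_chunk - 1)

B, H, D = 1, 8, 64
T = 20 * tokens_per_frame                      # 20 frames' worth of tokens
block_mask = create_block_mask(frame_local_mask, B=B, H=H,
                               Q_LEN=T, KV_LEN=T, device=device)

q = torch.randn(B, H, T, D, device=device)
k = torch.randn(B, H, T, D, device=device)
v = torch.randn(B, H, T, D, device=device)
out = flex_attention(q, k, v, block_mask=block_mask)   # (B, H, T, D)
```

In practice this would be run compiled on a GPU; the small shapes here are only to show how the chunk-level masking rule is written.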

The researchers evaluated their LSSVWM on challenging datasets, including Memory Maze and Minecraft, which are specifically designed to test long-term memory capabilities through spatial retrieval and reasoning tasks.

The experiments demonstrate that their approach substantially surpasses baselines in preserving long-range memory. Qualitative results, as shown in supplementary figures (e.g., S1, S2, S3), illustrate that LSSVWM can generate more coherent and accurate sequences over extended periods compared to models relying solely on causal attention or even Mamba2 without frame local attention. For instance, on reasoning tasks for the maze dataset, their model maintains better consistency and accuracy over long horizons. Similarly, for retrieval tasks, LSSVWM shows improved ability to recall and utilize information from distant past frames. Crucially, these improvements are achieved while maintaining practical inference speeds, making the models suitable for interactive applications.

The paper “Long-Context State-Space Video World Models” is available on arXiv.

The post “Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models” first appeared on Synced.

πŸ”— Source: syncedreview.com


πŸ€– MAROKO133 Note

This article is an automatic summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

βœ… Next update in 30 minutes: a random theme awaits!

Author: timuna