MAROKO133 Exclusive ai: Arbor Energy sells 5 GW of modular turbines as data center power demand surges

📌 MAROKO133 Breaking ai: Arbor Energy sells 5 GW of modular turbines as data center power demand surges

Energy startup Arbor Energy has signed a major turbine supply agreement with GridMarket, selling up to 5 gigawatts of its modular Halcyon turbine systems in a deal driven by rising electricity demand from data centers and industrial users.

The agreement could represent roughly 200 turbine units, each designed to generate 25 megawatts of electricity. While the companies did not disclose the deal's financial terms, a source familiar with it told TechCrunch the total value is in the single-digit billions of dollars.

The deal highlights the growing urgency around electricity generation capacity, especially as AI data centers and industrial electrification continue to increase power demand globally.

Rocket engine technology

Arbor’s Halcyon turbines are based on rocket turbomachinery, adapting high-performance engine technology originally developed for spaceflight into power generation systems.

The first commercial turbines will be 3D printed, which the company believes will allow faster production compared to traditional turbine manufacturing.

The startup plans to connect its first turbine to the grid in 2028 and ramp production through 2030, eventually aiming to deliver more than 100 turbines annually.

Long-term, Arbor hopes to produce enough systems to add 10 gigawatts of new capacity every year.

Arbor originally designed Halcyon turbines to run on biomass such as crop waste and wood scraps. The organic material would be converted into syngas and burned with pure oxygen, producing a stream of carbon dioxide that could be captured and stored underground.

Under that configuration, the turbines could generate carbon-negative power because the biomass would otherwise decay and release greenhouse gases.

Reduced carbon benefits

Since then, Arbor has modified the turbine system to run on natural gas as well, allowing wider deployment. In this configuration the turbines are no longer carbon negative, but the company says carbon capture can still significantly reduce emissions.

“We see a long-term path to less than 10 grams of CO2 per kilowatt-hour,” said Arbor CEO Brad Hartwig.

That figure is significantly lower than conventional natural gas plants, which typically emit around 400 grams of CO2 per kilowatt-hour without carbon capture.

The company says its turbines can capture carbon dioxide produced during combustion, which can then be stored underground, potentially reducing overall emissions compared to traditional gas power plants.

Data center boom

The surge in demand for electricity from data centers has been a major driver behind the deal. Traditional gas turbine manufacturers have struggled to scale production quickly due to supply chain constraints, particularly around turbine blades and specialized manufacturing processes.

“Those supply chains largely all get bottlenecked by blades and vanes for traditional turbines… If you were to get in line for a turbine today, you’d be waiting until 2032,” Hartwig said.

Arbor believes its use of machined and 3D-printed components could help accelerate deployment timelines and bring new power capacity online faster.

“People want power in the next few years and they want a lot of it,” Hartwig said.

The GridMarket agreement suggests that modular turbine systems could play an increasingly important role in meeting the rapidly growing power demands of AI infrastructure, industrial electrification, and data centers over the next decade.

🔗 Source: interestingengineering.com


📌 MAROKO133 Update ai: Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models

Video world models, which predict future frames conditioned on actions, hold immense promise for artificial intelligence, enabling agents to plan and reason in dynamic environments. Recent advancements, particularly with video diffusion models, have shown impressive capabilities in generating realistic future sequences. However, a significant bottleneck remains: maintaining long-term memory. Current models struggle to remember events and states from far in the past due to the high computational cost associated with processing extended sequences using traditional attention layers. This limits their ability to perform complex tasks requiring sustained understanding of a scene.

A new paper, “Long-Context State-Space Video World Models” by researchers from Stanford University, Princeton University, and Adobe Research, proposes an innovative solution to this challenge. They introduce a novel architecture that leverages State-Space Models (SSMs) to extend temporal memory without sacrificing computational efficiency.

The core problem lies in the quadratic computational complexity of attention mechanisms with respect to sequence length. As the video context grows, the resources required for attention layers explode, making long-term memory impractical for real-world applications. This means that after a certain number of frames, the model effectively “forgets” earlier events, hindering its performance on tasks that demand long-range coherence or reasoning over extended periods.

The authors’ key insight is to leverage the inherent strengths of State-Space Models (SSMs) for causal sequence modeling. Unlike previous attempts that retrofitted SSMs for non-causal vision tasks, this work fully exploits their advantages in processing sequences efficiently.
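
To make the efficiency contrast concrete, here is a minimal sketch of the linear-time recurrence that underlies SSM layers (a generic diagonal SSM in PyTorch; the paper's actual parameterization and fused kernels differ):

```python
import torch

def ssm_scan(x, A, B, C):
    """Causal SSM scan: h_t = A * h_{t-1} + B * x_t, y_t = <C, h_t>.

    x: (T, D) input sequence; A, B, C: (D, N) per-channel diagonal parameters.
    Each step touches only a fixed-size state h, so the cost is O(T) in
    sequence length, versus O(T^2) for full self-attention, and h acts as
    a compressed summary of the entire history.
    """
    T, D = x.shape
    h = torch.zeros_like(A)                   # (D, N) compressed state
    ys = []
    for t in range(T):
        h = A * h + B * x[t].unsqueeze(-1)    # state update (diagonal A)
        ys.append((C * h).sum(-1))            # readout y_t, shape (D,)
    return torch.stack(ys)                    # (T, D)
```

Production SSMs such as Mamba replace this Python loop with a parallel scan kernel, but the asymptotics are the same: linear time and a constant-size state, which is exactly what makes long contexts affordable.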

The proposed Long-Context State-Space Video World Model (LSSVWM) incorporates two crucial design choices:

  1. Block-wise SSM Scanning Scheme: This is central to their design. Instead of processing the entire video sequence with a single SSM scan, they employ a block-wise scheme that strategically trades some spatial consistency within a block for significantly extended temporal memory. By breaking the long sequence into manageable blocks, the model maintains a compressed “state” that carries information across block boundaries, effectively extending its memory horizon (see the sketch after this list).
  2. Dense Local Attention: To compensate for the potential loss of spatial coherence introduced by block-wise SSM scanning, the model incorporates dense local attention. This ensures that consecutive frames, within and across blocks, maintain strong relationships, preserving the fine-grained details and consistency necessary for realistic video generation. This dual approach of global (SSM) and local (attention) processing achieves both long-term memory and local fidelity.
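
A minimal sketch of the block-wise scanning idea, under stated assumptions (`ssm_step` is a hypothetical helper standing in for the paper's SSM layer): each block is scanned independently, and only the final compressed state crosses the block boundary, which is what extends the memory horizon at bounded cost.

```python
import torch

def blockwise_ssm(x, ssm_step, block_len, h0):
    """Scan a long sequence block by block, carrying only the SSM state
    across block boundaries.

    ssm_step(block, h) is assumed to return (y_block, h_next).
    Peak memory is bounded by block_len, while h threads long-range
    information through the whole sequence. A dense local attention layer
    (not shown) would then restore fine-grained consistency within and
    across neighboring frames.
    """
    outputs, h = [], h0
    for start in range(0, x.shape[0], block_len):
        y, h = ssm_step(x[start:start + block_len], h)
        outputs.append(y)
    return torch.cat(outputs)   # full-length output in O(T) total time
```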

The paper also introduces two key training strategies to further improve long-context performance:

  • Diffusion Forcing: This technique trains the model to generate frames conditioned on a prefix of the input, forcing it to learn to maintain consistency over longer durations. When no prefix is sampled and all tokens are kept noised, training reduces to plain diffusion forcing, which the paper highlights as the special case of long-context training with a prefix length of zero. This pushes the model to generate coherent sequences even from minimal initial context (see the first sketch below).
  • Frame Local Attention: For faster training and sampling, the authors implemented a “frame local attention” mechanism that uses FlexAttention to achieve significant speedups over a fully causal mask. Frames are grouped into chunks (e.g., chunks of 5 with a frame window size of 10): frames within a chunk attend to one another bidirectionally and also attend to frames in the previous chunk, giving an effective receptive field at a reduced computational cost (see the second sketch below).
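
A rough sketch of the prefix-conditioned noising described above (the schedule and helper names are illustrative, not the paper's): a randomly sized prefix of frames is kept clean while every remaining frame gets an independently sampled noise level, and a prefix length of zero recovers plain diffusion forcing.

```python
import torch

def noise_for_training(frames, num_levels=1000):
    """frames: (T, C, H, W) clean video latents.

    Returns noised frames plus per-frame noise levels; the model is then
    trained to denoise given these levels, learning to stay consistent
    with however much clean context it was handed.
    """
    T = frames.shape[0]
    prefix_len = torch.randint(0, T + 1, ()).item()   # 0 => diffusion forcing
    levels = torch.randint(1, num_levels, (T,))       # independent per frame
    levels[:prefix_len] = 0                           # clean conditioning prefix
    alpha = 1.0 - levels.float() / num_levels         # toy linear schedule
    alpha = alpha.view(T, 1, 1, 1)
    noisy = alpha.sqrt() * frames + (1.0 - alpha).sqrt() * torch.randn_like(frames)
    return noisy, levels
```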
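
And a sketch of the frame-local mask expressed with PyTorch's FlexAttention API, using the chunk-of-5 / window-of-10 numbers quoted above (the token layout, with a fixed number of tokens per frame concatenated in time order, is an assumption):

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask

TOKENS_PER_FRAME = 256   # assumed latent tokens per frame
CHUNK_FRAMES = 5         # frames per chunk, per the example above

def frame_local_mask(b, h, q_idx, kv_idx):
    # Frames attend bidirectionally within their own chunk and also to all
    # frames of the previous chunk, for an effective 10-frame window.
    q_chunk = q_idx // (TOKENS_PER_FRAME * CHUNK_FRAMES)
    kv_chunk = kv_idx // (TOKENS_PER_FRAME * CHUNK_FRAMES)
    return (q_chunk == kv_chunk) | (q_chunk == kv_chunk + 1)

# Build the sparse mask once per sequence length and reuse it:
num_frames = 40
seq_len = num_frames * TOKENS_PER_FRAME
block_mask = create_block_mask(frame_local_mask, B=None, H=None,
                               Q_LEN=seq_len, KV_LEN=seq_len)
# block_mask is then passed to flex_attention(q, k, v, block_mask=block_mask).
```

Because entire blocks of the mask are provably empty, FlexAttention can skip them outright, which is where the claimed speedup over a fully causal mask comes from.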

The researchers evaluated their LSSVWM on challenging datasets, including Memory Maze and Minecraft, which are specifically designed to test long-term memory capabilities through spatial retrieval and reasoning tasks.

The experiments demonstrate that their approach substantially surpasses baselines in preserving long-range memory. Qualitative results, as shown in supplementary figures (e.g., S1, S2, S3), illustrate that LSSVWM can generate more coherent and accurate sequences over extended periods compared to models relying solely on causal attention or even Mamba2 without frame local attention. For instance, on reasoning tasks for the maze dataset, their model maintains better consistency and accuracy over long horizons. Similarly, for retrieval tasks, LSSVWM shows improved ability to recall and utilize information from distant past frames. Crucially, these improvements are achieved while maintaining practical inference speeds, making the models suitable for interactive applications.

The paper “Long-Context State-Space Video World Models” is available on arXiv.

The post Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models first appeared on Synced.

🔗 Source: syncedreview.com


🤖 MAROKO133 Notes

This article is an automated summary of several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random theme awaits!

Author: timuna