📌 MAROKO133 Update ai: Adobe Research Unlocking Long-Term Memory in Video World Models
Video world models, which predict future frames conditioned on actions, hold immense promise for artificial intelligence, enabling agents to plan and reason in dynamic environments. Recent advancements, particularly with video diffusion models, have shown impressive capabilities in generating realistic future sequences. However, a significant bottleneck remains: maintaining long-term memory. Current models struggle to remember events and states from far in the past due to the high computational cost associated with processing extended sequences using traditional attention layers. This limits their ability to perform complex tasks requiring sustained understanding of a scene.
A new paper, “Long-Context State-Space Video World Models” by researchers from Stanford University, Princeton University, and Adobe Research, proposes an innovative solution to this challenge. They introduce a novel architecture that leverages State-Space Models (SSMs) to extend temporal memory without sacrificing computational efficiency.
The core problem lies in the quadratic computational complexity of attention mechanisms with respect to sequence length. As the video context grows, the resources required for attention layers explode, making long-term memory impractical for real-world applications. This means that after a certain number of frames, the model effectively “forgets” earlier events, hindering its performance on tasks that demand long-range coherence or reasoning over extended periods.
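For a rough sense of scale (the numbers below are hypothetical, not drawn from the paper), compare how the work per attention layer grows with context length versus a linear-time scan:

```python
# Back-of-the-envelope comparison (hypothetical sizes, not from the paper):
# attention considers every query-key pair, so cost grows quadratically with
# context length, while a recurrent SSM scan touches each token once.
tokens_per_frame = 256  # assumed tokenization; real values depend on the model

for num_frames in (16, 64, 256, 1024):
    seq_len = num_frames * tokens_per_frame
    attention_pairs = seq_len ** 2   # pairwise interactions per attention layer
    ssm_steps = seq_len              # one state update per token
    print(f"{num_frames:>5} frames | attention ~{attention_pairs:.1e} pairs | SSM ~{ssm_steps:.1e} steps")
```

Going from 16 to 1024 frames multiplies the attention term by 4096, while the scan term grows only 64-fold; that widening gap is exactly what the paper is designed to sidestep.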
The authors’ key insight is to leverage the inherent strengths of State-Space Models (SSMs) for causal sequence modeling. Unlike previous attempts that retrofitted SSMs for non-causal vision tasks, this work fully exploits their advantages in processing sequences efficiently.
The proposed Long-Context State-Space Video World Model (LSSVWM) incorporates several crucial design choices:
- Block-wise SSM Scanning Scheme: This is central to their design. Instead of processing the entire video sequence with a single SSM scan, they employ a block-wise scheme. This strategically trades off some spatial consistency (within a block) for significantly extended temporal memory. By breaking down the long sequence into manageable blocks, they can maintain a compressed “state” that carries information across blocks, effectively extending the model’s memory horizon (a minimal sketch of this state-carrying scan appears after this list).
- Dense Local Attention: To compensate for the potential loss of spatial coherence introduced by the block-wise SSM scanning, the model incorporates dense local attention. This ensures that consecutive frames within and across blocks maintain strong relationships, preserving the fine-grained details and consistency necessary for realistic video generation. This dual approach of global (SSM) and local (attention) processing allows them to achieve both long-term memory and local fidelity.
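The block-wise scan can be pictured with a toy recurrence. The snippet below is a minimal sketch assuming a simple diagonal SSM; the function and parameter names are illustrative, and the paper’s actual Mamba-style layers are considerably more sophisticated.

```python
import torch

def blockwise_ssm_scan(x, A, B, C, block_len):
    """Toy block-wise SSM scan (illustrative sketch, not the paper's layer).

    x: (seq_len, d_in) tokens; A (d_state,), B (d_state, d_in), C (d_out, d_state).
    The sequence is processed block by block, and only the compressed state h
    crosses block boundaries, so cost per block stays flat as context grows.
    """
    seq_len, _ = x.shape
    h = torch.zeros(A.shape[0])            # compressed state carried across blocks
    outputs = []
    for start in range(0, seq_len, block_len):
        block_out = []
        for x_t in x[start:start + block_len]:
            h = A * h + B @ x_t            # h_t = A * h_{t-1} + B x_t (diagonal A)
            block_out.append(C @ h)        # y_t = C h_t
        outputs.append(torch.stack(block_out))
        # h is intentionally NOT reset here: it is the only information handed
        # to the next block, which is what extends the memory horizon.
    return torch.cat(outputs)

# Toy usage with made-up sizes.
x = torch.randn(40, 8)
A = torch.rand(16) * 0.9                   # stable diagonal transition
B = torch.randn(16, 8) * 0.1
C = torch.randn(8, 16) * 0.1
y = blockwise_ssm_scan(x, A, B, C, block_len=10)
print(y.shape)  # torch.Size([40, 8])
```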
The paper also introduces two key training strategies to further improve long-context performance:
- Diffusion Forcing: During long-context training, the model is conditioned on a clean prefix of the input and learns to denoise the remaining frames, forcing it to maintain consistency over longer durations. When no prefix is sampled and all tokens stay noised, this objective reduces to diffusion forcing, which the authors highlight as the special case where the prefix length is zero. This pushes the model to generate coherent sequences even from minimal initial context (a sketch of this noising scheme follows this list).
- Frame Local Attention: For faster training and sampling, the authors implemented a “frame local attention” mechanism that uses FlexAttention to achieve significant speedups over a fully causal mask. Frames are grouped into chunks (e.g., chunks of 5 with a frame window size of 10); frames within a chunk attend to each other bidirectionally and also attend to frames in the previous chunk. This provides an effective receptive field while keeping the computational load manageable (a sketch of this masking pattern also follows this list).
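To make the prefix-conditioning idea concrete, here is a minimal sketch of how a training sequence might be noised; the helper name, the linear noising, and the probabilities are illustrative assumptions rather than the paper’s exact recipe.

```python
import torch

def noise_for_long_context_training(frames, max_prefix, p_no_prefix=0.2):
    """Illustrative noising scheme (assumed, not the paper's exact code).

    A clean prefix of random length conditions the model, and the remaining
    frames receive independent per-frame noise levels. With probability
    p_no_prefix the prefix length is zero and every frame is noised, which is
    the diffusion-forcing special case described above.
    """
    num_frames = frames.shape[0]
    if torch.rand(()) < p_no_prefix:
        prefix_len = 0                                    # diffusion forcing case
    else:
        prefix_len = int(torch.randint(1, max_prefix + 1, ()))
    noise_levels = torch.rand(num_frames)                 # per-frame noise in [0, 1)
    noise_levels[:prefix_len] = 0.0                       # prefix stays clean
    noise = torch.randn_like(frames)
    t = noise_levels.view(-1, 1, 1, 1)
    noised = (1.0 - t) * frames + t * noise               # simple linear interpolation
    return noised, noise_levels, prefix_len

# Toy usage: 12 latent "frames" of shape 3x16x16.
frames = torch.randn(12, 3, 16, 16)
noised, levels, prefix_len = noise_for_long_context_training(frames, max_prefix=6)
print(prefix_len, levels)
```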
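The frame local attention pattern can be expressed directly as a mask for PyTorch’s FlexAttention API (available in recent PyTorch releases). The sizes below are illustrative assumptions, not the paper’s configuration; the mask simply encodes “bidirectional within a chunk, plus full access to the previous chunk.”

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

# Illustrative sizes (assumptions, not the paper's configuration).
tokens_per_frame = 16    # tokens per latent frame
chunk_size = 5           # frames per chunk, as in the example above
num_frames = 40
seq_len = num_frames * tokens_per_frame

def frame_local_mask(b, h, q_idx, kv_idx):
    # Map token index -> frame index -> chunk index.
    q_chunk = (q_idx // tokens_per_frame) // chunk_size
    kv_chunk = (kv_idx // tokens_per_frame) // chunk_size
    # Bidirectional within a chunk, plus attention to the previous chunk,
    # giving an effective window of 2 * chunk_size = 10 frames.
    return (kv_chunk == q_chunk) | (kv_chunk == q_chunk - 1)

block_mask = create_block_mask(frame_local_mask, B=None, H=None,
                               Q_LEN=seq_len, KV_LEN=seq_len)

# Toy attention call using the mask (batch=1, heads=2, head_dim=16).
q = torch.randn(1, 2, seq_len, 16)
k = torch.randn(1, 2, seq_len, 16)
v = torch.randn(1, 2, seq_len, 16)
out = flex_attention(q, k, v, block_mask=block_mask)
print(out.shape)  # torch.Size([1, 2, 640, 16])
```

Because each query's allowed key/value region is a contiguous pair of chunks rather than the full history, the mask is block-sparse and the excluded blocks can be skipped, which is broadly how such a pattern can run faster than a fully causal mask.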
The researchers evaluated their LSSVWM on challenging datasets, including Memory Maze and Minecraft, which are specifically designed to test long-term memory capabilities through spatial retrieval and reasoning tasks.
The experiments demonstrate that their approach substantially surpasses baselines in preserving long-range memory. Qualitative results, as shown in supplementary figures (e.g., S1, S2, S3), illustrate that LSSVWM can generate more coherent and accurate sequences over extended periods compared to models relying solely on causal attention or even Mamba2 without frame local attention. For instance, on reasoning tasks for the maze dataset, their model maintains better consistency and accuracy over long horizons. Similarly, for retrieval tasks, LSSVWM shows improved ability to recall and utilize information from distant past frames. Crucially, these improvements are achieved while maintaining practical inference speeds, making the models suitable for interactive applications.
The paper “Long-Context State-Space Video World Models” is available on arXiv.
The post Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models first appeared on Synced.
🔗 Source: syncedreview.com
📌 MAROKO133 Exclusive ai: Meta’s Top AI Scientist Is Quitting as Zuckerberg’s Spending Spree Sputters
A major shake-up appears to be underway at Meta.
According to new reporting from the Financial Times, the company’s top AI scientist, Yann LeCun, is planning to leave his position there and start his own AI startup.
It’s a genuinely startling decision. LeCun, who joined Meta in 2013, is a towering figure in the AI industry. A Turing Award winner, the 65-year-old is considered to be one of the three so-called “godfathers” of modern AI for his pioneering work on neural networks, the tech that underpins the large language models used by much of the AI industry. As such, his presence at Meta has imbued the company’s often-struggling AI efforts with an aura of credibility.
LeCun’s departure comes as CEO Mark Zuckerberg has carried out sweeping changes to Meta’s approach to AI. Whereas the company previously had a heavy focus on research and open-source models, it’s now retooling its efforts to focus on fielding commercially competitive AI products, as its AI chatbots lag behind rivals like Google and OpenAI.
LeCun is famously something of an LLM skeptic, believing that the architecture is incapable of ever achieving human-level cognition, and therefore of producing so-called artificial general intelligence. He has even advised up-and-coming programmers not to pursue LLMs at all, and instead to work “on next-gen AI systems that lift the limitations of LLMs.” That makes him an outlier in the industry, as one of the driving promises fueling the boom is that the tech provides a direct line to creating AGI, if it isn’t already on the verge of doing so. With his focus on more esoteric forms of AI and his distaste for AI boosterism, LeCun was always an odd figure to be working at a titan like Meta.
Perhaps these contradictions finally became irreconcilable. This summer, Meta poured over $14 billion into the AI data annotation startup Scale AI and poached its then-CEO, Alexandr Wang, to lead the newly created Superintelligence Labs. Separate from LeCun’s research division, FAIR, it aims to create a “superintelligent” AI using LLM technology. LeCun, by contrast, is adamant about creating “world” models designed to understand the three-dimensional world by training them on a variety of physical data, rather than only language. LeCun says these advances could take decades, but Zuckerberg is clearly obsessed with market dominance in the immediate term.
It’s also evident which of the two AI approaches is the favorite child. Zuckerberg has tried to bring talent into the superintelligence division by offering astronomical contracts worth hundreds of millions of dollars. LeCun was also forced to start reporting to the 28-year-old Wang once Wang joined Meta, the FT noted.
The shift toward developing “superintelligence” came after Meta’s newest Llama 4 model was widely considered a disappointment, its performance lackluster compared to competitors’ efforts.
The market, however, hasn’t been fully confident in Zuckerberg’s new direction. Last month, Meta’s stock plunged by more than 11 percent following a third-quarter earnings call in which the company projected spending billions of dollars more on AI this year than earlier guidance had suggested. Following the report of LeCun’s planned departure, the stock dipped by nearly another 3 percent.
More on Meta: Meta Accuses Employee’s Dad of Downloading Gigantic Illegal Goon Stash
The post Meta’s Top AI Scientist Is Quitting as Zuckerberg’s Spending Spree Sputters appeared first on Futurism.
🔗 Source: futurism.com
🤖 MAROKO133 Note
This article is an automated summary compiled from several trusted sources. We pick trending topics so you stay up to date without missing anything.
✅ Next update in 30 minutes, with a random theme waiting!
