MAROKO133 Update ai: MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI

📌 MAROKO133 Breaking ai: MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI

The concept of AI self-improvement has been a hot topic in recent research circles, with a flurry of papers emerging and prominent figures like OpenAI CEO Sam Altman weighing in on the future of self-evolving intelligent systems. Now, a new paper from MIT, titled “Self-Adapting Language Models,” introduces SEAL (Self-Adapting LLMs), a novel framework that allows large language models (LLMs) to update their own weights. This development is seen as another significant step towards the realization of truly self-evolving AI.

The research paper, published yesterday, has already ignited considerable discussion, including on Hacker News. SEAL proposes a method where an LLM can generate its own training data through “self-editing” and subsequently update its weights based on new inputs. Crucially, this self-editing process is learned via reinforcement learning, with the reward mechanism tied to the updated model’s downstream performance.

The timing of this paper is particularly notable given the recent surge in interest surrounding AI self-evolution. Earlier this month, several other research efforts garnered attention, including Sakana AI and the University of British Columbia’s “Darwin-Gödel Machine (DGM),” CMU’s “Self-Rewarding Training (SRT),” Shanghai Jiao Tong University’s “MM-UPT” framework for continuous self-improvement in multimodal large models, and the “UI-Genie” self-improvement framework from The Chinese University of Hong Kong in collaboration with vivo.

Adding to the buzz, OpenAI CEO Sam Altman recently shared his vision of a future with self-improving AI and robots in his blog post, “The Gentle Singularity.” He posited that while the initial millions of humanoid robots would need traditional manufacturing, they would then be able to “operate the entire supply chain to build more robots, which can in turn build more chip fabrication facilities, data centers, and so on.” This was quickly followed by a tweet from @VraserX, claiming an OpenAI insider revealed the company was already running recursively self-improving AI internally, a claim that sparked widespread debate about its veracity.

Regardless of the specifics of internal OpenAI developments, the MIT paper on SEAL provides concrete evidence of AI’s progression towards self-evolution.

Understanding SEAL: Self-Adapting Language Models

The core idea behind SEAL is to enable language models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing. The model’s training objective is to directly generate these self-edits (SEs) using data provided within the model’s context.

The generation of these self-edits is learned through reinforcement learning. The model is rewarded when the generated self-edits, once applied, lead to improved performance on the target task. Therefore, SEAL can be conceptualized as an algorithm with two nested loops: an outer reinforcement learning (RL) loop that optimizes the generation of self-edits, and an inner update loop that uses the generated self-edits to update the model via gradient descent.
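To make this structure concrete, below is a minimal Python sketch of the two nested loops. Every name here (Model, Task, sft_update, reinforce) is a hypothetical stand-in for illustration, not the authors' actual code:

```python
from typing import Callable, List, Tuple

# Hypothetical abstractions: a "model" is reduced to a text-in/text-out policy.
Model = Callable[[str], str]                    # prompt -> generated self-edit
Task = Tuple[str, Callable[[Model], float]]     # (context C, evaluator for τ)

def seal_training(
    model: Model,
    tasks: List[Task],
    sft_update: Callable[[Model, str], Model],  # inner loop: gradient-descent update
    reinforce: Callable[[Model, List[Tuple[str, str, float]]], Model],  # outer RL step
    iterations: int,
) -> Model:
    """Outer loop: optimize how self-edits are generated.
    Inner loop: apply each self-edit and score the updated model."""
    for _ in range(iterations):
        experiences = []
        for context, evaluate in tasks:
            self_edit = model(context)              # model writes its own training data
            updated = sft_update(model, self_edit)  # θ′ ← SFT(θ, SE)
            reward = evaluate(updated)              # downstream performance on τ
            experiences.append((context, self_edit, reward))
        model = reinforce(model, experiences)       # improve the self-edit policy
    return model
```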

This method can be viewed as an instance of meta-learning: the model learns how to generate effective self-edits, rather than being handed a fixed adaptation procedure.

A General Framework

SEAL operates on a single task instance (C, τ), where C is context information relevant to the task, and τ defines the downstream evaluation for assessing the model’s adaptation. For example, in a knowledge integration task, C might be a passage to be integrated into the model’s internal knowledge, and τ a set of questions about that passage.

Given C, the model generates a self-edit SE, which then updates its parameters through supervised fine-tuning: θ′ ← SFT(θ, SE). Reinforcement learning is used to optimize this self-edit generation: the model performs an action (generates SE), receives a reward r based on the updated model LM_θ′’s performance on τ, and updates its policy to maximize the expected reward.

The researchers found that standard on-policy RL methods such as GRPO and PPO led to unstable training. They ultimately opted for ReST^EM, a simpler, filtering-based behavioral cloning approach from a DeepMind paper. ReST^EM can be viewed as an Expectation-Maximization (EM) process: the E-step samples candidate outputs from the current model policy, and the M-step reinforces only those samples that yield a positive reward through supervised fine-tuning.
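Under the same hypothetical abstractions as the previous sketch, one ReST^EM-style round might look like the following; the sampling count and binary reward threshold are illustrative assumptions, not the authors' settings:

```python
from typing import Callable, List, Tuple

Model = Callable[[str], str]                     # as in the previous sketch
Task = Tuple[str, Callable[[Model], float]]

def rest_em_round(
    model: Model,
    tasks: List[Task],
    sft_update: Callable[[Model, str], Model],   # applies a self-edit to the weights
    clone: Callable[[Model, List[Tuple[str, str]]], Model],  # supervised fine-tuning
    num_samples: int = 4,                        # illustrative, not the paper's value
) -> Model:
    """E-step: sample candidate self-edits from the current policy.
    M-step: behavioral cloning on the samples that earned a positive reward."""
    kept: List[Tuple[str, str]] = []
    for context, evaluate in tasks:
        for _ in range(num_samples):             # E-step: sample candidates
            self_edit = model(context)
            updated = sft_update(model, self_edit)
            if evaluate(updated) > 0:            # keep only positive-reward samples
                kept.append((context, self_edit))
    return clone(model, kept)                    # M-step: no policy-gradient machinery
```

Because the update reduces to a filtered supervised step, this sidesteps the instability the authors observed with on-policy methods.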

The paper also notes that while the current implementation uses a single model to generate and learn from self-edits, these roles could be separated in a “teacher-student” setup.

Instantiating SEAL in Specific Domains

The MIT team instantiated SEAL in two specific domains: knowledge integration and few-shot learning.

  • Knowledge Integration: The goal is to integrate information from articles directly into the model’s weights, so the model can answer related questions without the source passage in context (a hypothetical example of such a self-edit follows this list).
  • Few-Shot Learning: The model adapts to new tasks given only a handful of examples.
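For intuition, here is a purely illustrative example of what a knowledge-integration self-edit could look like. The passage, prompt wording, and generated statements are all invented for this sketch; the paper's actual prompt templates may differ:

```python
# All strings below are invented for illustration; none come from the paper.
PASSAGE = (
    "SEAL is a framework that lets a language model generate its own "
    "fine-tuning data and update its weights through self-edits."
)

# A hypothetical prompt asking the model to restate the passage as
# standalone statements usable as training data:
SELF_EDIT_PROMPT = (
    "List the implications of the following passage as standalone "
    f"statements suitable for fine-tuning:\n\n{PASSAGE}"
)

# One plausible self-edit the model might return. The paper's qualitative
# examples suggest that RL training pushes these edits to become longer
# and more detailed:
self_edit = [
    "SEAL lets a language model generate its own fine-tuning data.",
    "SEAL updates a model's weights through self-edits.",
    "A self-edit is produced by the model itself, not by human annotators.",
]

# Each statement would serve as a supervised fine-tuning example; the updated
# model is then evaluated on questions about the passage (the task's τ).
```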

Experimental Results

The experimental results for both few-shot learning and knowledge integration demonstrate the effectiveness of the SEAL framework.

In few-shot learning, using a Llama-3.2-1B-Instruct model, SEAL significantly improved adaptation success rates, achieving 72.5% compared to 20% for models using basic self-edits without RL training, and 0% without adaptation. While still below “Oracle TTT” (an idealized baseline), this indicates substantial progress.

For knowledge integration, using a larger Qwen2.5-7B model to integrate new facts from SQuAD articles, SEAL consistently outperformed baseline methods. Training on synthetic data generated by the base Qwen2.5-7B model already showed notable improvements, and subsequent reinforcement learning boosted performance further. Accuracy also improved rapidly over outer RL iterations, often surpassing setups using GPT-4.1-generated data within just two iterations.

Qualitative examples from the paper illustrate how reinforcement learning leads to the generation of more detailed self-edits, resulting in improved performance.

While promising, the researchers also acknowledge some limitations of the SEAL framework, including aspects related to catastrophic forgetting, computational overhead, and context-dependent evaluation. These are discussed in detail in the original paper.

Original Paper: https://arxiv.org/pdf/2506.10943

Project Site: https://jyopari.github.io/posts/seal

GitHub Repo: https://github.com/Continual-Intelligence/SEAL


🔗 Source: syncedreview.com


📌 MAROKO133 Update ai: Chinese defense companies hint at new long-range military shooting record

Two Chinese defense companies have issued vague statements that together point to a possible new long-distance shooting record, although neither has provided technical specifications or independently verifiable data.

The first indication emerged at the end of last month, when Chongqing Changjiang Electric Appliances Industries Group, one of China’s largest ammunition manufacturers, said in an announcement that an unspecified product had “successfully refreshed the world record for similar products” during what it described as a “specialised test”.

A day later, Hunan Huanan OptoElectronic Group, a supplier of military-grade optics, released a brief statement of its own. It said its product had been used in a “sniper-specific test” and had “again supported the system in refreshing a world record in the same field”, further fueling speculation that a new benchmark may have been set in long-range precision shooting.

New claims revive debate over extreme long-distance accuracy

Hunan Huanan OptoElectronic highlighted the optic’s “sharp imaging” and “rock-solid optical axis”, but did not disclose any specifics about the test conditions, the alleged record, or which exact product was involved, the South China Morning Post reports.

A possible clue comes from last year, when Hunan Huanan OptoElectronic reported that the domestically developed CS/LR24 rifle successfully hit five out of five targets at a distance of about 9,898 feet (roughly 3,017 meters). That performance was described as a record for a rifle using 8.6mm ammunition, a caliber class historically dominated by British and American systems.

The 8.6mm round corresponds to the Western .338 caliber, which sits between NATO’s standard 7.62×51mm round and the much larger .50 BMG, and is often used in long-range precision shooting roles where balance between recoil, range, and terminal performance is critical.

The confirmed sniper-kill distance record stands at 13,123 feet (4,000 meters), set in August last year by a sniper from Ukraine’s “Pryvid” unit, reportedly using a caliber larger than 8.6mm. Following the recent statements from the two Chinese defense companies, online forums have speculated that the system involved may have exceeded 11,483 feet (about 3,500 meters).

The factors behind extreme-range sniper capability

A Chinese military equipment expert, speaking on condition of anonymity due to the sensitivity of the subject, explained that the lack of disclosure reflects the strategic value of weapons performance data and why such results are rarely made fully public. According to the expert, extreme-range sniper capability cannot be attributed to a single technological improvement or platform alone.

Instead, it emerges from the integration of multiple tightly linked systems, including ammunition consistency, ballistic stability, optical precision, environmental compensation, and shooter training. Each component must perform within narrow tolerances, particularly at extended distances where minor deviations can significantly affect accuracy.

Furthermore, releasing precise data on a rifle’s maximum range would give foreign analysts a clear benchmark for assessing Chinese long-distance engagement capabilities, the equipment specialist said.

The expert added that such announcements are likely not primarily intended for external audiences, but instead serve internal objectives, including boosting morale, securing industrial recognition, and signaling performance achievements within China’s military procurement system.

🔗 Source: interestingengineering.com


🤖 MAROKO133 Note

This article is an automatic summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes, with a random topic to come!

Author: timuna