📌 MAROKO133 Breaking ai: MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI
The concept of AI self-improvement has been a hot topic in recent research circles, with a flurry of papers emerging and prominent figures like OpenAI CEO Sam Altman weighing in on the future of self-evolving intelligent systems. Now, a new paper from MIT, titled “Self-Adapting Language Models,” introduces SEAL (Self-Adapting LLMs), a novel framework that allows large language models (LLMs) to update their own weights. This development is seen as another significant step towards the realization of truly self-evolving AI.
The research paper, published yesterday, has already ignited considerable discussion, including on Hacker News. SEAL proposes a method where an LLM can generate its own training data through “self-editing” and subsequently update its weights based on new inputs. Crucially, this self-editing process is learned via reinforcement learning, with the reward mechanism tied to the updated model’s downstream performance.
The timing of this paper is particularly notable given the recent surge in interest surrounding AI self-evolution. Earlier this month, several other research efforts garnered attention, including Sakana AI and the University of British Columbia’s “Darwin-Gödel Machine (DGM),” CMU’s “Self-Rewarding Training (SRT),” Shanghai Jiao Tong University’s “MM-UPT” framework for continuous self-improvement in multimodal large models, and the “UI-Genie” self-improvement framework from The Chinese University of Hong Kong in collaboration with vivo.
Adding to the buzz, OpenAI CEO Sam Altman recently shared his vision of a future with self-improving AI and robots in his blog post, “The Gentle Singularity.” He posited that while the initial millions of humanoid robots would need traditional manufacturing, they would then be able to “operate the entire supply chain to build more robots, which can in turn build more chip fabrication facilities, data centers, and so on.” This was quickly followed by a tweet from @VraserX, claiming an OpenAI insider revealed the company was already running recursively self-improving AI internally, a claim that sparked widespread debate about its veracity.
Regardless of the specifics of internal OpenAI developments, the MIT paper on SEAL provides concrete evidence of AI’s progression towards self-evolution.
Understanding SEAL: Self-Adapting Language Models
The core idea behind SEAL is to enable language models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing. The model’s training objective is to directly generate these self-edits (SEs) using data provided within the model’s context.
The generation of these self-edits is learned through reinforcement learning. The model is rewarded when the generated self-edits, once applied, lead to improved performance on the target task. Therefore, SEAL can be conceptualized as an algorithm with two nested loops: an outer reinforcement learning (RL) loop that optimizes the generation of self-edits, and an inner update loop that uses the generated self-edits to update the model via gradient descent.
This method can be viewed as an instance of meta-learning, where the focus is on how to generate effective self-edits in a meta-learning fashion.
A General Framework
SEAL operates on a single task instance (C,τ), where C is context information relevant to the task, and τ defines the downstream evaluation for assessing the model’s adaptation. For example, in a knowledge integration task, C might be a passage to be integrated into the model’s internal knowledge, and τ a set of questions about that passage.
Given C, the model generates a self-edit SE, which then updates its parameters through supervised fine-tuning: θ′ ← SFT(θ, SE). Reinforcement learning is used to optimize this self-edit generation: the model performs an action (generates SE), receives a reward r based on the updated model LM_θ′'s performance on τ, and updates its policy to maximize the expected reward.
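As a rough illustration, the per-instance adaptation step might be sketched as follows; `generate_self_edit`, `finetune` and `evaluate` are hypothetical stand-ins for the paper's actual machinery, not its real API:

```python
def adapt_on_instance(model, context, evaluation):
    """Illustrative inner-loop step of SEAL on one (C, τ) task instance."""
    # The model writes its own synthetic training data from the context C.
    self_edit = generate_self_edit(model, context)       # hypothetical helper

    # θ' ← SFT(θ, SE): supervised fine-tuning on the generated self-edit.
    adapted_model = finetune(model, self_edit)            # hypothetical helper

    # The reward is the adapted model's downstream performance on τ.
    reward = evaluate(adapted_model, evaluation)          # hypothetical helper
    return self_edit, adapted_model, reward
```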
The researchers found that traditional online policy methods like GRPO and PPO led to unstable training. They ultimately opted for ReST^EM, a simpler, filtering-based behavioral cloning approach from a DeepMind paper. This method can be viewed as an Expectation-Maximization (EM) process, where the E-step samples candidate outputs from the current model policy, and the M-step reinforces only those samples that yield a positive reward through supervised fine-tuning.
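Under the same assumptions, the ReST^EM-style outer loop reduces to filtered behavioral cloning; the iteration counts and the `sft_on` helper below are illustrative, not taken from the paper:

```python
def rest_em_outer_loop(model, tasks, num_iters=2, samples_per_task=4):
    """Sketch of ReST^EM: reinforce only the self-edits that earned a positive reward."""
    for _ in range(num_iters):
        accepted = []
        # E-step: sample candidate self-edits from the current policy.
        for context, evaluation in tasks:
            for _ in range(samples_per_task):
                self_edit, _, reward = adapt_on_instance(model, context, evaluation)
                if reward > 0:
                    accepted.append((context, self_edit))
        # M-step: supervised fine-tuning on the positively rewarded samples only.
        model = sft_on(model, accepted)                   # hypothetical helper
    return model
```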
The paper also notes that while the current implementation uses a single model to generate and learn from self-edits, these roles could be separated in a “teacher-student” setup.
Instantiating SEAL in Specific Domains
The MIT team instantiated SEAL in two specific domains: knowledge integration and few-shot learning.
- Knowledge Integration: The goal here is to effectively integrate information from articles into the model’s weights.
- Few-Shot Learning: This involves the model adapting to new tasks with very few examples.
Experimental Results
The experimental results for both few-shot learning and knowledge integration demonstrate the effectiveness of the SEAL framework.
In few-shot learning, using a Llama-3.2-1B-Instruct model, SEAL significantly improved adaptation success rates, achieving 72.5% compared to 20% for models using basic self-edits without RL training, and 0% without adaptation. While still below “Oracle TTT” (an idealized baseline), this indicates substantial progress.
For knowledge integration, using a larger Qwen2.5-7B model to integrate new facts from SQuAD articles, SEAL consistently outperformed baseline methods. Training with synthetic data generated by the base Qwen2.5-7B model already showed notable improvements, and subsequent reinforcement learning boosted performance further. Accuracy also improved rapidly over outer RL iterations, often surpassing setups using GPT-4.1-generated data within just two iterations.
Qualitative examples from the paper illustrate how reinforcement learning leads to the generation of more detailed self-edits, resulting in improved performance.
While promising, the researchers also acknowledge some limitations of the SEAL framework, including aspects related to catastrophic forgetting, computational overhead, and context-dependent evaluation. These are discussed in detail in the original paper.
Original Paper: https://arxiv.org/pdf/2506.10943
Project Site: https://jyopari.github.io/posts/seal
Github Repo: https://github.com/Continual-Intelligence/SEAL
The post MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI first appeared on Synced.
🔗 Source: syncedreview.com
📌 MAROKO133 Eksklusif ai: Why AI coding agents aren’t production-ready: Brittle co
Remember this Quora comment (which also became a meme)?
(Source: Quora)
In the pre-large language model (LLM) Stack Overflow era, the challenge was discerning which code snippets to adopt and adapt effectively. Now, while generating code has become trivially easy, the more profound challenge lies in reliably identifying and integrating high-quality, enterprise-grade code into production environments.
This article examines the practical pitfalls and limitations observed when engineers use modern coding agents for real enterprise work, addressing the more complex issues around integration, scalability, accessibility, evolving security practices, data privacy and maintainability in live operational settings. We hope to balance out the hype and provide a more technically grounded view of the capabilities of AI coding agents.
Limited domain understanding and service limits
AI agents struggle significantly with designing scalable systems due to the sheer explosion of choices and a critical lack of enterprise-specific context. In broad strokes, large enterprise codebases and monorepos are often too vast for agents to learn from directly, and crucial knowledge is frequently fragmented across internal documentation and individual expertise.
More specifically, many popular coding agents encounter service limits that hinder their effectiveness in large-scale environments. Indexing features may fail or degrade in quality for repositories exceeding 2,500 files, or due to memory constraints. Furthermore, files larger than 500 KB are often excluded from indexing/search, which impacts established products with decades-old, larger code files (although newer projects may admittedly face this less frequently).
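As a rough illustration, a developer can pre-check whether a repository is likely to trip such limits before handing it to an agent; this sketch simply reuses the 2,500-file and 500 KB figures quoted above as thresholds, which will vary by tool:

```python
from pathlib import Path

def audit_repo(root: str, max_files: int = 2500, max_file_kb: int = 500) -> None:
    """Report how close a repo is to the (assumed) indexing limits above."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    oversized = [p for p in files if p.stat().st_size > max_file_kb * 1024]
    print(f"{len(files)} files total (assumed indexing limit ~{max_files})")
    print(f"{len(oversized)} files over {max_file_kb} KB that may be skipped by indexing/search")
    for p in oversized[:10]:
        print(f"  {p} ({p.stat().st_size // 1024} KB)")

if __name__ == "__main__":
    audit_repo(".")
```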
For complex tasks involving extensive file contexts or refactoring, developers are expected to provide the relevant files while also explicitly defining the refactoring procedure and the surrounding build/command sequences needed to validate the implementation without introducing feature regressions.
Lack of hardware context and usage
AI agents have demonstrated a critical lack of awareness of the host operating system, command-line shell and environment setup (conda/venv). This deficiency can lead to frustrating experiences, such as the agent attempting to execute Linux commands in PowerShell, which consistently results in ‘unrecognized command’ errors. Furthermore, agents frequently exhibit inconsistent ‘wait tolerance’ when reading command outputs, prematurely declaring an inability to read results (and moving ahead to retry or skip) before a command has even finished, especially on slower machines.
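A minimal sketch, using only the Python standard library, of the kind of environment probe a developer might run (or paste into a prompt) before letting an agent issue shell commands; the field names are illustrative:

```python
import os
import platform
import shutil
import sys

def describe_environment() -> dict:
    """Collect the facts agents tend to guess wrong: OS, shell and active Python environment."""
    return {
        "os": platform.system(),                            # e.g. 'Windows', 'Linux', 'Darwin'
        "shell": os.environ.get("SHELL") or os.environ.get("COMSPEC", "unknown"),
        "python": sys.executable,                           # interpreter actually in use
        "conda_env": os.environ.get("CONDA_DEFAULT_ENV"),    # set when a conda env is active
        "in_venv": sys.prefix != getattr(sys, "base_prefix", sys.prefix),
        "has_powershell_core": shutil.which("pwsh") is not None,
    }

if __name__ == "__main__":
    for key, value in describe_environment().items():
        print(f"{key}: {value}")
```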
This isn't merely about nitpicking features; rather, the devil is in these practical details. These experience gaps manifest as real points of friction and necessitate constant human vigilance to monitor the agent’s activity in real-time. Otherwise, the agent might ignore initial tool call information and either stop prematurely, or proceed with a half-baked solution requiring undoing some/all changes, re-triggering prompts and wasting tokens. Submitting a prompt on a Friday evening and expecting the code updates to be done when checking on Monday morning is not guaranteed.
Hallucinations over repeated actions
Working with AI coding agents presents the longstanding challenge of hallucinations: incorrect or incomplete pieces of information (such as small code snippets) within a larger set of changes, which a developer is expected to fix with trivial-to-low effort. What becomes particularly problematic, however, is when the incorrect behavior repeats within a single thread, forcing users either to start a new thread and re-provide all context, or to intervene manually to “unblock” the agent.
For instance, during an Azure Python Function setup, an agent tasked with implementing complex production-readiness changes encountered a file (see below) containing special characters (parentheses, periods and asterisks). Such characters are commonly used to denote software versions.
(Image created manually with boilerplate code. Source: Microsoft Learn and Editing Application Host File (host.json) in Azure Portal)
The agent incorrectly flagged the value as unsafe or harmful and halted the entire generation process. This misidentification as an adversarial attack recurred four to five times, despite various prompts attempting to restart or continue the modification. The version format is, in fact, boilerplate present in the Python HTTP-trigger code template. The only successful workaround was to instruct the agent not to read the file at all: ask it to simply output the desired configuration, assure it that the developer would add it to the file manually, then confirm and have it continue with the remaining code changes.
The inability to exit a repeatedly faulty output loop within the same thread highlights a practical limitation that significantly wastes development time. In essence, developers now tend to spend their time debugging and refining AI-generated code rather than Stack Overflow snippets or their own.
Lack of enterprise-grade coding practices
Security best practices: Coding agents often default to less secure authentication methods like key-based authentication (client secrets) rather than modern identity-based solutions (such as Entra ID or federated credentials). This oversight can introduce significant vulnerabilities and increase maintenance overhead, as key management and rotation are complex tasks increasingly restricted in enterprise environments.
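As a hedged illustration of the difference, here is a minimal sketch using Azure Blob Storage; the storage account URL and connection string are placeholders, not details from the article:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# What agents often generate: key-based auth via a connection string, which
# then has to be stored, rotated and protected.
# client = BlobServiceClient.from_connection_string("<connection-string-with-account-key>")

# Identity-based auth: no secret in code or config; the credential resolves to a
# managed identity, workload identity or developer sign-in at runtime.
credential = DefaultAzureCredential()
client = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=credential,
)
```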
Outdated SDKs and reinventing the wheel: Agents may not consistently leverage the latest SDK methods, instead generating more verbose and harder-to-maintain implementations. Piggybacking on the Azure Function example, agents have outputted code using the pre-existing v1 SDK for read/write operations, rather than the much cleaner and more maintainable v2 SDK code. Developers must research the latest best practices online to have a mental map of dependencies and expected implementation that ensures long-term maintainability and reduces upcoming tech migration efforts.
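One concrete instance of that gap, sketched under the assumption that the v1/v2 contrast refers to the Azure Functions Python programming models: the v2 model declares an HTTP trigger with decorators instead of a separate function.json, the kind of newer, cleaner pattern agents tend to miss:

```python
import azure.functions as func

# v2 programming model: bindings are declared in code via decorators.
app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Read a query parameter and respond; no function.json needed.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```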
Limited intent recognition and repetitive code: Even for smaller-scoped, modular tasks (which are typically encouraged to minimize hallucinations or debugging downtime), like extending an existing function definition, agents may follow the instruction literally and produce logic that turns out to be near-repetitive, without anticipating the developer's upcoming or unarticulated needs. That is, in these modular tasks the agent may not automatically identify and refactor similar logic into shared functions or improve class definitions, leading to tech debt and harder-to-manage codebases, especially with vibe coding or lazy developers.
Simply put, those viral YouTube reels showcasing rapid zero-to-one app development from a single-sentence prompt fail to capture the nuanced challenges of …
Content automatically shortened.
🔗 Source: venturebeat.com
🤖 MAROKO133 Note
This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date without missing out.
✅ Next update in 30 minutes, with a random topic waiting!