📌 MAROKO133 Update ai: MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI
The concept of AI self-improvement has been a hot topic in recent research circles, with a flurry of papers emerging and prominent figures like OpenAI CEO Sam Altman weighing in on the future of self-evolving intelligent systems. Now, a new paper from MIT, titled “Self-Adapting Language Models,” introduces SEAL (Self-Adapting LLMs), a novel framework that allows large language models (LLMs) to update their own weights. This development is seen as another significant step towards the realization of truly self-evolving AI.
The research paper, published yesterday, has already ignited considerable discussion, including on Hacker News. SEAL proposes a method where an LLM can generate its own training data through “self-editing” and subsequently update its weights based on new inputs. Crucially, this self-editing process is learned via reinforcement learning, with the reward mechanism tied to the updated model’s downstream performance.
The timing of this paper is particularly notable given the recent surge in interest surrounding AI self-evolution. Earlier this month, several other research efforts garnered attention, including Sakana AI and the University of British Columbia’s “Darwin-Gödel Machine (DGM),” CMU’s “Self-Rewarding Training (SRT),” Shanghai Jiao Tong University’s “MM-UPT” framework for continuous self-improvement in multimodal large models, and the “UI-Genie” self-improvement framework from The Chinese University of Hong Kong in collaboration with vivo.
Adding to the buzz, OpenAI CEO Sam Altman recently shared his vision of a future with self-improving AI and robots in his blog post, “The Gentle Singularity.” He posited that while the initial millions of humanoid robots would need traditional manufacturing, they would then be able to “operate the entire supply chain to build more robots, which can in turn build more chip fabrication facilities, data centers, and so on.” This was quickly followed by a tweet from @VraserX, claiming an OpenAI insider revealed the company was already running recursively self-improving AI internally, a claim that sparked widespread debate about its veracity.
Regardless of the specifics of internal OpenAI developments, the MIT paper on SEAL provides concrete evidence of AI’s progression towards self-evolution.
Understanding SEAL: Self-Adapting Language Models
The core idea behind SEAL is to enable language models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing. The model’s training objective is to directly generate these self-edits (SEs) using data provided within the model’s context.
The generation of these self-edits is learned through reinforcement learning. The model is rewarded when the generated self-edits, once applied, lead to improved performance on the target task. Therefore, SEAL can be conceptualized as an algorithm with two nested loops: an outer reinforcement learning (RL) loop that optimizes the generation of self-edits, and an inner update loop that uses the generated self-edits to update the model via gradient descent.
This method can be viewed as an instance of meta-learning: the model learns, at the meta level, how to generate effective self-edits.
A General Framework
SEAL operates on a single task instance (C,τ), where C is context information relevant to the task, and τ defines the downstream evaluation for assessing the model’s adaptation. For example, in a knowledge integration task, C might be a passage to be integrated into the model’s internal knowledge, and τ a set of questions about that passage.
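To make the (C, τ) structure concrete, here is a hypothetical knowledge-integration instance. The passage and questions below are invented for illustration and do not come from the paper or its datasets.

```python
# Hypothetical (C, τ) pair for knowledge integration (illustrative only, not from the paper).
context_C = (
    "The Amazon River discharges more water than any other river on Earth, "
    "draining a basin of roughly seven million square kilometers."
)
eval_tau = [
    {"question": "Which river has the largest discharge on Earth?",
     "answer": "The Amazon River"},
    {"question": "Roughly how large is the Amazon's drainage basin?",
     "answer": "About seven million square kilometers"},
]
```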
Given C, the model generates a self-edit SE, which then updates its parameters through supervised fine-tuning: θ′ ← SFT(θ, SE). Reinforcement learning is used to optimize this self-edit generation: the model performs an action (generates SE), receives a reward r based on the updated model LM_θ′’s performance on τ, and updates its policy to maximize the expected reward.
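A minimal sketch of this single-instance loop is shown below. The three helper callables (generate, finetune, evaluate) are assumptions standing in for the paper’s actual implementation, not its API.

```python
# Sketch of one SEAL adaptation step; the three callables are assumed, not the paper's code.
def seal_step(generate, finetune, evaluate, context_C, eval_tau):
    self_edit = generate(context_C)             # action: model proposes synthetic training data (SE)
    adapted_model = finetune(self_edit)         # inner update: theta' <- SFT(theta, SE)
    reward = evaluate(adapted_model, eval_tau)  # downstream performance on tau becomes the RL reward
    return self_edit, reward
```

In the full method, the reward from each such step is fed back into the outer reinforcement learning loop described next.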
The researchers found that standard on-policy RL methods such as GRPO and PPO led to unstable training. They ultimately opted for ReST^EM, a simpler, filtering-based behavioral cloning approach from a DeepMind paper. This method can be viewed as an Expectation-Maximization (EM) process: the E-step samples candidate outputs from the current model policy, and the M-step reinforces, via supervised fine-tuning, only those samples that yield a positive reward.
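The outer loop can then be sketched as a ReST^EM-style iteration wrapped around steps like the one above. Again, the helper callables here are hypothetical placeholders, and the positive-reward filter is a simplification of the paper’s reward scheme.

```python
# Sketch of one ReST^EM-style outer iteration; all callables are hypothetical placeholders.
def rest_em_iteration(policy, tasks, sample_self_edit, apply_and_score, sft_on_examples, k=4):
    kept = []
    for context_C, eval_tau in tasks:
        # E-step: sample k candidate self-edits from the current policy and score each one.
        for _ in range(k):
            self_edit = sample_self_edit(policy, context_C)
            reward = apply_and_score(policy, self_edit, eval_tau)
            if reward > 0:                      # keep only self-edits that improved performance
                kept.append((context_C, self_edit))
    # M-step: behavioral cloning (supervised fine-tuning) on the positively rewarded samples.
    return sft_on_examples(policy, kept)
```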
The paper also notes that while the current implementation uses a single model to generate and learn from self-edits, these roles could be separated in a “teacher-student” setup.
Instantiating SEAL in Specific Domains
The MIT team instantiated SEAL in two specific domains: knowledge integration and few-shot learning.
- Knowledge Integration: The goal here is to effectively integrate information from articles into the model’s weights.
- Few-Shot Learning: This involves the model adapting to new tasks with very few examples.
Experimental Results
The experimental results for both few-shot learning and knowledge integration demonstrate the effectiveness of the SEAL framework.
In few-shot learning, using a Llama-3.2-1B-Instruct model, SEAL significantly improved adaptation success rates, achieving 72.5% compared to 20% for models using basic self-edits without RL training, and 0% without adaptation. While still below “Oracle TTT” (an idealized baseline), this indicates substantial progress.
For knowledge integration, using a larger Qwen2.5-7B model to integrate new facts from SQuAD articles, SEAL consistently outperformed baseline methods. Training with synthetically generated data from the base Qwen2.5-7B model already showed notable improvements, and subsequent reinforcement learning boosted performance further. Accuracy also improved rapidly over outer RL iterations, often surpassing setups using GPT-4.1-generated data within just two iterations.
Qualitative examples from the paper illustrate how reinforcement learning leads to the generation of more detailed self-edits, resulting in improved performance.
While promising, the researchers also acknowledge some limitations of the SEAL framework, including aspects related to catastrophic forgetting, computational overhead, and context-dependent evaluation. These are discussed in detail in the original paper.
Original Paper: https://arxiv.org/pdf/2506.10943
Project Site: https://jyopari.github.io/posts/seal
GitHub Repo: https://github.com/Continual-Intelligence/SEAL
The post MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI first appeared on Synced.
🔗 Source: syncedreview.com
📌 MAROKO133 Hot ai: Meta Adding Facial Recognition to Its Smart Glasses That Identifies People in Real Time
When Meta announced it would strip its failed VR goggles division for parts, the bet was simple: funnel that money into sleek, AI-powered smart glasses instead. Emboldened by the product’s early success, the company is now working on rolling out a massive facial recognition feature across its entire smart glasses platform — a launch which involves timing the announcement with political drama to minimize scrutiny.
According to new reporting by the New York Times, Meta could make the facial recognition features available to smart glasses owners as early as this year. Internally, the software goes by the designation “Name Tag.” Per the NYT’s sources, it would let anyone who owns Meta’s smart glasses identify people in the real world, instantly pulling up their information through Meta’s AI assistant.
Since early 2025, the NYT notes, Meta insiders have been hemming and hawing over how to roll out the feature, acknowledging the significant “safety and privacy risks” it carries.
Disturbingly, documents viewed by the paper reportedly show the company planning to wash its product launch through the disabled community. That never came to pass, though it evidently would have involved introducing Name Tag as an accessibility feature at a conference for blind users before unleashing it on the public.
The same documents likewise argued that domestic political turmoil across the US in May of 2025 — this was in the early days of Trump’s deportation campaign, Elon Musk’s DOGE agenda, and more — would make for an appealing time window for the release of such a controversial feature because the public would be too burned out to notice or care.
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” the memo read, per the NYT.
In a statement, Meta told the paper they’re “building products that help millions of people connect and enrich their lives. While we frequently hear about the interest in this type of feature — and some products already exist in the market — we’re still thinking through options and will take a thoughtful approach if and before we roll anything out.”
If they do, the privacy nightmare Meta’s smart glasses already represent will only get worse. As American Civil Liberties Union deputy director Nathan Freed Wessler told the NYT: “Face recognition technology on the streets of America poses a uniquely dire threat to the practical anonymity we all rely on. This technology is ripe for abuse.”
More on Meta: Court Filing Reveals Something Very Nasty About Mark Zuckerberg
The post Meta Adding Facial Recognition to Its Smart Glasses That Identifies People in Real Time, Hoping the Public Is Too Distracted by Political Turmoil to Care appeared first on Futurism.
🔗 Source: futurism.com
🤖 MAROKO133 Notes
This article is an automated roundup of several trusted sources. We pick trending topics so you always stay up to date without missing anything.
✅ Next update in 30 minutes, with a random theme waiting!