MAROKO133 Breaking ai: NASA Scientists Screamed With Delight When They Saw Something Smashing Into the Moon

📌 MAROKO133 Hot ai: NASA Scientists Screamed With Delight When They Saw Something Smashing Into the Moon

As NASA’s Artemis 2 crew careened around the far side of the Moon earlier this week — breaking the record for how far humans have ever traveled from Earth in the process — they were treated to an incredible view.

As they cruised past the Moon’s heavily cratered far side, the astronauts watched in amazement as micrometeorites struck the lunar surface, catching both them and mission control off guard. The crew said they witnessed at least six impacts on the lunar far side during a nearly hour-long total solar eclipse, as the Sun slipped out of view behind the Earth from their perspective.

There were “audible screams of delight” at Mission Control in Houston, as mission science lead Kelsey Young recalled during a Tuesday press conference.

“There was a little giddiness,” NASA astronaut and commander Reid Wiseman told Houston during the observation period. “We have seen three impact flashes so far. I saw two, and [mission specialist Jeremy Hansen] has seen one.”

“Undoubtedly quick impact flashes,” he said, adding that “it was not Sun glint off a particulate from the thrusters, the burns, or the tanks.”

“And Jeremy just saw another one,” Wiseman added.

The look on Young’s face said it all. The livestream showed her jaw hitting the floor as she looked around the room at Mission Control in disbelief.

“I don’t know if I expected to have the crew see any on this mission, so you probably saw the surprise and shock on my face,” she later recalled.

While the team said that they already got what they came for — astonishing close-up views of the lunar surface and its unusual geological features — the constant bombardment of tiny meteorites was unexpected.

“This is absolutely everything we hoped for by integrating science into flight operations,” Young told reporters. “Science enables exploration, and exploration enables science.”

“They were really high-priority science for us, so the fact that they saw four or five was just outstanding,” Canadian backup astronaut Jenni Gibbons told Agence France-Presse.

Micrometeorites are already a major point of discussion as the United States continues to push for the establishment of a permanent settlement on the Moon. Besides “moonquakes” and massive amounts of space radiation, future astronauts will need to have sufficient shelter to protect them from these errant space rocks.

Without a protective atmosphere like Earth’s, which acts as a shield and causes most incoming meteoroids to burn up, the Moon is largely exposed, as evidenced by its crater-riddled appearance.

Even though the fragments are extremely small, they can still strike the surface with enormous force while traveling at tens of miles per second. In other words, future lunar habitats will quite literally need to be bulletproof.

In a 2025 study, scientists used NASA’s Meteoroid Engineering Model to calculate impact rates for a hypothetical lunar base the size of the International Space Station. They found that between 15,000 and 23,000 particles, ranging from a millionth of a gram to ten grams, could strike such a habitat per year. (It’s unclear whether the latest first-hand observations could influence their findings.)

However, the researchers identified some areas — including the lunar south pole, which NASA is eyeing for its first Artemis base — as being less battered.

Another possibility is to seek shelter inside deeper craters or in caves left behind by lava tubes, which would shield against both meteorites and space radiation, an idea we’ve only begun to explore.

More on the mission: We’re In Utter Disbelief About the Photos the Moon Astronauts Just Sent Back

The post NASA Scientists Screamed With Delight When They Saw Something Smashing Into the Moon appeared first on Futurism.

🔗 Source: futurism.com


📌 MAROKO133 Hot ai: MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI

The concept of AI self-improvement has been a hot topic in recent research circles, with a flurry of papers emerging and prominent figures like OpenAI CEO Sam Altman weighing in on the future of self-evolving intelligent systems. Now, a new paper from MIT, titled “Self-Adapting Language Models,” introduces SEAL (Self-Adapting LLMs), a novel framework that allows large language models (LLMs) to update their own weights. This development is seen as another significant step towards the realization of truly self-evolving AI.

The research paper, published yesterday, has already ignited considerable discussion, including on Hacker News. SEAL proposes a method where an LLM can generate its own training data through “self-editing” and subsequently update its weights based on new inputs. Crucially, this self-editing process is learned via reinforcement learning, with the reward mechanism tied to the updated model’s downstream performance.

The timing of this paper is particularly notable given the recent surge in interest surrounding AI self-evolution. Earlier this month, several other research efforts garnered attention, including Sakana AI and the University of British Columbia’s “Darwin-Gödel Machine (DGM),” CMU’s “Self-Rewarding Training (SRT),” Shanghai Jiao Tong University’s “MM-UPT” framework for continuous self-improvement in multimodal large models, and the “UI-Genie” self-improvement framework from The Chinese University of Hong Kong in collaboration with vivo.

Adding to the buzz, OpenAI CEO Sam Altman recently shared his vision of a future with self-improving AI and robots in his blog post, “The Gentle Singularity.” He posited that while the initial millions of humanoid robots would need traditional manufacturing, they would then be able to “operate the entire supply chain to build more robots, which can in turn build more chip fabrication facilities, data centers, and so on.” This was quickly followed by a tweet from @VraserX, claiming an OpenAI insider revealed the company was already running recursively self-improving AI internally, a claim that sparked widespread debate about its veracity.

Regardless of the specifics of internal OpenAI developments, the MIT paper on SEAL provides concrete evidence of AI’s progression towards self-evolution.

Understanding SEAL: Self-Adapting Language Models

The core idea behind SEAL is to enable language models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing. The model’s training objective is to directly generate these self-edits (SEs) using data provided within the model’s context.

The generation of these self-edits is learned through reinforcement learning. The model is rewarded when the generated self-edits, once applied, lead to improved performance on the target task. Therefore, SEAL can be conceptualized as an algorithm with two nested loops: an outer reinforcement learning (RL) loop that optimizes the generation of self-edits, and an inner update loop that uses the generated self-edits to update the model via gradient descent.
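To make the two-loop structure concrete, here is a minimal Python sketch. The function names and signatures are placeholders, not the SEAL authors’ actual API; the real implementation is in the repository linked below.

```python
from typing import Any, Callable, List, Tuple

def seal_training(
    model: Any,
    tasks: List[Tuple[str, Any]],                    # (C, tau) task instances
    generate_self_edit: Callable[[Any, str], str],   # SE sampled from LM_theta given C
    sft: Callable[[Any, str], Any],                  # inner update: theta' <- SFT(theta, SE)
    evaluate: Callable[[Any, Any], float],           # reward r from performance on tau
    rl_update: Callable[[Any, List[Tuple[str, str, float]]], Any],
    num_iterations: int = 3,
) -> Any:
    """Outer RL loop wrapped around an inner weight-update loop (illustrative only)."""
    for _ in range(num_iterations):                  # outer reinforcement-learning loop
        experiences = []
        for context, evaluation in tasks:
            self_edit = generate_self_edit(model, context)  # the model writes its own training data
            updated = sft(model, self_edit)                  # inner loop: gradient-descent update
            reward = evaluate(updated, evaluation)           # downstream performance on tau
            experiences.append((context, self_edit, reward))
        model = rl_update(model, experiences)        # reinforce self-edits that raised the reward
    return model
```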

This method can be viewed as an instance of meta-learning, where the focus is on learning how to generate effective self-edits.

A General Framework

SEAL operates on a single task instance (C,τ), where C is context information relevant to the task, and τ defines the downstream evaluation for assessing the model’s adaptation. For example, in a knowledge integration task, C might be a passage to be integrated into the model’s internal knowledge, and τ a set of questions about that passage.

Given C, the model generates a self-edit SE, which then updates its parameters through supervised fine-tuning: θ′ ← SFT(θ, SE). Reinforcement learning is used to optimize this self-edit generation: the model performs an action (generates SE), receives a reward r based on the updated model LM_θ′’s performance on τ, and updates its policy to maximize the expected reward.
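Written as an objective, the outer loop maximizes the expected downstream reward of the updated model. The following is a rough rendering consistent with the description above, though the paper’s exact notation may differ:

```latex
\max_{\theta} \;
\mathbb{E}_{(C,\,\tau)} \,
\mathbb{E}_{SE \,\sim\, \mathrm{LM}_{\theta}(\cdot \mid C)}
\big[\, r\big(\mathrm{LM}_{\theta'}, \tau\big) \big],
\qquad \theta' = \mathrm{SFT}(\theta, SE)
```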

The researchers found that traditional on-policy methods like GRPO and PPO led to unstable training. They ultimately opted for ReST^EM, a simpler, filtering-based behavioral cloning approach from a DeepMind paper. This method can be viewed as an Expectation-Maximization (EM) process: the E-step samples candidate outputs from the current model policy, and the M-step reinforces, via supervised fine-tuning, only those samples that yield a positive reward.
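A minimal sketch of one ReST^EM round under these assumptions, with placeholder callables rather than the paper’s code:

```python
from typing import Any, Callable, List, Tuple

def rest_em_round(
    model: Any,
    contexts: List[str],
    sample_self_edits: Callable[[Any, str, int], List[str]],  # E-step: sample candidate SEs for a context
    reward: Callable[[Any, str, str], float],                 # applies the SE (inner SFT) and scores on tau
    sft_on: Callable[[Any, List[Tuple[str, str]]], Any],      # M-step: supervised fine-tuning on kept pairs
    num_samples: int = 4,
) -> Any:
    # E-step: sample candidate self-edits from the current model policy.
    kept: List[Tuple[str, str]] = []
    for context in contexts:
        for self_edit in sample_self_edits(model, context, num_samples):
            # Keep only samples whose applied update yields a positive downstream reward.
            if reward(model, context, self_edit) > 0:
                kept.append((context, self_edit))
    # M-step: reinforce the positively rewarded samples via supervised fine-tuning.
    return sft_on(model, kept)
```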

The paper also notes that while the current implementation uses a single model to generate and learn from self-edits, these roles could be separated in a “teacher-student” setup.

Instantiating SEAL in Specific Domains

The MIT team instantiated SEAL in two specific domains: knowledge integration and few-shot learning.

  • Knowledge Integration: The goal here is to effectively integrate information from articles into the model’s weights (a rough sketch of this setup follows the list).
  • Few-Shot Learning: This involves the model adapting to new tasks with very few examples.
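To make the knowledge-integration case concrete, here is a hedged sketch of a single task instance, assuming the self-edit takes the form of model-generated restatements of the passage. The prompt shown is hypothetical; the actual prompt and data format are specified in the paper and repository.

```python
from typing import Any, Callable, List, Tuple

def knowledge_integration_step(
    model: Any,
    passage: str,                          # C: passage to fold into the model's weights
    qa_pairs: List[Tuple[str, str]],       # tau: (question, answer) pairs about the passage
    generate: Callable[[Any, str], str],   # text generation with the current model
    finetune: Callable[[Any, str], Any],   # supervised fine-tuning on the self-edit
    answer: Callable[[Any, str], str],     # answer a question with the updated model, passage not in context
) -> float:
    # Self-edit: the model restates the passage as standalone training text (hypothetical prompt).
    self_edit = generate(
        model, f"List the key facts and implications of the following passage:\n{passage}"
    )

    # Inner update: fine-tune on the generated self-edit.
    updated = finetune(model, self_edit)

    # Reward: accuracy on the questions, answered without the passage in context.
    correct = sum(answer(updated, q).strip() == a.strip() for q, a in qa_pairs)
    return correct / len(qa_pairs)
```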

Experimental Results

The experimental results for both few-shot learning and knowledge integration demonstrate the effectiveness of the SEAL framework.

In few-shot learning, using a Llama-3.2-1B-Instruct model, SEAL significantly improved adaptation success rates, achieving 72.5% compared to 20% for models using basic self-edits without RL training, and 0% without adaptation. While still below “Oracle TTT” (an idealized baseline), this indicates substantial progress.

For knowledge integration, using a larger Qwen2.5-7B model to integrate new facts from SQuAD passages, SEAL consistently outperformed baseline methods. Training on synthetic data generated by the base Qwen2.5-7B model already showed notable improvements, and subsequent reinforcement learning boosted performance further. Accuracy also improved rapidly over successive outer RL iterations, often surpassing setups using GPT-4.1-generated data within just two iterations.

Qualitative examples from the paper illustrate how reinforcement learning leads to the generation of more detailed self-edits, resulting in improved performance.

While promising, the researchers also acknowledge some limitations of the SEAL framework, including aspects related to catastrophic forgetting, computational overhead, and context-dependent evaluation. These are discussed in detail in the original paper.

Original Paper: https://arxiv.org/pdf/2506.10943

Project Site: https://jyopari.github.io/posts/seal

Github Repo: https://github.com/Continual-Intelligence/SEAL

The post MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI first appeared on Synced.

🔗 Source: syncedreview.com


🤖 MAROKO133 Notes

This article is an automated roundup of several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes, a random theme awaits!

Author: timuna