📌 MAROKO133 Hot ai: Harvard Astronomer Says Mysterious Interstellar Object May Be Blasting Its Thrusters to Get Away From Us as Fast as Possible
Mysterious interstellar object 3I/ATLAS has reemerged from behind the Sun, allowing astronomers to once again get a glimpse at the rare visitor.
The object, which is generally believed by experts to be a comet that’s predominantly made up of carbon dioxide ice, is continuing on its highly eccentric trajectory, and is expected to make its closest pass of the Earth just days before Christmas on its way back out of our star system.
And judging by the latest data, 3I/ATLAS has survived its perihelion — or its closest approach to the Sun — largely intact, instead of breaking apart, as Harvard astronomer Avi Loeb had hypothesized in a blog post earlier this week.
New images of 3I/ATLAS, taken by the Nordic Optical Telescope on the Canary Islands, “show a single body, with no evidence for breakup following the perihelion passage two weeks earlier,” Loeb conceded in a Wednesday followup post.
The images also show 3I/ATLAS’ prominent “anti-tail,” an accumulation of jets that points towards the Sun, suspected to be made up of larger dust particles less affected by the Sun’s radiation pressure.
However, to Loeb that’s just one out of two possible scenarios. These jets could also be evidence of “thrusters on a technological spacecraft,” as he told NBC News on Monday.
If 3I/ATLAS really is a visitor from a technological civilization, a possibility Loeb has floated repeatedly, he suggests it may be trying to boost its exit from the solar system to a breakneck pace. (Let’s face it: getting away from Earth as rapidly as possible makes perfect sense these days.)
“Technological thrusters which point their exhaust towards the Sun would accelerate away from the Sun,” Loeb noted in his latest blog post. “This post-perihelion maneuver might be employed by a spacecraft that aims to gain speed rather than slow down through the gravitational assist from the Sun.”
It’s only one of several “anomalies” Loeb has catalogued to support his theory that 3I/ATLAS could be some sort of alien spacecraft visiting the solar system. Loeb had already discussed the object’s “anti-tail” in early September. The appendage was first made apparent in August images taken by NASA’s Hubble Space Telescope, but has grown in length since then.
Of course, most of his peers think it’s just a natural comet. Loeb’s far-fetched theory has led to plenty of skepticism from within the scientific community.
In a September 29 blog post, Pennsylvania State University astronomer Jason Wright refuted Loeb’s claim that 3I/ATLAS’ anti-tail was unique and could be alien technology, pointing out several previous observations of “similar sunward enhancement” caused by large, ejected dust grains that “don’t get swept up by the solar wind on the Sun-facing side of a comet.”
Wright also pointed to a 1974 paper that discussed the “anomalous tail of Comet Kohoutek,” an object that was first discovered the year prior, in 1973.
But Loeb isn’t ready to give up hope that we could be looking at an alien spacecraft.
According to his calculations, 3I/ATLAS could be far larger than previously thought, based on the huge amount of mass it’s shedding, with a surface area equivalent to that of a sphere 14.3 miles in diameter. That’s four times as large as his previous estimates.
“Alien-tech thrusters might employ yet higher exhaust speeds, reducing the required mass loss by several orders of magnitude and making the required fuel a small fraction of the spacecraft mass,” he determined in a previous blog post.
To Loeb, it’s a matter of keeping an open mind, even in light of overwhelming evidence that the object is a natural comet. Besides, if 3I/ATLAS were an alien mothership, there’s no telling what kind of risks it could pose to humanity.
“The foundation of science is the curiosity, the humility to learn,” he told NBC News. “Let’s just wait a few more weeks, we’ll figure it out, and let’s hope that there will be no gifts from this object for the holidays on Earth.”
More on 3I/ATLAS: The Mysterious Interstellar Object May Have Just Exploded
🔗 Source: futurism.com
📌 MAROKO133 Breaking ai: OpenAI experiment finds that sparse models could give AI builders the tools to debug neural networks
OpenAI researchers are experimenting with a new approach to designing neural networks, with the aim of making AI models easier to understand, debug, and govern. Sparse models, the company suggests, can give enterprises a better understanding of how these models make decisions.
Understanding how a model arrives at its responses, a big selling point of reasoning models for enterprises, can give organizations a level of trust when they turn to AI models for insights.
Rather than evaluating models by analyzing their post-training performance, the method has OpenAI scientists and researchers build in interpretability by training models with sparse circuits.
OpenAI notes that much of the opacity of AI models stems from how most models are designed, so to gain a better understanding of model behavior, researchers must create workarounds.
“Neural networks power today’s most capable AI systems, but they remain difficult to understand,” OpenAI wrote in a blog post. “We don’t write these models with explicit step-by-step instructions. Instead, they learn by adjusting billions of internal connections or weights until they master a task. We design the rules of training, but not the specific behaviors that emerge, and the result is a dense web of connections that no human can easily decipher.”
To enhance interpretability, OpenAI examined an architecture that trains untangled neural networks, making them simpler to understand. The team trained language models with an architecture similar to existing models such as GPT-2, using the same training scheme.
The result: improved interpretability.
The path toward interpretability
Understanding how models work, and how they arrive at their determinations, matters because those decisions have real-world impact, OpenAI says.
The company defines interpretability as “methods that help us understand why a model produced a given output.” There are several ways to achieve interpretability, including chain-of-thought interpretability, which reasoning models often leverage, and mechanistic interpretability, which involves reverse-engineering a model’s mathematical structure.
OpenAI focused on improving mechanistic interpretability, which it said “has so far been less immediately useful, but in principle, could offer a more complete explanation of the model’s behavior.”
“By seeking to explain model behavior at the most granular level, mechanistic interpretability can make fewer assumptions and give us more confidence. But the path from low-level details to explanations of complex behaviors is much longer and more difficult,” according to OpenAI.
Better interpretability allows for better oversight and gives early warning signs if the model’s behavior no longer aligns with policy.Â
OpenAI noted that improving mechanistic interpretability “is a very ambitious bet,” but its research on sparse networks has brought that goal closer.
How to untangle a model
To untangle the mess of connections a model makes, OpenAI first cut most of them. Transformer models like GPT-2 contain a vast number of connections, so the team “zeroed out” the bulk of these weights; each neuron then talks to only a select few others, and the remaining connections become more orderly.
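As a rough illustration of that idea, here is a minimal sketch, not OpenAI’s actual training code, of enforcing weight sparsity in PyTorch: a fixed binary mask zeroes out most entries of a layer’s weight matrix on every forward pass, so each output unit is wired to only a handful of inputs. The layer name, dimensions, and 5 percent density value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    """Linear layer whose connections are mostly forced to zero.

    A fixed binary mask keeps only a small fraction of weights "alive";
    everything else is zeroed on every forward pass, so each output unit
    talks to only a few inputs. Illustrative sketch, not OpenAI's code.
    """

    def __init__(self, in_features: int, out_features: int, density: float = 0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Randomly pick which ~5% of connections are allowed to be nonzero.
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masked weights contribute nothing and receive zero gradient.
        return x @ (self.weight * self.mask).t() + self.bias

# Usage: a drop-in stand-in for nn.Linear inside a small transformer block.
layer = SparseLinear(in_features=768, out_features=768, density=0.05)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```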
Next, the team ran “circuit tracing” on tasks to create groupings of interpretable circuits. The final step involved pruning the model “to obtain the smallest circuit which achieves a target loss on the target distribution,” according to OpenAI. The team targeted a loss of 0.15 to isolate the exact nodes and weights responsible for specific behaviors.
“We show that pruning our weight-sparse models yields roughly 16-fold smaller circuits on our tasks than pruning dense models of comparable pretraining loss. We are also able to construct arbitrarily accurate circuits at the cost of more edges. This shows that circuits for simple behaviors are substantially more disentangled and localizable in weight-sparse models than dense models,” the report said.
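To make the pruning step concrete, here is a deliberately naive sketch of such a loop, assuming the masked layers from the example above and a hypothetical eval_loss helper. OpenAI’s actual circuit-extraction procedure is more sophisticated, but the principle is the same: tentatively drop connections and keep each removal only if the loss on the target task stays at or below the 0.15 threshold.

```python
import torch

def prune_to_target_loss(model: torch.nn.Module, eval_loss, target_loss: float = 0.15):
    """Greedily zero out connections while the task loss stays under target.

    Assumes sparse layers expose a `mask` buffer (as in SparseLinear above)
    and that `eval_loss(model)` returns the loss on the target distribution.
    A naive, weight-by-weight illustration, not OpenAI's published method.
    """
    for name, buf in model.named_buffers():
        if not name.endswith("mask"):
            continue
        alive = buf.nonzero(as_tuple=False)  # indices of currently active connections
        for idx in alive:
            i, j = idx.tolist()
            buf[i, j] = 0.0                  # tentatively remove this connection
            if eval_loss(model) > target_loss:
                buf[i, j] = 1.0              # removal hurt too much, restore it
    return model
```

Whatever connections survive the loop form a small, inspectable circuit for the behavior being studied.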
Small models become easier to train
Although OpenAI managed to create sparse models that are easier to understand, these remain significantly smaller than most foundation models used by enterprises. Enterprises increasingly use small models, but frontier models, such as OpenAI’s flagship GPT-5.1, would still benefit from improved interpretability down the line.
Other model developers also aim to understand how their AI models think. Anthropic, which has been researching interpretability for some time, recently revealed that it had “hacked” Claude’s brain, and Claude noticed. Meta is also working to find out how reasoning models make their decisions.
As more enterprises turn to AI models to help make consequential decisions for their businesses, and eventually their customers, research into how models think could give organizations the clarity they need to trust those models.
🔗 Source: venturebeat.com
🤖 MAROKO133 Note
This article is an automated summary drawn from several trusted sources. We pick trending topics so you always stay up to date without missing out.
✅ Next update in 30 minutes, with a random theme to follow!