MAROKO133 AI Update: Researchers find that retraining only small parts of AI models can cut compute costs

📌 MAROKO133 Breaking AI: Researchers find that retraining only small parts of AI models can cut compute costs

Fine-tuning is an effective way to make a large language model (LLM) fit for purpose and grounded in enterprise data, but enterprises often find it comes at a cost: after fine-tuning, some models “forget” how to perform tasks they had already learned.

Research from the University of Illinois Urbana-Champaign proposes a new method for retraining models that avoids “catastrophic forgetting,” in which the model loses some of its prior knowledge. The paper focuses on two vision-language models, LLaVA and Qwen 2.5-VL, which generate text responses from image inputs.

The approach encourages enterprises to retrain only narrow parts of an LLM to avoid retraining the entire model and incurring a significant increase in compute costs. The team claims that catastrophic forgetting isn’t true memory loss, but rather a side effect of bias drift. 

“Training a new LMM can cost millions of dollars, weeks of time, and emit hundreds of tons of CO2, so finding ways to more efficiently and effectively update existing models is a pressing concern,” the team wrote in the paper. “Guided by this result, we explore tuning recipes that preserve learning while limiting output shift.”

The researchers focused on the multi-layer perceptron (MLP), the feed-forward block within each layer of the model that drives much of its internal decision-making.

Catastrophic forgetting 

The researchers first set out to verify that catastrophic forgetting actually occurs in these models, and to pin down its cause.

To do this, they created a set of target tasks for the models to complete. The models were then fine-tuned on those tasks and evaluated to determine whether the tuning led to substantial forgetting. But as the process went on, the researchers found that the models recovered some of their abilities.

“We also noticed a surprising result, that the model performance would drop significantly in held out benchmarks after training on the counting task, it would mostly recover on PathVQA, another specialized task that is not well represented in the benchmarks,” they said. “Meanwhile, while performing the forgetting mitigation experiments, we also tried separately tuning only the self-attention projection (SA Proj) or MLP layers, motivated by the finding that tuning only the LLM was generally better than tuning the full model. This led to another very surprising result – that tuning only self-attention projection layers led to very good learning of the target tasks with no drop in performance in held out tasks, even after training all five target tasks in a sequence.”

The researchers said they believe that “what looks like forgetting or interference after fine-tuning on a narrow target task is actually bias in the output distribution due to the task distribution shift.”

Narrow retraining

That finding turned out to be the key to the experiment. The researchers noted that tuning the MLP increases the likelihood of “outputting numeric tokens and a highly correlated drop in held out task accuracy.” This suggested that the apparent loss of knowledge is temporary: the model's outputs drift toward the new task rather than its prior knowledge being erased.

“To avoid biasing the output distribution, we tune the MLP up/gating projections while keeping the down projection frozen, and find that it achieves similar learning to full MLP tuning with little forgetting,” the researchers said. 
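In practice, that recipe amounts to freezing the model and selectively unfreezing a handful of weight matrices. Below is a minimal PyTorch sketch, assuming a LLaMA/Qwen-style Hugging Face checkpoint whose MLP layers expose gate_proj, up_proj, and down_proj parameters; the checkpoint name and selection logic are illustrative, not the researchers' released code.

```python
# A minimal sketch of the selective-tuning recipe: freeze the whole
# model, then unfreeze only the MLP up/gating projections while the
# down projection (and everything else) stays frozen. The parameter
# names ("gate_proj", "up_proj", "down_proj") follow LLaMA/Qwen-style
# Hugging Face checkpoints and are an assumption for other models.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # illustrative choice

for param in model.parameters():
    param.requires_grad = False  # freeze everything first

trainable = 0
for name, param in model.named_parameters():
    # tune only the up/gating projections; down_proj remains frozen
    # to limit shift in the output distribution
    if "up_proj" in name or "gate_proj" in name:
        param.requires_grad = True
        trainable += param.numel()

print(f"trainable parameters: {trainable:,}")
```

The same pattern covers the self-attention variant the team describes: swap the name filter to the attention projection matrices (q_proj, k_proj, v_proj, o_proj in the same checkpoint family) to tune only the SA Proj layers.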

This points to a simpler, more reproducible method for fine-tuning a model.

By focusing on a narrow segment of the model, rather than a wholesale retraining, enterprises can cut compute costs. It also allows better control of output drift. 

However, the research covers only two models, both vision-language systems. The researchers noted that, due to limited resources, they were unable to test the approach on other models.

Still, they believe their findings may extend to other LLMs, especially across different modalities.

🔗 Source: venturebeat.com


📌 MAROKO133 AI Exclusive: This new AI technique creates ‘digital twin’ consumers that could reshape market research

A new research paper quietly published last week outlines a breakthrough method that allows large language models (LLMs) to simulate human consumer behavior with startling accuracy, a development that could reshape the multi-billion-dollar market research industry. The technique promises to create armies of synthetic consumers who can provide not just realistic product ratings, but also the qualitative reasoning behind them, at a scale and speed currently unattainable.

For years, companies have sought to use AI for market research, but have been stymied by a fundamental flaw: when asked to provide a numerical rating on a scale of 1 to 5, LLMs produce unrealistic and poorly distributed responses. A new paper, "LLMs Reproduce Human Purchase Intent via Semantic Similarity Elicitation of Likert Ratings," submitted to the preprint server arXiv on October 9, proposes an elegant solution that sidesteps this problem entirely.

The international team of researchers, led by Benjamin F. Maier, developed a method they call semantic similarity rating (SSR). Instead of asking an LLM for a number, SSR prompts the model for a rich, textual opinion on a product. This text is then converted into a numerical vector — an "embedding" — and its similarity is measured against a set of pre-defined reference statements. For example, a response of "I would absolutely buy this, it's exactly what I'm looking for" would be semantically closer to the reference statement for a "5" rating than to the statement for a "1."
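The core of SSR takes only a few lines of code. Here is a hedged sketch of the idea using the sentence-transformers library; the embedding model, the wording of the reference statements, and the nearest-anchor mapping are illustrative assumptions, not the paper's exact setup (the authors could, for example, spread similarity mass across all five anchors instead of picking the closest one).

```python
# A minimal sketch of semantic similarity rating (SSR): embed a free-text
# opinion and score it against one reference statement per Likert point.
# Model choice and anchor wording are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works

references = [
    "I would definitely not buy this product.",   # anchor for rating 1
    "I would probably not buy this product.",     # anchor for rating 2
    "I might or might not buy this product.",     # anchor for rating 3
    "I would probably buy this product.",         # anchor for rating 4
    "I would definitely buy this product.",       # anchor for rating 5
]
ref_vecs = embedder.encode(references, normalize_embeddings=True)

def ssr_rating(llm_response: str) -> int:
    """Map a free-text purchase-intent response to a 1-5 Likert rating
    via cosine similarity against the reference statements."""
    vec = embedder.encode([llm_response], normalize_embeddings=True)[0]
    sims = ref_vecs @ vec  # cosine similarities (vectors are unit-normalized)
    return int(np.argmax(sims)) + 1

print(ssr_rating("I would absolutely buy this, it's exactly what I'm looking for."))  # likely maps to 5
```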

The results are striking. Tested against a massive real-world dataset from a leading personal care corporation — comprising 57 product surveys and 9,300 human responses — the SSR method achieved 90% of human test-retest reliability. Crucially, the distribution of AI-generated ratings was statistically almost indistinguishable from the human panel. The authors state, "This framework enables scalable consumer research simulations while preserving traditional survey metrics and interpretability."
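As a sense check on what “statistically almost indistinguishable” means, a goodness-of-fit test over the five rating buckets is one way to compare the two distributions; the counts and the choice of a chi-square test below are illustrative assumptions, not figures from the paper.

```python
# A hedged sketch of comparing synthetic vs. human Likert distributions.
# All counts are invented for illustration; the paper's tests may differ.
import numpy as np
from scipy.stats import chisquare

human = np.array([120, 340, 910, 1480, 650])  # hypothetical human counts, ratings 1-5
synth = np.array([135, 325, 895, 1500, 645])  # hypothetical synthetic counts

# Rescale the human distribution to the synthetic total so the test
# compares shapes rather than sample sizes.
expected = human / human.sum() * synth.sum()
stat, p = chisquare(f_obs=synth, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")  # a large p-value means the shapes match
```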

A timely solution as AI threatens survey integrity

This development arrives at a critical time, as the integrity of traditional online survey panels is increasingly under threat from AI. A 2024 analysis from the Stanford Graduate School of Business highlighted a growing problem of human survey-takers using chatbots to generate their answers. These AI-generated responses were found to be "suspiciously nice," overly verbose, and lacking the "snark" and authenticity of genuine human feedback, leading to what researchers called a "homogenization" of data that could mask serious issues like discrimination or product flaws.

Maier's research offers a starkly different approach: instead of fighting to purge contaminated data, it creates a controlled environment for generating high-fidelity synthetic data from the ground up.

"What we're seeing is a pivot from defense to offense," said one analyst not affiliated with the study. "The Stanford paper showed the chaos of uncontrolled AI polluting human datasets. This new paper shows the order and utility of controlled AI creating its own datasets. For a Chief Data Officer, this is the difference between cleaning a contaminated well and tapping into a fresh spring."

From text to intent: The technical leap behind the synthetic consumer

The technical validity of the new method hinges on the quality of the text embeddings, a concept explored in a 2022 paper in EPJ Data Science. That research argued for a rigorous "construct validity" framework to ensure that text embeddings — the numerical representations of text — truly "measure what they are supposed to." 

The success of the SSR method suggests its embeddings effectively capture the nuances of purchase intent. For this new technique to be widely adopted, enterprises will need to be confident that the underlying models are not just generating plausible text, but are mapping that text to scores in a way that is robust and meaningful.

The approach also represents a significant leap from prior research, which has largely focused on using text embeddings to analyze and predict ratings from existing online reviews. A 2022 study, for example, evaluated the performance of models like BERT and word2vec in predicting review scores on retail sites, finding that newer models like BERT performed better for general use. The new research moves beyond analyzing existing data to generating novel, predictive insights before a product even hits the market.

The dawn of the digital focus group

For technical decision-makers, the implications are profound. The ability to spin up a "digital twin" of a target consumer segment and test product concepts, ad copy, or packaging variations in a matter of hours could drastically accelerate innovation cycles. 

As the paper notes, these synthetic respondents also provide "rich qualitative feedback explaining their ratings," offering a treasure trove of data for product development that is both scalable and interpretable.

But the business case extends beyond speed and scale. Consider the economics: a traditional survey panel for a national product launch might cost tens of thousands of dollars and take weeks to field. An SSR-based simulation could deliver comparable insights in a fraction of the time, at a fraction of the cost, and with the ability to iterate instantly based on findings. For companies in fast-moving consumer goods categories — where the window between concept and shelf can determine market leadership — this velocity advantage could be decisive.

There are, of course, caveats. The method was validated on personal care products; its performance on complex B2B purchasing decisions, luxury goods, or culturally specific products remains unproven. And while the paper demonstrates that SSR can replicate aggregate human behavior, it does not claim to predict individual consumer choices. The technique works at the population level, not the person level — a distinction that matters greatly for applications like personalized marketing.

Yet even with these limitations, the research is a watershed. While the era of human-only focus groups is far from over, this paper provides the most compelling evidence yet that their synthetic counterparts are ready for business. The question is no longer whether AI can simulate consumer sentiment, but whether enterprises can move fast enough to capitalize on it before their competitors do.

🔗 Source: venturebeat.com


🤖 MAROKO133 Note

This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random topic awaits!

Author: timuna