MAROKO133 Update ai: GPT-5.2 first impressions: a powerful update, especially for business

OpenAI has officially released GPT-5.2, and the reactions from early testers (whom OpenAI seeded with the model several days, and in some cases weeks, before the public release) paint a two-toned picture: it is a monumental leap forward for deep, autonomous reasoning and coding, yet a potentially underwhelming "incremental" update for casual conversationalists.

Following early access periods and today's broader rollout, executives, developers, and analysts have taken to X (formerly Twitter) and company blogs to share their first testing results.

Here is a roundup of the first reactions to OpenAI’s latest flagship model.

"AI as a serious analyst"

The strongest praise for GPT-5.2 centers on its ability to handle "hard problems" that require extended thinking time.

Matt Shumer, CEO of HyperWriteAI, did not mince words in his review, calling GPT-5.2 Pro "the best model in the world."

Shumer highlighted the model's tenacity, noting that "it thinks for over an hour on hard problems. And it nails tasks no other model can touch."

This sentiment was echoed by Allie K. Miller, an AI entrepreneur and former AWS executive. Miller described the model as a step toward "AI as a serious analyst" rather than a "friendly companion."

"The thinking and problem-solving feel noticeably stronger," Miller wrote on X. "It gives much deeper explanations than I’m used to seeing. At one point it literally wrote code to improve its own OCR in the middle of a task."

Enterprise gains: Box reports distinct performance jumps

For the enterprise sector, the update appears to be even more significant.

Aaron Levie, CEO of Box, revealed on X that his company has been testing GPT-5.2 in early access. Levie reported that the model performs "7 points better than GPT-5.1" on their expanded reasoning tests, which approximate real-world knowledge work in financial services and life sciences.

"The model performed the majority of the tasks far faster than GPT-5.1 and GPT-5 as well," Levie noted, confirming that Box AI will be rolling out GPT-5.2 integration shortly.

Rutuja Rajwade, a Senior Product Marketing Manager at Box, expanded on this in a company blog post, citing specific latency improvements.

"Complex extraction" tasks dropped from 46 seconds on GPT-5 to just 12 seconds with GPT-5.2.

Rajwade also noted a jump in reasoning capabilities for the Media and Entertainment vertical, rising from 76% accuracy in GPT-5.1 to 81% in the new model.

A "serious leap" for coding and simulation

Developers are finding GPT-5.2 particularly potent for "one-shot" generation of complex code structures.

Pietro Schirano, CEO of magicpathai, shared a video of the model building a full 3D graphics engine in a single file with interactive controls. "It’s a serious leap forward in complex reasoning, math, coding, and simulations," Schirano posted. "The pace of progress is unreal."

Similarly, Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and a longtime AI power user and writer, demonstrated the model's ability to create a visually complex shader (an infinite neo-gothic city in a stormy ocean) from a single prompt.

The Agentic Era: Long-running autonomy

Perhaps the most functional shift is the model's ability to stay on task for hours without losing the thread.

Dan Shipper, CEO of the AI-focused newsletter Every, reported that the model successfully performed a profit and loss (P&L) analysis that required it to work autonomously for two hours. "It did a P&L analysis where it worked for 2 hours and gave me great results," Shipper wrote.

However, Shipper also noted that for day-to-day tasks, the update feels "mostly incremental."

In an article for Every, Katie Parrott wrote that while GPT-5.2 excels at instruction following, it is "less resourceful" than competitors like Claude Opus 4.5 in certain contexts, such as deducing a user's location from email data.

The downsides: Speed and Rigidity

Despite the reasoning capabilities, the "feel" of the model has drawn critique.

Shumer highlighted a significant "speed penalty" when using the model's Thinking mode. "In my experience the Thinking mode is very slow for most questions," Shumer wrote in his deep-dive review. "I almost never use Instant."

Allie Miller also pointed out issues with the model's default behavior. "The downside is tone and format," she noted. "The default voice felt a bit more rigid, and the length/markdown behavior is extreme: a simple question turned into 58 bullets and numbered points."

The Verdict

The early reaction suggests that GPT-5.2 is a tool optimized for power users, developers, and enterprise agents rather than casual chat. As Shumer summarized in his review: "For deep research, complex reasoning, and tasks that benefit from careful thought, GPT-5.2 Pro is the best option available right now."

However, for users seeking creative writing or quick, fluid answers, models like Claude Opus 4.5 remain strong competitors. "My favorite model remains Claude Opus 4.5," Miller admitted, "but my complex ChatGPT work will get a nice incremental boost."

🔗 Source: venturebeat.com


📌 MAROKO133 Update ai: Google’s new framework helps AI agents spend their compute

In a new paper that studies tool-use in large language model (LLM) agents, researchers at Google and UC Santa Barbara have developed a framework that enables agents to make more efficient use of tool and compute budgets. The researchers introduce two new techniques: a simple "Budget Tracker" and a more comprehensive framework called "Budget Aware Test-time Scaling." These techniques make agents explicitly aware of their remaining reasoning and tool-use allowance.

As AI agents rely on tool calls to work in the real world, test-time scaling has become less about smarter models and more about controlling cost and latency.

For enterprise leaders and developers, budget-aware scaling techniques offer a practical path to deploying effective AI agents without facing unpredictable costs or diminishing returns on compute spend.

The challenge of scaling tool use

Traditional test-time scaling focuses on letting models "think" longer. However, for agentic tasks like web browsing, the number of tool calls directly determines the depth and breadth of exploration.

This introduces significant operational overhead for businesses. "Tool calls such as webpage browsing results in more token consumption, increases the context length and introduces additional time latency," Zifeng Wang and Tengxiao Liu, co-authors of the paper, told VentureBeat. "Tool calls themselves introduce additional API costs."

The researchers found that simply granting agents more test-time resources does not guarantee better performance. "In a deep research task, if the agent has no sense of budget, it often goes down blindly," Wang and Liu explained. "It finds one somewhat related lead, then spends 10 or 20 tool calls digging into it, only to realize that the entire path was a dead end."

Optimizing resources with Budget Tracker

To evaluate how they can optimize tool-use budgets, the researchers first tried a lightweight approach called "Budget Tracker." This module acts as a plug-in that provides the agent with a continuous signal of resource availability, enabling budget-aware tool use.

The team hypothesized that "providing explicit budget signals enables the model to internalize resource constraints and adapt its strategy without requiring additional training."

Budget Tracker operates purely at the prompt level, which makes it easy to implement; the paper provides full details on the prompts it uses.

In Google's implementation, the tracker provides a brief policy guideline describing the budget regimes and corresponding recommendations for using tools. At each step of the response process, Budget Tracker makes the agent explicitly aware of its resource consumption and remaining budget, enabling it to condition subsequent reasoning steps on the updated resource state.
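The paper publishes the exact prompts, but the mechanism is simple enough to sketch in code. The snippet below is a minimal, hypothetical illustration of a prompt-level tracker: the `BudgetTracker` class, its field names, the regime threshold, and the wording of the injected message are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a prompt-level budget tracker (hypothetical names and
# thresholds, not the paper's exact prompts). It tallies tool calls and
# injects the current budget state into the agent's context at every step.

from dataclasses import dataclass, field

POLICY_GUIDELINE = (
    "Budget policy: when the remaining budget is LOW, avoid speculative tool "
    "calls and consolidate existing evidence; when it is HIGH, explore "
    "alternative leads before committing to a single path."
)

@dataclass
class BudgetTracker:
    max_tool_calls: int
    used: dict = field(default_factory=lambda: {"search": 0, "browse": 0})

    def record(self, tool: str) -> None:
        """Call once per tool invocation to update resource consumption."""
        self.used[tool] = self.used.get(tool, 0) + 1

    @property
    def remaining(self) -> int:
        return self.max_tool_calls - sum(self.used.values())

    def status_message(self) -> str:
        """Budget signal appended to the prompt before each reasoning step."""
        regime = "LOW" if self.remaining < 0.2 * self.max_tool_calls else "HIGH"
        return (
            f"{POLICY_GUIDELINE}\n"
            f"Tool calls used so far: {self.used}. "
            f"Remaining budget: {self.remaining} calls (regime: {regime})."
        )
```

Because the signal lives entirely in the prompt, a tracker like this works with any hosted model without fine-tuning: a ReAct-style loop simply calls `tracker.record(...)` after each tool invocation and appends `tracker.status_message()` to the context before the next reasoning step.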

To test this, the researchers experimented with two paradigms: sequential scaling, where the model iteratively refines its output, and parallel scaling, where multiple independent runs are conducted and aggregated. They ran experiments on search agents equipped with search and browse tools following a ReAct-style loop. ReAct (Reasoning + Acting) is a popular method where the model alternates between internal thinking and external actions. To trace a true cost-performance scaling trend, they developed a unified cost metric that jointly accounts for the costs of both internal token consumption and external tool interactions.
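The paper defines its own weighting for this metric; the sketch below only illustrates the general shape of such a function, and the per-unit prices are placeholder assumptions rather than the authors' coefficients.

```python
# Illustrative unified cost metric: folds internal token consumption and
# external tool interactions into one dollar figure. All prices below are
# placeholder assumptions, not the coefficients used in the paper.

def unified_cost(
    input_tokens: int,
    output_tokens: int,
    search_calls: int,
    browse_calls: int,
    price_per_1k_input: float = 0.00125,  # assumed token price ($/1K tokens)
    price_per_1k_output: float = 0.01,    # assumed token price ($/1K tokens)
    price_per_search: float = 0.005,      # assumed search API price ($/call)
    price_per_browse: float = 0.001,      # assumed browse price ($/call)
) -> float:
    token_cost = (
        input_tokens / 1000 * price_per_1k_input
        + output_tokens / 1000 * price_per_1k_output
    )
    tool_cost = search_calls * price_per_search + browse_calls * price_per_browse
    return token_cost + tool_cost
```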

They tested Budget Tracker on three information-seeking QA datasets requiring external search, including BrowseComp and HLE-Search, using models such as Gemini 2.5 Pro, Gemini 2.5 Flash, and Claude Sonnet 4. The experiments show that this simple plug-in improves performance across various budget constraints.

"Adding Budget Tracker achieves comparable accuracy using 40.4% fewer search calls, 19.9% fewer browse calls, and reducing overall cost … by 31.3%," the authors told VentureBeat. Finally, Budget Tracker continued to scale as the budget increased, whereas plain ReAct plateaued after a certain threshold.

BATS: A comprehensive framework for budget-aware scaling

To further improve tool-use resource optimization, the researchers introduced Budget Aware Test-time Scaling (BATS), a framework designed to maximize agent performance under any given budget. BATS maintains a continuous signal of remaining resources and uses this information to dynamically adapt the agent's behavior as it formulates its response.

BATS uses multiple modules to orchestrate the agent's actions. A planning module adjusts stepwise effort to match the current budget, while a verification module decides whether to "dig deeper" into a promising lead or "pivot" to alternative paths based on resource availability.

Given an information-seeking question and a tool-call budget, BATS begins by using the planning module to formulate a structured action plan and decide which tools to invoke. When tools are invoked, their responses are appended to the reasoning sequence to provide the context with new evidence. When the agent proposes a candidate answer, the verification module verifies it and decides whether to continue the current sequence or initiate a new attempt with the remaining budget.

The iterative process ends when budgeted resources are exhausted, at which point an LLM-as-a-judge selects the best answer across all verified answers. Throughout the execution, the Budget Tracker continuously updates both resource usage and remaining budget at every iteration.
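Taken together, the control flow can be sketched as a single loop. In the sketch below, the planner, actor, verifier, and judge are passed in as callables standing in for the paper's planning, verification, and LLM-as-a-judge modules, and `BudgetTracker` is the one sketched earlier; only the overall structure follows the paper's description, not the authors' implementation.

```python
# Hypothetical sketch of the BATS control loop. plan, act, verify, and judge
# are caller-supplied callables standing in for the paper's planning,
# verification, and LLM-as-a-judge modules.

def bats(question, tracker, plan, act, verify, judge):
    """Run budget-aware test-time scaling until the tool budget is exhausted."""
    verified_answers = []
    context = [question]
    while tracker.remaining > 0:
        context.append(tracker.status_message())    # refresh the budget signal
        step = plan(context, tracker.remaining)     # budget-aware next action
        if step.is_tool_call:
            tracker.record(step.tool)
            context.append(act(step))               # append tool output as evidence
        else:                                       # candidate answer proposed
            verdict = verify(step.answer, context)  # "dig deeper" or "pivot"?
            if verdict.accept:
                verified_answers.append(step.answer)
            if verdict.pivot:
                context = [question]                # new attempt, leftover budget
    # Budget exhausted: the LLM-as-a-judge picks the best verified answer.
    return judge(question, verified_answers)
```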

The researchers tested BATS on the BrowseComp, BrowseComp-ZH, and HLE-Search benchmarks against baselines including standard ReAct and various training-based agents. Their experiments show that BATS achieves higher performance while using fewer tool calls and incurring lower overall cost than competing methods. Using Gemini 2.5 Pro as the backbone, BATS achieved 24.6% accuracy on BrowseComp compared to 12.6% for standard ReAct, and 27.0% on HLE-Search compared to 20.5% for ReAct.

BATS not only improves effectiveness under budget constraints but also yields better cost–performance trade-offs. For example, on the BrowseComp dataset, BATS achieved higher accuracy at a cost of approximately 23 cents compared to a parallel scaling baseline that required over 50 cents to achieve a similar result.

According to the authors, this efficiency makes previously expensive workflows viable. "This unlocks a range of long-horizon, data-intensive enterprise applications… such as complex codebase maintenance, due-diligence investigations, competitive landscape research, compliance audits, and multi-step document analysis," they said.

As enterprises look to deploy agents that manage their own resources, the ability to balance accuracy with cost will become a critical design requirement.

"We believe the relationship between reasoning and economics will become inseparable," Wang and Liu said. "In the future, [models] must reason about value."

🔗 Source: venturebeat.com


🤖 MAROKO133 Notes

This article is an automated summary compiled from several trusted sources. We pick trending topics so you stay up to date without missing out.

✅ Next update in 30 minutes: a random theme awaits!

Author: timuna