📌 MAROKO133 Exclusive ai: Google Says People Are Copying Its AI Without Its Permission, Much Like It Scraped Everybody's Data Without Asking to Create Its AI in the First Place

Google has relied on a tremendous amount of material without permission to train its Gemini AI models. The company, alongside many of its competitors in the AI space, has been indiscriminately scraping the internet for content, without compensating rightsholders, racking up many copyright infringement lawsuits along the way.

But when it comes to its own tech being copied, Google has no problem pointing fingers. This week, the company accused "commercially motivated" actors of trying to clone its Gemini AI.

In a Thursday report, Google complained it had come under "distillation attacks," with agents querying Gemini up to 100,000 times to "extract" the underlying model — the convoluted AI industry equivalent of copying somebody's homework, basically.

Google called the attacks a "method of intellectual property theft that violates Google's terms of service" — which, let's face it, is a glaring double standard given its callous approach to scraping other IP without remuneration.

Google remained vague about whom it had identified as the culprits, beyond pointing to "private sector entities" and "researchers seeking to clone proprietary logic."

The stakes are high, as companies continue to pour tens of billions of dollars into AI infrastructure to make models more powerful. It's no wonder Google is scared of losing its competitive edge as offerings start to converge at the head of the pack. The output of one pioneering model has become almost indistinguishable from another's, forcing companies to find new ways to differentiate their products.

It's far from the first time the subject of model distillation has caused drama. Chinese startup DeepSeek rattled Silicon Valley to its core in early 2025 after showing off a far cheaper and more efficient AI model. At the time, OpenAI suggested DeepSeek may have broken its terms of service by distilling its AI models.

The ChatGPT maker quickly became the subject of widespread mockery following the comments, with netizens accusing the company of hypocrisy, pointing out that OpenAI itself had indiscriminately ripped off other people's work for many years.

Google's latest troubles likely won't be the last time we hear about smaller actors trying to extract mainstream AI models through distillation.

Google's Threat Intelligence Group chief analyst John Hultquist told NBC News that "we're going to be the canary in the coal mine for far more incidents."

But whether AI companies will be able to defend themselves in the coming months and years remains uncertain. They are significantly exposed because their models are available for public use.

"Historically, adversaries seeking to steal high-tech capabilities used conventional computer-enabled intrusion operations to compromise organizations and steal data containing trade secrets," Google's report reads. "For many AI technologies where LLMs are offered as services, this approach is no longer required; actors can use legitimate API access to attempt to 'clone' select AI model capabilities."
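In rough terms, the API-based "cloning" Google describes boils down to harvesting a teacher model's outputs as supervised training data for a student model. The sketch below is illustrative only: `query_teacher` is a hypothetical stand-in for a hosted model API, not any real provider endpoint.

```python
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for a hosted model API; in a real distillation
    # attempt this would be a legitimate, metered API call to the provider.
    return f"teacher answer for: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Collect prompt/completion pairs suitable for fine-tuning a student model."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

# At the scale Google reports (roughly 100,000 prompts), this same loop is
# simply run against a much larger, task-diverse prompt list.
dataset = build_distillation_set([f"task {i}" for i in range(5)])
print(json.dumps(dataset[0]))
```

Nothing here is specific to Gemini; the point is that ordinary, permitted API traffic and an extraction attempt can look structurally identical, which is why detection hinges on query volume and patterns.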

In one case study, Google found attackers using "over 100,000 prompts" in an apparent "attempt to replicate Gemini's reasoning ability in non-English target languages across a wide variety of tasks."

However, the company's systems "recognized this attack in real time and lowered the risk of this particular attack."

It's a particularly vulnerable point in time as AI companies are desperately trying to find a way of monetizing the tech through a variety of revenue drivers, from pricey subscription models to ads. With far lower upfront costs, it's entirely possible that much smaller entities could break through, not unlike what we saw with DeepSeek in early 2025.

More on distillation: AI Companies Tremble as They Realize It's Easy for Competitors to Steal Their Super-Expensive Work for Pennies on the Dollar

The post Google Says People Are Copying Its AI Without Its Permission, Much Like It Scraped Everybody's Data Without Asking to Create Its AI in the First Place appeared first on Futurism.

🔗 Source: futurism.com


📌 MAROKO133 Breaking ai: Anthropic launches Cowork, a Claude Desktop agent

Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users — and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools.

"Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers — Anthropic's power-user tier priced between $100 and $200 per month — through the macOS desktop application.

For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding.

How developers using a coding tool for vacation research inspired Anthropic's latest product

The genesis of Cowork lies in Anthropic's recent success with the developer community. In early 2025, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were pressing the coding tool into non-coding labor.

According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks.

"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising — the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model."

Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from its developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone — not just developers — to work with Claude in the very same way."

Inside the folder-based architecture that lets Claude read, edit, and create files on your computer

Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones.
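That folder-scoped access can be pictured as a simple path-containment check. The helper below is a hypothetical illustration of the idea (it is not Anthropic's implementation): any requested path that resolves outside the user-designated folder is rejected.

```python
from pathlib import Path

def is_inside_sandbox(requested: str, sandbox: str) -> bool:
    # Resolve symlinks and ".." segments before comparing, so a path like
    # "../../etc/passwd" cannot escape the designated folder.
    root = Path(sandbox).resolve()
    target = (root / requested).resolve()
    return target == root or root in target.parents

print(is_inside_sandbox("notes/todo.txt", "/tmp/cowork"))    # True
print(is_inside_sandbox("../../etc/passwd", "/tmp/cowork"))  # False
```

The design point is that the trust boundary is a directory the user explicitly chooses, rather than the whole machine: everything inside it is fair game for the agent, everything outside it is invisible.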

Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents.

"In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes."

The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously — a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker."
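As a rough mental model of that loop, the sketch below stubs out the planning and execution steps that a real agent would delegate to model and tool calls; the function names are purely illustrative, not part of any Anthropic API.

```python
def plan_task(task: str) -> list[str]:
    # Stub: a real agent would ask the model to decompose the task.
    return [f"{task}: step {i}" for i in range(1, 4)]

def execute_step(step: str) -> dict:
    # Stub: a real agent would run a tool call here (read/edit/create a file).
    return {"step": step, "ok": True}

def run_agent(task: str, max_retries: int = 2) -> dict:
    results = []
    for step in plan_task(task):
        for _attempt in range(max_retries + 1):
            result = execute_step(step)
            if result["ok"]:  # self-check: accept the step or retry it
                results.append(result)
                break
        else:
            # Out of retries: surface the roadblock instead of guessing.
            return {"status": "needs_clarification", "blocked_on": step}
    return {"status": "done", "results": results}

print(run_agent("sort downloads folder")["status"])  # done
```

The key contrast with a chat interface is the outer loop: the agent keeps iterating toward a finished artifact and only comes back to the user when it is genuinely blocked.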

The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks."

The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork

Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built — highlighting a recursive feedback loop where AI tools are being used to build better AI tools.

During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half.

Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last… week and a half?!"

This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?"

The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion — a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not.

Connectors, browser automation, and skills extend Cowork's reach beyond the local file system

Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors — tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions.

Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser…

Content shortened automatically.

🔗 Source: venturebeat.com


🤖 MAROKO133 Note

This article is an automated summary of several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes — a random theme awaits!

Author: timuna