MAROKO133 Hot ai: Anthropic launches Cowork, a Claude Desktop agent that works in your files

Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users. According to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools.

"Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers β€” Anthropic's power-user tier priced between $100 and $200 per month β€” through the macOS desktop application.

For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding.

How developers using a coding tool for vacation research inspired Anthropic's latest product

The genesis of Cowork lies in Anthropic's recent success with the developer community. In late 2024, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were forcing the coding tool to perform non-coding labor.

According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks.

"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising β€” the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model."

Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from their developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone — not just developers — to work with Claude in the very same way."

Inside the folder-based architecture that lets Claude read, edit, and create files on your computer

Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones.
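The core of this access model is path containment: every file operation must stay inside the user-designated folder. The snippet below is an illustrative sketch of that idea only, not Anthropic's implementation; the `FolderSandbox` class and its method names are hypothetical.

```python
from pathlib import Path

class FolderSandbox:
    """Illustrative sandbox: confine reads/writes to one designated folder."""

    def __init__(self, root: str):
        self.root = Path(root).resolve()

    def _check(self, relative: str) -> Path:
        # Resolve the full path and refuse anything that escapes the
        # designated folder (e.g. "../secrets" or an absolute path).
        p = (self.root / relative).resolve()
        if not p.is_relative_to(self.root):
            raise PermissionError(f"{relative!r} is outside the sandbox")
        return p

    def read(self, relative: str) -> str:
        return self._check(relative).read_text()

    def write(self, relative: str, content: str) -> None:
        self._check(relative).write_text(content)
```

The design choice mirrors what the article describes: the agent gets broad freedom (read, edit, create) inside the folder, and no access at all outside it.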

Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents.

"In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes."

The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously, a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker."
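The plan-execute-verify-escalate cycle described above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not Anthropic's implementation; the `plan_fn`, `execute_fn`, and `verify_fn` callables stand in for model calls.

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    result: str = ""
    done: bool = False

def run_agentic_loop(task, plan_fn, execute_fn, verify_fn, max_retries=2):
    """Plan a task, execute each step, self-check, retry, then escalate."""
    steps = [Step(d) for d in plan_fn(task)]  # formulate a plan
    for step in steps:
        for _ in range(max_retries + 1):
            step.result = execute_fn(step.description)    # act on the step
            if verify_fn(step.description, step.result):  # check its own work
                step.done = True
                break
        if not step.done:
            # The "ask for clarification" branch: a real agent would pause
            # here and surface a question to the user instead of raising.
            raise RuntimeError(f"Need clarification on: {step.description!r}")
    return steps
```

The queued-tasks workflow the article mentions then amounts to iterating `run_agentic_loop` over a list of task strings, with the user checking back on results rather than supervising each exchange.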

The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks."

The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork

Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built, highlighting a recursive feedback loop where AI tools are being used to build better AI tools.

During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half.

Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last… week and a half?!"

This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?"

The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion, a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not.

Connectors, browser automation, and skills extend Cowork's reach beyond the local file system

Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors: tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions.

Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser…

Content automatically truncated.

🔗 Source: venturebeat.com


📌 MAROKO133 Breaking ai: New Law Would Let Grok Victims Sue Creeps Who Generated Nonconsensual Nudes

On Tuesday, the US Senate passed a bill that would allow victims to sue individuals who use AI models like Grok to generate nonconsensual nudes and other sexually explicit images.

Dubbed the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, the bill expands on the Take It Down Act, passed last year, which made it illegal to distribute nonconsensual intimate images and required social media companies to remove them within 48 hours. The DEFIANCE Act goes further by empowering victims to go after the people responsible for generating the images, including seeking damages and imposing restraining orders, Bloomberg noted.

The bill was put forth by Senator Dick Durbin (D-IL) and passed unanimously.

"Give to the victims their day in court to hold those responsible who continue to publish these images at their expense," Durbin said in a speech on the Senate floor, via The Hill. "Today, we are one step closer to making this a reality."

The bill comes as Elon Musk's X is facing vociferous public backlash after his AI chatbot Grok was used to generate thousands of nudes and sexually explicit images of both adults and children whose photos had been posted to the platform. The volume of these images was so overwhelming that the AI content analysis firm Copyleaks estimated the bot was generating a nonconsensually sexualized image every single minute.

The lack of response from xAI, the Musk-owned AI startup that develops Grok, has only further catalyzed the outrage from the public and regulators alike, to say nothing of Musk's blasé attitude to it all. He only indirectly addressed the pornographic generations without ever explicitly mentioning them by asserting in a post that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." He also joked that the nonconsensual undressing "trend" was "way funnier" than the trends started by other AI chatbots.

If Musk has failed to comprehend the gravity of the situation, governments have not. Some countries, including Malaysia and Indonesia, have moved to ban access to the platform entirely. UK prime minister Keir Starmer warned that he would bring the hammer down on X, while the country's communications regulator, Ofcom, launched an official investigation into the company.

"Imagine losing control of your own likeness or identity," said Durbin, per The Hill. "Imagine that happening to you when you were in high school. Imagine how powerless victims feel when they cannot remove illicit content, cannot prevent it from being reproduced repeatedly and cannot prevent new images from being created."

"The consequences can be profound," he added.

The DEFIANCE Act now needs to pass a vote in the House before it can officially become law. It had already passed a vote in the Senate when it was previously proposed in 2024, but didn't pass the lower chamber. Now, with the outrage over Grok, it may stand a better chance.

More on AI: Opposition to Elon Musk's AI Stripping Clothing Off Children Is Nearly Universal, Polling Shows


🔗 Source: futurism.com


🤖 MAROKO133 Notes

This article is an automatic summary drawn from several trusted sources. We pick trending topics so you always stay up to date without falling behind.

✅ Next update in 30 minutes: a random theme awaits!

Author: timuna