MAROKO133 Exclusive ai: Claude Code costs up to $200 a month. Goose does the same thing for free


The artificial intelligence coding revolution comes with a catch: it's expensive.

Claude Code, Anthropic's terminal-based AI agent that can write, debug, and deploy code autonomously, has captured the imagination of software developers worldwide. But its pricing β€” ranging from $20 to $200 per month depending on usage β€” has sparked a growing rebellion among the very programmers it aims to serve.

Now, a free alternative is gaining traction. Goose, an open-source AI agent developed by Block (the financial technology company formerly known as Square), offers nearly identical functionality to Claude Code but runs entirely on a user's local machine. No subscription fees. No cloud dependency. No rate limits that reset every five hours.

"Your data stays with you, period," said Parth Sareen, a software engineer who demonstrated the tool during a recent livestream. The comment captures the core appeal: Goose gives developers complete control over their AI-powered workflow, including the ability to work offline β€” even on an airplane.

The project has exploded in popularity. Goose now boasts more than 26,100 stars on GitHub, the code-sharing platform, with 362 contributors and 102 releases since its launch. The latest version, 1.20.1, shipped on January 19, 2026, reflecting a development pace that rivals commercial products.

For developers frustrated by Claude Code's pricing structure and usage caps, Goose represents something increasingly rare in the AI industry: a genuinely free, no-strings-attached option for serious work.

Anthropic's new rate limits spark a developer revolt

To understand why Goose matters, you need to understand the Claude Code pricing controversy.

Anthropic, the San Francisco artificial intelligence company founded by former OpenAI executives, offers Claude Code as part of its subscription tiers. The free plan provides no access whatsoever. The Pro plan, at $17 per month with annual billing (or $20 monthly), limits users to just 10 to 40 prompts every five hours β€” a constraint that serious developers exhaust within minutes of intensive work.

The Max plans, at $100 and $200 per month, offer more headroom: 50 to 200 prompts and 200 to 800 prompts respectively, plus access to Anthropic's most powerful model, Claude 4.5 Opus. But even these premium tiers come with restrictions that have inflamed the developer community.

In late July, Anthropic announced new weekly rate limits. Under the system, Pro users receive 40 to 80 hours of Sonnet 4 usage per week. Max users at the $200 tier get 240 to 480 hours of Sonnet 4, plus 24 to 40 hours of Opus 4. Nearly five months later, the frustration has not subsided.

The problem? Those "hours" are not actual hours. They represent token-based limits that vary wildly depending on codebase size, conversation length, and the complexity of the code being processed. Independent analysis suggests the actual per-session limits translate to roughly 44,000 tokens for Pro users and 220,000 tokens for the $200 Max plan.

"It's confusing and vague," one developer wrote in a widely shared analysis. "When they say '24-40 hours of Opus 4,' that doesn't really tell you anything useful about what you're actually getting."

The backlash on Reddit and developer forums has been fierce. Some users report hitting their daily limits within 30 minutes of intensive coding. Others have canceled their subscriptions entirely, calling the new restrictions "a joke" and "unusable for real work."

Anthropic has defended the changes, stating that the limits affect fewer than five percent of users and target people running Claude Code "continuously in the background, 24/7." But the company has not clarified whether that figure refers to five percent of Max subscribers or five percent of all users β€” a distinction that matters enormously.

How Block built a free AI coding agent that works offline

Goose takes a radically different approach to the same problem.

Built by Block, the payments company led by Jack Dorsey, Goose is what engineers call an "on-machine AI agent." Unlike Claude Code, which sends your queries to Anthropic's servers for processing, Goose can run entirely on your local computer using open-source language models that you download and control yourself.

The project's documentation describes it as going "beyond code suggestions" to "install, execute, edit, and test with any LLM." That last phrase β€” "any LLM" β€” is the key differentiator. Goose is model-agnostic by design.

You can connect Goose to Anthropic's Claude models if you have API access. You can use OpenAI's GPT-5 or Google's Gemini. You can route it through services like Groq or OpenRouter. Or β€” and this is where things get interesting β€” you can run it entirely locally using tools like Ollama, which let you download and execute open-source models on your own hardware.
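The local route the article describes can be sketched in a few commands. The Ollama commands below are from its public CLI; `goose configure` is Goose's documented interactive setup entry point, and the model choice is illustrative only:

```shell
# Pull an open-source model onto your own hardware (model name is an example).
ollama pull llama3

# Sanity-check that the model answers locally -- after the download,
# no internet connection is needed.
ollama run llama3 "Write a one-line commit message for a README fix."

# Walk through Goose's provider setup and select Ollama as the backend.
# The exact provider and model keys vary by Goose version.
goose configure
```

Once configured this way, every prompt is served from the local model, which is what makes the no-fees, no-rate-limits, offline workflow possible.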

The practical implications are significant. With a local setup, there are no subscription fees, no usage caps, no rate limits, and no concerns about your code being sent to external servers. Your conversations with the AI never leave your machine.

"I use Ollama all the time on planes β€” it's a lot of fun!" Sareen noted during a demonstration, highlighting how local models free developers from the constraints of internet connectivity.

What Goose can do that traditional code assistants can't

Goose operates as a command-line tool or desktop application that can autonomously perform complex development tasks. It can build entire projects from scratch, write and execute code, debug failures, orchestrate workflows across multiple files, and interact with external APIs β€” all without constant human oversight.
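In day-to-day use, that autonomy looks like a conversational session in a project directory. A minimal sketch, assuming Goose is installed and configured (`goose session` is its documented interactive mode; the task text is illustrative):

```shell
# Start an interactive Goose session from inside a project.
cd my-project
goose session

# Inside the session, tasks are given in plain language, for example:
#   "add unit tests for utils.py, run them, and fix any failures"
# Goose plans the steps, edits files, executes commands, and reports back.
```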

The architecture relies on what the AI industry calls "tool calling," or "function calling."
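The idea behind tool calling: instead of only generating text, the model emits a structured request naming a function and its arguments; the agent executes that function locally and feeds the result back. A minimal, library-free sketch of the dispatch step (the tool registry and JSON shape here are illustrative, not Goose's actual protocol):

```python
import json

# Local "tools" the agent is allowed to run. In a real agent these would
# shell out, edit files, or call APIs; here they are stand-in functions.
def run_tests(path: str) -> str:
    return f"ran tests in {path}: 12 passed"

def read_file(path: str) -> str:
    return f"<contents of {path}>"

TOOLS = {"run_tests": run_tests, "read_file": read_file}

def execute_tool_call(raw: str) -> str:
    """Parse a model-emitted tool call and dispatch it to a local function."""
    call = json.loads(raw)  # e.g. {"tool": "run_tests", "args": {"path": "tests/"}}
    fn = TOOLS.get(call["tool"])
    if fn is None:
        return f"error: unknown tool {call['tool']!r}"
    return fn(**call["args"])

# A model response requesting a tool, as a JSON string:
model_output = '{"tool": "run_tests", "args": {"path": "tests/"}}'
print(execute_tool_call(model_output))  # ran tests in tests/: 12 passed
```

The agent loop repeats this exchange, appending each tool result to the conversation, until the model decides the task is done.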

Content shortened automatically.

πŸ”— Source: venturebeat.com


πŸ“Œ MAROKO133 Breaking ai: New York Times Issues Stern Warning to Its Freelance Writers About AI Use

After a string of AI controversies, The New York Times emailed a “periodic reminder” to freelancers on Tuesday reminding them of the paper’s AI policy.

“To be clear on AI: All writing and visuals that freelancers submit to The Times must be the product of human creativity and craft, and all submissions must consist solely of their original reporting, writing and other work,” reads the email, reviewed by Futurism. “Freelance contributors must not submit any material for publication that contains content generated, modified or enhanced by [generative AI] tools, or that has been input into these tools.”

The email pointed its contributors to a detailed document on its “policy on freelancers’ use of generative AI tools,” which forbids the inclusion of AI-generated or AI-modified text and images in any reporting contributed to the paper. While AI tools are acceptable for “high-level” brainstorming, the notice warns, freelancers “may not use [generative AI] tools to help you write any part of a story.”

“Using [generative AI] tools to create, draft, guide, clean up, edit, improve, or rephrase your writing is strictly prohibited,” it continues. As for what specific tools the company’s actually speaking to, the document forbids “chatbots like Gemini, Claude, ChatGPT and Perplexity; AI-powered search products like Google AI Overviews; and image generators like Adobe Firefly, DALL-E and MidJourney.”

The reminder comes as the paper of record continues to grapple with AI-generated content, including preventable AI-spun errors, making its way into its pages. Back in March, the NYT faced scrutiny after a contributor to its competitive “Modern Love” column was publicly accused of using AI to generate an emotional personal essay; that writer later told Futurism that she’d used chatbots to conceptualize and edit the piece. Then, in April, the paper cut ties with a freelancer who admitted to using AI to cook up a book review that was found to be riddled with plagiarism after its publication.

And while these controversies indeed stemmed from the work of freelancers, the institution found itself in hot water yet again last week, when a substantial correction revealed that an article bylined by the NYT’s Canada Bureau chief contained an AI-fabricated quote weeks after publication. (As Futurism reported in March, a writer at CondΓ© Nast’s Ars Technica was fired for a similar error.)

“An article on April 15 about the success that Mark Carney, the Liberal prime minister of Canada, has had in building cross-party alliances was updated after The Times learned that a remark attributed to Pierre Poilievre, the Conservative leader, was in fact an AI-generated summary of his views about Canadian politics that AI rendered as a quotation,” reads the update. “The reporter should have checked the accuracy of what the AI tool returned.”

Futurism reached out to the NYT to ask whether this kind of reminder is normal, and whether the notice has anything to do with its recent flurry of AI scandals. In response, the paper shared a statement saying that “we regularly provide updated guidance to freelancers and in this case we wanted to be clear about our policies regarding the use of AI.”

“In-house journalists have separate guidelines for using AI and approved GenAI tools,” the paper added.

Updated with a statement from The New York Times.

More on the New York Times: We Talked to a Writer Accused of Publishing An AI-Generated Essay in The New York Times

The post New York Times Issues Stern Warning to Its Freelance Writers About AI Use appeared first on Futurism.

πŸ”— Source: futurism.com


πŸ€– MAROKO133 Note

This article is an automatic summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

βœ… Next update in 30 minutes β€” a random theme awaits!

Author: timuna