MAROKO133 Hot ai: Railway secures $100 million to challenge AWS with AI-native cloud infra

📌 MAROKO133 Hot ai: Railway secures $100 million to challenge AWS with AI-native cloud infra

Railway, a San Francisco-based cloud platform that has quietly amassed two million developers without spending a dollar on marketing, announced Thursday that it raised $100 million in a Series B funding round, as surging demand for artificial intelligence applications exposes the limitations of legacy cloud infrastructure.

TQ Ventures led the round, with participation from FPV Ventures, Redpoint, and Unusual Ventures. The investment values Railway as one of the most significant infrastructure startups to emerge during the AI boom, capitalizing on developer frustration with the complexity and cost of traditional platforms like Amazon Web Services and Google Cloud.

"As AI models get better at writing code, more and more people are asking the age-old question: where, and how, do I run my applications?" said Jake Cooper, Railway's 28-year-old founder and chief executive, in an exclusive interview with VentureBeat. "The last generation of cloud primitives were slow and outdated, and now with AI moving everything faster, teams simply can't keep up."

The funding is a dramatic acceleration for a company that has charted an unconventional path through the cloud computing industry. Railway raised just $24 million in total before this round, including a $20 million Series A from Redpoint in 2022. The company now processes more than 10 million deployments monthly and handles over one trillion requests through its edge network, metrics that rival far larger and better-funded competitors.

Why three-minute deploy times have become unacceptable in the age of AI coding assistants

Railway's pitch rests on a simple observation: the tools developers use to deploy and manage software were designed for a slower era. A standard build-and-deploy cycle using Terraform, the industry-standard infrastructure tool, takes two to three minutes. That delay, once tolerable, has become a critical bottleneck as AI coding assistants like Claude, ChatGPT, and Cursor can generate working code in seconds.

"When godly intelligence is on tap and can solve any problem in three seconds, those amalgamations of systems become bottlenecks," Cooper told VentureBeat. "What was really cool for humans to deploy in 10 seconds or less is now table stakes for agents."

The company claims its platform delivers deployments in under one second, fast enough to keep pace with AI-generated code. Customers report a tenfold increase in developer velocity and up to 65 percent cost savings compared to traditional cloud providers.

These numbers come directly from enterprise clients, not internal benchmarks. Daniel Lobaton, chief technology officer at G2X, a platform serving 100,000 federal contractors, measured deployment speed improvements of seven times faster and an 87 percent cost reduction after migrating to Railway. His infrastructure bill dropped from $15,000 per month to approximately $1,000.

"The work that used to take me a week on our previous infrastructure, I can do in Railway in like a day," Lobaton said. "If I want to spin up a new service and test different architectures, it would take so long on our old setup. In Railway I can launch six services in two minutes."

Inside the controversial decision to abandon Google Cloud and build data centers from scratch

What distinguishes Railway from competitors like Render and Fly.io is the depth of its vertical integration. In 2024, the company made the unusual decision to abandon Google Cloud entirely and build its own data centers, a move that echoes the famous Alan Kay maxim: "People who are really serious about software should make their own hardware."

"We wanted to design hardware in a way where we could build a differentiated experience," Cooper said. "Having full control over the network, compute, and storage layers lets us do really fast build and deploy loops, the kind that allows us to move at 'agentic speed' while staying 100 percent the smoothest ride in town."

The approach paid dividends during recent widespread outages that affected major cloud providers: Railway remained online throughout.

This soup-to-nuts control enables pricing that undercuts the hyperscalers by roughly 50 percent and newer cloud startups by three to four times. Railway charges by the second for actual compute usage: $0.00000386 per gigabyte-second of memory, $0.00000772 per vCPU-second, and $0.00000006 per gigabyte-second of storage. There are no charges for idle virtual machines, a stark contrast to the traditional cloud model where customers pay for provisioned capacity whether they use it or not.
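As a back-of-the-envelope illustration of the per-second model, here is a quick sketch that prices a hypothetical always-on service (1 vCPU, 2 GB of memory, 10 GB of storage) for a 30-day month at the rates quoted above. The service shape is an assumption for illustration, not a Railway example:

```python
# Per-second rates quoted in the article (USD)
MEMORY_PER_GB_SEC = 0.00000386
VCPU_PER_SEC = 0.00000772
STORAGE_PER_GB_SEC = 0.00000006

SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month

def monthly_cost(vcpus: float, memory_gb: float, storage_gb: float) -> float:
    """Cost of a service that runs continuously for a 30-day month."""
    return SECONDS_PER_MONTH * (
        vcpus * VCPU_PER_SEC
        + memory_gb * MEMORY_PER_GB_SEC
        + storage_gb * STORAGE_PER_GB_SEC
    )

# Hypothetical service: 1 vCPU, 2 GB memory, 10 GB storage, always on
cost = monthly_cost(vcpus=1, memory_gb=2, storage_gb=10)
print(f"${cost:.2f}/month")  # roughly $41.58 for this shape
```

The contrast with pre-provisioned VMs shows up when usage is bursty: under per-second billing, a service that scales to zero overnight accrues no compute charges during that time.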

"The conventional wisdom is that the big guys have economies of scale to offer better pricing," Cooper noted. "But when they're charging for VMs that usually sit idle in the cloud, and we've purpose-built everything to fit much more density on these machines, you have a big opportunity."

How 30 employees built a platform generating tens of millions in annual revenue

Railway has achieved its scale with a team of just 30 employees generating tens of millions in annual revenue, a ratio of revenue per employee that would be exceptional even for established software companies. The company grew revenue 3.5 times last year and continues to expand at 15 percent month-over-month.
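For scale, a sustained 15 percent month-over-month rate compounds to a bit more than 5x over a year. A quick check of the arithmetic (mine, not a company projection):

```python
# Compound a monthly growth rate into an annualized multiple
def annualized(monthly_rate: float, months: int = 12) -> float:
    return (1 + monthly_rate) ** months

print(round(annualized(0.15), 2))  # 1.15^12 is roughly 5.35
```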

Cooper emphasized that the fundraise was strategic rather than necessary. "We're default alive; there's no reason for us to raise money," he said. "We raised because we see a massive opportunity to accelerate, not because we needed to survive."

The company hired its first salesperson only last year and employs just two solutions engineers. Nearly all of Railway's two million users discovered the platform through word of mouth: developers telling other developers about a tool that actually works.

"We basically did the standard engineering thing: if you build it, they will come," Cooper recalled. "And to some degree, they came."

From side projects to Fortune 500 deployments: Railway's unlikely corporate expansion

Despite its grassroots developer community, Railway has made significant inroads into large organizations. The company claims that 31 percent of Fortune 500 companies now use its platform, though deployments range from company-wide infrastructure to individual team projects.

Notable customers include Bilt, the loyalty program company; Intuit's GoCo subsidiary; TripAdvisor's Cruise Critic; and MGM Resorts. Kernel, a Y Combinator-backed startup providing AI infrastructure to over 1,000 companies, runs its entire customer-facing system on Railway for $444 per month.

"At my previous company Clever, which sold …

Content truncated automatically.

🔗 Source: venturebeat.com


📌 MAROKO133 Exclusive ai: There's a Blinking Warning Sign for the Data Centers in Space Industry

It’s plain to see that Elon Musk’s ambition of putting data centers in space is a daring and risky undertaking. 

Further underscoring the challenges, experts tell Reuters that a previous failed attempt at taking data centers off solid ground has alarming parallels that could spell doom for Musk's plan for SpaceX.

Beginning in 2015, Microsoft ran "Project Natick," a cutting-edge underwater data center experiment, culminating in a deployment off the coast of Scotland. Roughly the size and shape of a semi truck's fuel tanker, the capsule was designed to use seawater to cool itself and to be largely self-sufficient once anchored to the seabed. The idea was full of promise: cooling is one of a data center's most costly aspects, and here it came for free. The deployment was also supported by wind power, adding an element of sustainability.

Flash forward to the present, however, and the data centers popping up everywhere amid the AI boom are most decidedly not being built in the ocean. Sources told Reuters that the project was figuratively sunk by a lack of client demand and unviable economics, for reasons that could also plague Musk's orbital facilities.

“These problems are likely to be more severe in space than under the sea,” Roy Chua, founder of industry research firm AvidThink, told Reuters.

Critically, both projects rely on modular units that are expensive to deploy and, once operational, can’t be upgraded or even repaired. Potential customers favored sticking with terrestrial facilities because they could be brought online quicker and upgraded with the latest hardware, a more crucial capability than ever because AI chips are constantly improving.

Once, or if, Musk deploys his orbital data centers, they’ll be “locked-for-life.” A new generation of AI hardware, perhaps one optimized for another type of AI architecture that becomes the cutting edge (many in the industry believe large language models are an eventual dead end), could obviate Musk’s expensive satellites.

Experts have also been incredulous at Musk’s proposed size for each of these data center satellites, which according to company graphics will dwarf the International Space Station.

All that’s before we even begin to look at the exorbitant costs of getting these data centers into space at scale. Reminder: Musk wants to deploy one million of the satellites. Ars Technica editor Eric Berger estimated the barebones cost of doing that to be at least $1 trillion. Analysts at equity research group Moffett Nathanson, in a note cited by Reuters, said the cost would be trillions-plural.

But what are mere trillions in this day and age? If SpaceX somehow gets the money for all this (perhaps with a little help from its forthcoming IPO), it would quickly be overwhelmed by the sheer number of space launches needed to pull this off. According to Moffett Nathanson’s estimates, SpaceX would have to launch its Starship rocket 3,000 times per year, or eight times per day. (Last year, the company launched 167 rockets total.)
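The cadence math is easy to verify; the 3,000-launch figure is Moffett Nathanson's estimate as reported, and everything else is simple division:

```python
LAUNCHES_PER_YEAR = 3_000   # Moffett Nathanson estimate for the build-out
LAUNCHES_LAST_YEAR = 167    # SpaceX's total launches last year, per the article

per_day = LAUNCHES_PER_YEAR / 365
scale_up = LAUNCHES_PER_YEAR / LAUNCHES_LAST_YEAR

print(f"{per_day:.1f} launches/day")       # about 8.2 per day
print(f"{scale_up:.0f}x last year's total")  # about an 18x increase in cadence
```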

Starship is designed to be reusable and to carry far more massive payloads to orbit than existing rockets, making it the most cost-efficient vehicle for the job, in theory. It’s years behind schedule and has exploded in many of its 11 flight tests, none of which have reached Earth’s orbit yet.

If space data centers have a future, it’ll be as a niche complement to conventional ones, perhaps for military applications or providing computing power to space stations. That’s nice, but a far cry from Musk’s promise that space data centers will be the future.

“I strongly believe that there’ll be no way in the foreseeable future that space-based data centers can replace ground data centers,” Rousseau, a research director at consulting firm Analysys Mason, told Reuters.

More on data centers: OpenAI’s Obsession With Data Centers Is Running Into Trouble

The post There’s a Blinking Warning Sign for the Data Centers in Space Industry appeared first on Futurism.

🔗 Source: futurism.com


🤖 MAROKO133 Note

This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random theme awaits!

Author: timuna