MAROKO133 Breaking ai: Quantum computers still struggle with chemistry’s hardest molecular simulations

One of the biggest promises of quantum computing is the ability to simulate molecules with unprecedented accuracy. If quantum computers could do this efficiently, it could speed up the discovery of new medicines, batteries, and fertilizers.

However, a new theoretical study suggests that the road to this promise is much longer than many researchers had hoped. 

Through their analysis, the study authors show that even the most popular quantum algorithms face serious obstacles when trying to compute the lowest energy state of molecules—a fundamental quantity needed to understand chemical reactions.

“Quantum chemistry is envisioned as an early and disruptive application for quantum computers. Yet, closer scrutiny of the proposed algorithms shows that there are considerable difficulties along the way,” the study authors note.

Fragile algorithms meet noisy quantum hardware

The researchers examined what it would take for quantum computers to achieve a real advantage in molecular simulations.

Specifically, they focused on the problem of finding a molecule’s ground state energy, which is basically the lowest possible energy configuration of its electrons. Knowing this value helps scientists predict chemical stability and reaction pathways.

To test the feasibility of quantum methods, the team analyzed two major algorithms used in quantum chemistry calculations: the variational quantum eigensolver (VQE) and quantum phase estimation (QPE). Each algorithm targets a different generation of quantum hardware and comes with its own strengths and weaknesses.

The first method, VQE, is designed for near-term quantum computers that are still noisy and prone to errors. 

VQE works through a hybrid approach where a quantum computer prepares a candidate quantum state for the molecule, while a classical computer adjusts parameters step by step to minimize the calculated energy. The idea is to gradually approach the molecule’s true ground state.
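The hybrid loop can be sketched with a toy example. Here a two-level Hamiltonian and a one-parameter trial state stand in for a real molecule and quantum circuit (both are illustrative assumptions, not the study's actual systems); a classical gradient step tunes the parameter to lower the energy expectation that, on real hardware, the quantum processor would estimate:

```python
import numpy as np

# Toy two-level Hamiltonian (illustrative numbers, not a real molecule)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    # One-parameter trial state, standing in for the parameterized circuit
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    # <psi|H|psi>: the expectation value a quantum processor would
    # estimate (noisily) on hardware; here it is computed exactly
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical half of the hybrid loop: nudge theta downhill step by step
theta, lr = 0.3, 0.2
for _ in range(200):
    grad = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= lr * grad

exact = np.linalg.eigvalsh(H)[0]
print(f"VQE estimate: {energy(theta):.6f}  exact: {exact:.6f}")
```

On noisy hardware each energy evaluation carries statistical and decoherence error, which is exactly where the study finds the approach breaks down.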

However, “We find that decoherence is highly detrimental to the accuracy of VQE and performing relevant chemistry calculations would require performances that are expected for fault-tolerant quantum computers, not mere noisy hardware, even with advanced error mitigation techniques,” the researchers said.

The problem becomes even more severe as molecules grow larger. For instance, when the researchers examined the chromium dimer molecule (Cr₂), they found that just one iteration of the VQE calculation could take about 25 days. When the many required optimization steps are included, the total runtime could stretch to around 24 years.

Another challenge appears when dealing with strongly correlated molecules, where electrons interact in complex ways. Such systems often include transition metals and are considered important targets for quantum computing because they are difficult for classical simulations. 

However, the study shows that VQE frequently struggles to handle them accurately.

The problem with quantum phase estimation

The second algorithm analyzed, QPE, is designed for future fault-tolerant quantum computers that can correct their own errors. In theory, QPE can determine energy levels with extremely high precision. However, it has a different challenge.

QPE requires an initial quantum state that closely resembles the molecule’s true ground state. If the starting guess is poor, the probability of obtaining the correct answer becomes very small.

The researchers found that this problem becomes dramatically worse as molecules grow larger due to a phenomenon known as the orthogonality catastrophe. As the number of particles increases, the overlap between the prepared input state and the true ground state shrinks exponentially. 

Since QPE’s success depends on this overlap, the probability of correctly measuring the ground state energy also drops exponentially with system size. 
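A back-of-the-envelope sketch shows how fast this scaling bites. The per-particle fidelity of 0.99 below is an assumed number purely for illustration, treating the trial state as a product of near-perfect single-particle pieces:

```python
# Illustrative sketch of the orthogonality catastrophe: if the trial state
# matches the true ground state with fidelity 0.99 per particle, the total
# overlap of the many-particle state shrinks exponentially with size.
per_particle_overlap = 0.99

for n_particles in [10, 100, 1000]:
    overlap = per_particle_overlap ** n_particles
    # QPE returns the ground state energy with probability |overlap|^2
    success_probability = overlap ** 2
    print(f"N={n_particles:5d}  overlap={overlap:.3e}  "
          f"P(success)={success_probability:.3e}")
```

Even a seemingly excellent 99 percent per-particle match leaves a success probability around one in a billion at a thousand particles.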

To resolve this issue, the team developed a criterion that estimates the overlap between the prepared input state and the true ground state using the energy and energy variance of the initial state. 

By applying this framework to input states produced by advanced classical chemistry methods, they showed that the overlap consistently decreases exponentially as molecular size increases.
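The flavor of such a criterion can be illustrated with a textbook variance bound (an assumption for illustration, not necessarily the paper's exact formula): when the trial state's mean energy lies below the first excited level, the weight missing from the ground state obeys 1 − p₀ ≤ σ²/(E₁ − ⟨H⟩)². The random matrix and perturbed trial state below are likewise stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix standing in for a molecular Hamiltonian (assumption)
n = 8
A = rng.normal(size=(n, n))
H = (A + A.T) / 2
evals, evecs = np.linalg.eigh(H)
e0, e1 = evals[0], evals[1]
ground = evecs[:, 0]

# Trial state: the true ground state plus a small random error
psi = ground + 0.05 * rng.normal(size=n)
psi /= np.linalg.norm(psi)

mean = psi @ H @ psi                 # energy of the trial state
var = psi @ H @ H @ psi - mean ** 2  # energy variance of the trial state

# Textbook variance bound (valid while mean < e1):
# missing ground-state weight satisfies 1 - p0 <= var / (e1 - mean)^2
p0 = (ground @ psi) ** 2
bound = var / (e1 - mean) ** 2
print(f"true 1 - p0 = {1 - p0:.5f}   variance bound = {bound:.5f}")
```

The appeal of this kind of estimate is that energy and variance are measurable without already knowing the true ground state.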

This means that even if a fault-tolerant quantum computer becomes available, QPE could still struggle with large molecules because the chance of successfully extracting the correct energy becomes vanishingly small.

Classical still beats quantum

These results highlight that classical computational chemistry methods remain surprisingly competitive and, in some cases, could outperform quantum approaches even on perfect quantum hardware.

“These observations may also suggest that ground state estimation in chemistry may not be the most appropriate target for quantum computers. Besides the issues of quantum processors we outlined in this paper, this statement is also due to the comparatively good quality of classical state preparation methods,” the study authors explained.

However, the study does not rule out progress. Advances in fault-tolerant quantum hardware, improved state-preparation methods, and more efficient algorithms could eventually overcome some of the barriers identified. 

For now, however, the research serves as a reality check. Achieving a true quantum advantage in chemistry will likely require breakthroughs in both hardware and algorithms before quantum computers can surpass classical machines on these fundamental molecular calculations.

The study is published in the journal Physical Review B.

🔗 Sumber: interestingengineering.com


📌 MAROKO133 Breaking ai: Railway secures $100 million to challenge AWS with AI-native infrastructure

Railway, a San Francisco-based cloud platform that has quietly amassed two million developers without spending a dollar on marketing, announced Thursday that it raised $100 million in a Series B funding round, as surging demand for artificial intelligence applications exposes the limitations of legacy cloud infrastructure.

TQ Ventures led the round, with participation from FPV Ventures, Redpoint, and Unusual Ventures. The investment values Railway as one of the most significant infrastructure startups to emerge during the AI boom, capitalizing on developer frustration with the complexity and cost of traditional platforms like Amazon Web Services and Google Cloud.

"As AI models get better at writing code, more and more people are asking the age-old question: where, and how, do I run my applications?" said Jake Cooper, Railway's 28-year-old founder and chief executive, in an exclusive interview with VentureBeat. "The last generation of cloud primitives were slow and outdated, and now with AI moving everything faster, teams simply can't keep up."

The funding is a dramatic acceleration for a company that has charted an unconventional path through the cloud computing industry. Railway raised just $24 million in total before this round, including a $20 million Series A from Redpoint in 2022. The company now processes more than 10 million deployments monthly and handles over one trillion requests through its edge network — metrics that rival far larger and better-funded competitors.

Why three-minute deploy times have become unacceptable in the age of AI coding assistants

Railway's pitch rests on a simple observation: the tools developers use to deploy and manage software were designed for a slower era. A standard build-and-deploy cycle using Terraform, the industry-standard infrastructure tool, takes two to three minutes. That delay, once tolerable, has become a critical bottleneck as AI coding assistants like Claude, ChatGPT, and Cursor can generate working code in seconds.

"When godly intelligence is on tap and can solve any problem in three seconds, those amalgamations of systems become bottlenecks," Cooper told VentureBeat. "What was really cool for humans to deploy in 10 seconds or less is now table stakes for agents."

The company claims its platform delivers deployments in under one second — fast enough to keep pace with AI-generated code. Customers report a tenfold increase in developer velocity and up to 65 percent cost savings compared to traditional cloud providers.

These numbers come directly from enterprise clients, not internal benchmarks. Daniel Lobaton, chief technology officer at G2X, a platform serving 100,000 federal contractors, measured a sevenfold improvement in deployment speed and an 87 percent cost reduction after migrating to Railway. His infrastructure bill dropped from $15,000 per month to approximately $1,000.

"The work that used to take me a week on our previous infrastructure, I can do in Railway in like a day," Lobaton said. "If I want to spin up a new service and test different architectures, it would take so long on our old setup. In Railway I can launch six services in two minutes."

Inside the controversial decision to abandon Google Cloud and build data centers from scratch

What distinguishes Railway from competitors like Render and Fly.io is the depth of its vertical integration. In 2024, the company made the unusual decision to abandon Google Cloud entirely and build its own data centers, a move that echoes the famous Alan Kay maxim: "People who are really serious about software should make their own hardware."

"We wanted to design hardware in a way where we could build a differentiated experience," Cooper said. "Having full control over the network, compute, and storage layers lets us do really fast build and deploy loops, the kind that allows us to move at 'agentic speed' while staying 100 percent the smoothest ride in town."

The approach paid dividends during recent widespread outages that affected major cloud providers — Railway remained online throughout.

This soup-to-nuts control enables pricing that undercuts the hyperscalers by roughly 50 percent and newer cloud startups by three to four times. Railway charges by the second for actual compute usage: $0.00000386 per gigabyte-second of memory, $0.00000772 per vCPU-second, and $0.00000006 per gigabyte-second of storage. There are no charges for idle virtual machines — a stark contrast to the traditional cloud model where customers pay for provisioned capacity whether they use it or not.
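Plugging the quoted per-second rates into a hypothetical always-on service gives a rough sense of the billing model. The workload below (1 GB of memory, 1 vCPU, and 1 GB of storage running continuously for a 30-day month) is an assumed example, not a Railway benchmark:

```python
# Hypothetical monthly bill at the per-second rates quoted above
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds

rates = {
    "memory (1 GB)":  0.00000386,  # $ per GB-second
    "vcpu (1 core)":  0.00000772,  # $ per vCPU-second
    "storage (1 GB)": 0.00000006,  # $ per GB-second
}

total = 0.0
for item, rate in rates.items():
    cost = rate * SECONDS_PER_MONTH
    total += cost
    print(f"{item:15s} ${cost:8.2f}")
print(f"{'total':15s} ${total:8.2f}")
```

Under these assumptions the service costs roughly $30 a month, and because billing is per second of actual usage, a service that idles half the time would cost roughly half that.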

"The conventional wisdom is that the big guys have economies of scale to offer better pricing," Cooper noted. "But when they're charging for VMs that usually sit idle in the cloud, and we've purpose-built everything to fit much more density on these machines, you have a big opportunity."

How 30 employees built a platform generating tens of millions in annual revenue

Railway has achieved its scale with a team of just 30 employees generating tens of millions in annual revenue — a ratio of revenue per employee that would be exceptional even for established software companies. The company grew revenue 3.5 times last year and continues to expand at 15 percent month-over-month.

Cooper emphasized that the fundraise was strategic rather than necessary. "We're default alive; there's no reason for us to raise money," he said. "We raised because we see a massive opportunity to accelerate, not because we needed to survive."

The company hired its first salesperson only last year and employs just two solutions engineers. Nearly all of Railway's two million users discovered the platform through word of mouth — developers telling other developers about a tool that actually works.

"We basically did the standard engineering thing: if you build it, they will come," Cooper recalled. "And to some degree, they came."

From side projects to Fortune 500 deployments: Railway's unlikely corporate expansion

Despite its grassroots developer community, Railway has made significant inroads into large organizations. The company claims that 31 percent of Fortune 500 companies now use its platform, though deployments range from company-wide infrastructure to individual team projects.

Notable customers include Bilt, the loyalty program company; Intuit's GoCo subsidiary; TripAdvisor's Cruise Critic; and MGM Resorts. Kernel, a Y Combinator-backed startup providing AI infrastructure to over 1,000 companies, runs its entire customer-facing system on Railway for $444 per month.

"At my previous company Clever, which sold …

Content automatically truncated.

🔗 Sumber: venturebeat.com



Author: timuna