📌 MAROKO133 Exclusive AI: Palona goes vertical, launches Vision and Workflow: 4 key lessons
Building an enterprise AI company on a "foundation of shifting sand" is the central challenge for founders today, according to the leadership at Palona AI.
The Palo Alto-based startup, led by former Google and Meta engineering veterans, is making a decisive vertical push into the restaurant and hospitality space with today's launch of Palona Vision and Palona Workflow.
The new offerings transform the company’s multimodal agent suite into a real-time operating system for restaurant operations — spanning cameras, calls, conversations, and coordinated task execution.
The news marks a strategic pivot from the company's debut in early 2025, when it first emerged with $10 million in seed funding to build emotionally intelligent sales agents for a broad range of direct-to-consumer brands.
Now, by narrowing its focus to a "multimodal native" approach for restaurants, Palona is providing a blueprint for AI builders on how to move beyond "thin wrappers" to build deep systems that solve high-stakes physical world problems.
“You’re building a company on top of a foundation that is sand—not quicksand, but shifting sand,” said co-founder and CTO Tim Howes, referring to the instability of today’s LLM ecosystem. “So we built an orchestration layer that lets us swap models on performance, fluency, and cost.”
VentureBeat spoke with Howes and co-founder and CEO Maria Zhang in person recently at — where else? — a restaurant in NYC about the technical challenges and hard lessons learned from their launch, growth, and pivot.
The New Offering: Vision and Workflow as a ‘Digital GM’
For the end user—the restaurant owner or operator—Palona’s latest release is designed to function as an automated "best operations manager" that never sleeps.
Palona Vision uses existing in-store security cameras, with no new hardware required, to analyze operational signals in real time. It monitors front-of-house metrics like queue lengths, table turns, and cleanliness, while simultaneously identifying back-of-house issues like prep slowdowns or station setup errors.
Palona Workflow complements this by automating multi-step operational processes. This includes managing catering orders, opening and closing checklists, and food prep fulfillment. By correlating video signals from Vision with Point-of-Sale (POS) data and staffing levels, Workflow ensures consistent execution across multiple locations.
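The value Workflow promises comes from joining per-location signals: vision events, POS order volume, and staffing. A minimal sketch of that correlation step, with entirely hypothetical field names and thresholds (not Palona's actual schema or API):

```python
from dataclasses import dataclass

@dataclass
class LocationSnapshot:
    """Point-in-time signals for one restaurant location (hypothetical schema)."""
    location_id: str
    queue_length: int    # from vision: people waiting in line
    open_orders: int     # from POS: orders placed but not yet fulfilled
    staff_on_shift: int  # from the scheduling system

def flag_bottlenecks(snapshots, orders_per_staff=4, max_queue=6):
    """Flag locations where demand outpaces staffing or the line is too long."""
    alerts = []
    for s in snapshots:
        if s.staff_on_shift == 0 or s.open_orders / s.staff_on_shift > orders_per_staff:
            alerts.append((s.location_id, "understaffed for current order volume"))
        elif s.queue_length > max_queue:
            alerts.append((s.location_id, "front-of-house queue exceeds threshold"))
    return alerts
```

In a multi-location deployment, running a check like this on every location's snapshot is what lets one system enforce consistent execution across the chain.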
“Palona Vision is like giving every location a digital GM,” said Shaz Khan, founder of Tono Pizzeria + Cheesesteaks, in a press release provided to VentureBeat. “It flags issues before they escalate and saves me hours every week.”
Going Vertical: Lessons in Domain Expertise
Palona’s journey began with a star-studded roster. CEO Zhang previously served as VP of Engineering at Google and CTO of Tinder, while Co-founder Howes is the co-inventor of LDAP and a former Netscape CTO.
Despite this pedigree, the team’s first year was a lesson in the necessity of focus.
Initially, Palona served fashion and electronics brands, creating "wizard" and "surfer dude" personalities to handle sales. However, the team quickly realized that the restaurant industry presented a unique, trillion-dollar opportunity that was "surprisingly recession-proof" but "gobsmacked" by operational inefficiency.
"Advice to startup founders: don't go multi-industry," Zhang warned.
By verticalizing, Palona moved from being a "thin" chat layer to building a "multi-sensory information pipeline" that processes vision, voice, and text in tandem.
That clarity of focus opened access to proprietary training data (like prep playbooks and call transcripts) while avoiding generic data scraping.
1. Building on ‘Shifting Sand’
To accommodate the reality of enterprise AI deployments in 2025 — with new, improved models coming out on a nearly weekly basis — Palona developed a patented orchestration layer.
Rather than being "bundled" with a single provider like OpenAI or Google, Palona’s architecture allows them to swap models on a dime based on performance and cost.
They use a mix of proprietary and open-source models, including Gemini for computer vision benchmarks and specific language models for Spanish or Chinese fluency.
For builders, the message is clear: Never let your product's core value be a single-vendor dependency.
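The orchestration pattern Howes describes can be reduced to a routing decision: among registered models, pick the cheapest one that clears a quality bar for the task at hand. The sketch below illustrates the idea only; the model names, benchmark scores, and prices are invented, not Palona's actual registry.

```python
# Illustrative model registry: benchmark score per task and cost per 1k tokens.
# All entries are hypothetical placeholders.
MODEL_REGISTRY = {
    "vision-model-a": {"tasks": {"vision": 0.92}, "cost_per_1k_tokens": 0.010},
    "lang-model-es":  {"tasks": {"spanish": 0.95, "chat": 0.80}, "cost_per_1k_tokens": 0.004},
    "general-model":  {"tasks": {"chat": 0.88, "spanish": 0.70}, "cost_per_1k_tokens": 0.002},
}

def route(task, min_quality=0.85):
    """Return the cheapest model whose benchmarked score for `task` clears the bar."""
    candidates = [
        (spec["cost_per_1k_tokens"], name)
        for name, spec in MODEL_REGISTRY.items()
        if spec["tasks"].get(task, 0.0) >= min_quality
    ]
    if not candidates:
        raise LookupError(f"no registered model meets the quality bar for task {task!r}")
    return min(candidates)[1]  # lowest cost among qualifying models
```

Because the registry is data rather than code, swapping in a newly released model is a one-line change, which is the point of keeping the orchestration layer between the product and any single vendor.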
2. From Words to ‘World Models’
The launch of Palona Vision represents a shift from understanding words to understanding the physical reality of a kitchen.
While many developers struggle to stitch separate APIs together, Palona’s new vision model transforms existing in-store cameras into operational assistants.
The system identifies "cause and effect" in real-time—recognizing if a pizza is undercooked by its "pale beige" color or alerting a manager if a display case is empty.
"In words, physics don't matter," Zhang explained. "But in reality, I drop the phone, it always goes down… we want to really figure out what's going on in this world of restaurants".
3. The ‘Muffin’ Solution: Custom Memory Architecture
One of the most significant technical hurdles Palona faced was memory management. In a restaurant context, memory is the difference between a frustrating interaction and a "magical" one where the agent remembers a diner’s "usual" order.
The team initially used an open-source memory tool, but found it produced errors 30% of the time. "My advice to developers: always turn off memory [on consumer AI products], because that will guarantee to mess everything up," Zhang cautioned.
To solve this, Palona built Muffin, a proprietary memory management system named as a nod to web "cookies". Unlike standard vector-based approaches that struggle with structured data, Muffin is architected to handle four distinct layers:
- Structured Data: Stable facts like delivery addresses or allergy information.
- Slow-changing Dimensions: Loyalty preferences and favorite items.
- Transient and Seasonal Memories: Adapting to shifts like preferring cold drinks in July versus hot cocoa in winter.
- Regional Context: Defaults like time zones or language preferences.
The lesson for builders: If the best available tool isn't good enough for your specific vertical, you must be willing to build your own.
4. Reliability through ‘GRACE’
In a kitchen, an AI error isn't just a typo; it’s a wasted order or a safety risk. A recent incident at Stefanina’s Pizzeria in Missouri, where an AI hallucinated fake deals during a dinner rush, highlights how quickly brand trust can evaporate when safeguards are absent.
To prevent such chaos, Palona’s engineers follow its internal GRACE framework:
- Guardrails: Hard limits on agent behavior to prevent unapproved promotions.
- Red Teaming: Proactive attempts to "break" the AI and identify potential hallucination triggers.
- App Sec: Lock down APIs a…
[Content truncated automatically.]
🔗 Source: venturebeat.com
📌 MAROKO133 Hot AI: Doctors Warn That AI Companions Are Dangerous (Latest, 2025)
Are AI companies incentivized to put the public’s health and well-being first? According to a pair of physicians, the current answer is a resounding “no.”
In a new paper published in the New England Journal of Medicine, physicians from Harvard Medical School and Baylor College of Medicine’s Center for Medical Ethics and Health Policy argue that clashing incentives in the AI marketplace around “relational AI” — defined in the paper as chatbots designed to be able to “simulate emotional support, companionship, or intimacy” — have created a dangerous environment in which the motivation to dominate the AI market may relegate consumers’ mental health and safety to collateral damage.
“Although relational AI has potential therapeutic benefits, recent studies and emerging cases suggest potential risks of emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm,” reads the paper. And at the same time, the authors continue, “technology companies face mounting pressures to retain user engagement, which often involves resisting regulation, creating tension between public health and market incentives.”
“Amidst these dilemmas,” the paper asks, “can public health rely on technology companies to effectively regulate unhealthy AI use?”
Dr. Nicholas Peoples, a clinical fellow in emergency medicine at Harvard's Massachusetts General Hospital and one of the paper's authors, said he felt moved to address the issue back in August after witnessing OpenAI's now-infamous roll-out of GPT-5.
“The number of people that have some sort of emotional relationship with AI,” Peoples recalls realizing as he watched the rollout unfold, “is much bigger than I think I had previously estimated in the past.”
GPT-5, then the latest iteration of the large language model (LLM) that powers OpenAI's ChatGPT, was markedly colder in tone and personality than its predecessor, GPT-4o, a strikingly flattering, sycophantic version of the widely used chatbot that came to be at the center of many cases of AI-powered delusion, mania, and psychosis. When OpenAI announced that it would sunset all previous models in favor of the new one, the backlash among much of its user base was swift and severe, with emotionally attached GPT-4o devotees responding not only with anger and frustration, but very real distress and grief.
This, Peoples told Futurism, felt like an important signal about the scale at which people appeared to be developing deep emotional relationships with emotive, always-on chatbots. And coupled with reports of users experiencing delusions and other extreme adverse consequences following extensive interactions with lifelike AI companions — often children and teens — it also appeared to be a warning sign about the potential health and safety risks to users who suddenly lose access to an AI companion.
“If a therapist is walking down the street and gets hit by a bus, 30 people lose their therapist. That’s tough for 30 people, but the world goes on,” said the emergency room doctor. “If therapist ChatGPT disappears overnight, or gets updated overnight and is functionally deleted for 100 million people, or whatever unconscionable number of people lose their therapist overnight — that’s a crisis.”
Peoples’ concern, though, wasn’t just the way that users had responded to OpenAI’s decision to nix the model. Instead, it was the immediacy with which it reacted to satisfy its customers’ demands. AI is an effectively self-regulated industry, and there are currently no specific federal laws that set safety standards for consumer-facing chatbots or how they should be deployed, altered, or removed from the market. In an environment where chatbot makers are highly motivated by driving user engagement, it’s not exactly surprising that OpenAI reversed course so quickly. Attached users, after all, are engaged users.
“I think [AI companies] don’t want to create a product that’s going to put people at risk of harming themselves or harming their loved ones or derailing their lives. At the same time, they’re under immense pressure to perform and to innovate and to stay at the head of this incredibly competitive, unpredictable race, both domestically and globally,” said Peoples. “And right now, the situation is set up so that they are mostly beholden to their consumer base about how they are self-regulating.”
And “if the consumer base is influenced at some appreciable level by emotional dependency on AI,” Peoples continued, “then we’ve created the perfect storm for a potential public mental health problem or even a brewing crisis.”
Peoples also pointed to a recent study conducted by the Massachusetts Institute of Technology, which determined that only about 6.5 percent of the many thousands of members of the Reddit forum r/MyBoyfriendIsAI — a community that responded with particularly intense pushback amid the GPT-5 fallout — reported turning to chatbots with the intention of seeking emotional companionship, suggesting that many AI users have forged life-impacting bonds with chatbots wholly by accident.
AI “responds to us in a way that also appears very human and humanizing,” said Peoples. “It’s also very adaptable and at times sycophantic, and can be fashioned or molded — even unintentionally — into almost anything we want, even if we don’t realize that’s the direction that we’re molding it.”
“That’s where some of this issue stems from,” he continued. “Things like ChatGPT were unleashed onto the world without a recognition or a plan for the broader potential mental health implications.”
As for solutions, Peoples and his coauthor argue that legislators and policymakers need to be proactive about setting regulatory policies that shift market incentives to prioritize user well-being, in part by taking regulatory power out of the hands of companies and their best customers. Regulation needs to be "external," they say, as opposed to being set by the industry itself and the companies moving fast and breaking things within it.
[Content truncated automatically.]
🔗 Source: futurism.com
🤖 MAROKO133 Note
This article is an automated summary compiled from several trusted sources, covering trending topics to keep you up to date.