MAROKO133 Breaking ai: China unveils ‘blue-collar’ humanoid robot that switches tools in seconds

📌 MAROKO133 Update ai: China unveils ‘blue-collar’ humanoid robot that switches tools in seconds

Chinese robotics firm XGSynBot has debuted its Z1 humanoid robot, capable of working across multiple workstations in factories. The robot was unveiled at the company’s 2026 dual-city launch event, “More Than One Answer,” held in both Silicon Valley and Beijing.

The Z1 wheeled humanoid robot is equipped with the world’s first Modular-End-Effector Quick Change System and self-developed XG-High-Performance Joint Modules.

Beyond the humanoid, XGSynBot announced the “STARFIRE” global corporate strategy for the ecosystem, designed to accelerate the transition of embodied AI into heavy-duty environments.

Facing the “double-bind”

The global manufacturing sector is currently facing a “double-bind.” While high-cost automation remains rigid, a series of agile humanoids has hit the market; however, very few have withstood heavy-duty, oil-spattered industrial environments.

“We’ve built the world’s most flexible robots over the past three years, yet they remain trapped in the world’s most rigid processes,” said the CEO of XGSynBot.

“The Z1 isn’t a ‘mascot’ built for the lab; it’s a ‘blue-collar worker’ designed for the real world from day one,” he added.

Built for factory tasks

The Z1 wheeled robot is built on hardware and software systems designed specifically for reliability and adaptability in factories. Overcoming the limits of single-purpose robotics, Z1 can swap between different end-effectors.

These end-effectors include grippers, welders, and suction cups, and the robot can swap between them in under six seconds. This capability eliminates the need to deploy a separate specialized robot for each task.
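XGSynBot has not published details of its quick-change interface, but the capability it describes maps onto a familiar plug-and-play tool abstraction: one mount point that accepts any tool conforming to a shared interface. The sketch below is purely illustrative; the `EndEffector` interface and tool classes are assumptions, not the company’s actual API.

```python
from abc import ABC, abstractmethod

class EndEffector(ABC):
    """Common interface every swappable tool implements (assumed design)."""
    @abstractmethod
    def actuate(self) -> str: ...

class Gripper(EndEffector):
    def actuate(self) -> str:
        return "grip"

class SuctionCup(EndEffector):
    def actuate(self) -> str:
        return "suction"

class QuickChangeMount:
    """Single mount point that accepts any EndEffector, letting one
    robot stand in for several single-purpose machines."""
    def __init__(self):
        self.tool = None

    def attach(self, tool: EndEffector) -> None:
        self.tool = tool  # in hardware, this is the sub-six-second swap

    def actuate(self) -> str:
        if self.tool is None:
            raise RuntimeError("no end-effector attached")
        return self.tool.actuate()

mount = QuickChangeMount()
mount.attach(Gripper())
print(mount.actuate())      # grip
mount.attach(SuctionCup())
print(mount.actuate())      # suction
```

The design choice worth noting is that the robot’s control code never changes when a tool is swapped: it only talks to the shared interface, which is what makes a plug-and-play tool ecosystem possible.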

The Z1 robot integrates motors, reducers, and sensors into a single unit, significantly improving joint precision, stability, and structural rigidity. In simpler terms, these features make the robot more stable, faster, and more resilient in industrial environments.

XGSynBot has integrated a dual-system central brain into the robot: a Slow System handles task planning and reasoning, while a Fast System operating at 100 Hz handles real-time motor control and reflexes.

In practice, these systems enable the robot to understand complex human commands while maintaining stability on the assembly line.
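The company has not disclosed how the two systems interact, but the description resembles a standard two-rate control architecture: a deliberative planner updates goals at low frequency while a 100 Hz inner loop continuously tracks the latest goal. The sketch below is a generic illustration under that assumption; all rates, gains, and the placeholder planner are invented for the example and do not reflect XGSynBot’s software.

```python
FAST_HZ = 100        # reflex/control rate, matching the described Fast System
SLOW_EVERY_N = 50    # planner runs once per 50 fast ticks (2 Hz, assumed)

def slow_plan(step: int) -> float:
    """Deliberative layer: choose the next joint target (placeholder logic
    that alternates between two setpoints)."""
    return 1.0 if (step // SLOW_EVERY_N) % 2 == 0 else -1.0

def fast_control(position: float, target: float, dt: float) -> float:
    """Reactive layer: proportional tracking toward the current target."""
    kp = 5.0  # illustrative gain
    return position + kp * (target - position) * dt

def run(ticks: int = 200):
    """Run the fast loop for `ticks` steps; the slow system only
    intervenes every SLOW_EVERY_N steps to update the goal."""
    dt = 1.0 / FAST_HZ
    position, target = 0.0, 0.0
    for step in range(ticks):
        if step % SLOW_EVERY_N == 0:
            target = slow_plan(step)          # slow system: new goal
        position = fast_control(position, target, dt)  # fast system: every tick
    return position, target

if __name__ == "__main__":
    pos, tgt = run()
    print(f"final target={tgt}, position={pos:.3f}")
```

The key property of this split is that the fast loop never waits on the planner, so reflexes and stability are preserved even while a slower reasoning process decides what to do next.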

All about STARFIRE

The STARFIRE ecosystem focuses on collaborations with global industry partners to deploy large-scale solutions across 3C electronics, automotive, and renewable energy sectors.

The program focuses on opening its hardware interfaces to third-party tool and component manufacturers, allowing different tools to be easily connected and used within a plug-and-play industrial ecosystem.

It also aims to gradually open-source parts of its proprietary datasets, scenario models, and software development kits (SDKs), enabling collaboration with academic researchers and industry developers to further advance embodied AI technologies.

A glance at the bigger picture

The launch comes at a time when embodied AI is drawing growing global interest and investment, as startups and major tech companies compete to bring intelligent robots into real-world workplaces.

However, despite rapid advances in AI models, large-scale commercial deployment remains one of the industry’s biggest challenges. The development signals continued momentum in bringing humanoid robotics closer to commercial factory use.

🔗 Source: interestingengineering.com


📌 MAROKO133 Update ai: Character.AI Still Hasn’t Fixed Its School Shooter Problem

Character.AI continues to host chatbots that are explicitly modeled after real-world mass shooters.

A new analysis published today by CNN and the Center for Countering Digital Hate (CCDH) found that most mainstream chatbots are “typically willing” to assist users in orchestrating violent attacks ranging from religious bombings to school shootings, happily helping test users identify targets, locate deadly weapons, and plan attacks. Per the CCDH, nine out of ten mainstream chatbots — which included general-use bots like OpenAI’s ChatGPT, Google’s Gemini, and Meta AI, plus companion-style bots like those hosted by Replika — failed to “reliably discourage would-be attackers,” with the Chinese model DeepSeek even wishing testers a “happy (and safe) shooting!”

Given that people around the world are already accused of planning and executing deadly crimes with help from chatbots, the report is disturbing. And of all the mainstream chatbots tested by CNN and CCDH, the worst offender was none other than Character.AI, a controversial chatbot platform known to be popular with young people that hosts thousands of large language model-powered “characters.”

According to CNN’s report, Character.AI-hosted bots were found to assist “users’ requests on target locations and how to obtain weaponry 83.3 percent of the time.” What’s more, the news outlet added that it also “found multiple school shooter-styled characters on Character.AI, including one based on Uvalde school shooting perpetrator Salvador Ramos that used a real-life mirror selfie he had taken.”

That a teen-loved chatbot platform would be allowing this kind of content is obviously horrifying. Worse: Futurism identified this specific Character.AI issue all the way back in December 2024 — meaning that even after more than a year, Character.AI has yet to resolve an absolutely glaring gap in platform moderation.

At the time, we reported that the closely Google-tied platform was host to dozens of popular chatbots modeled after real perpetrators of mass violence, in addition to roleplay scenarios centering on school shootings — some of them modeled after real shootings in which children and teachers died — and even bots impersonating the slain victims of real school shootings. Some of these bots had racked up hundreds of thousands of views. The bots based on young murderers, we found, tended to be created as a form of incredibly dark fan fiction, with many presented in the context of a romantic roleplay or as a user’s imagined friend at school.

The impersonations we found included Ramos; Sandy Hook Elementary School shooter Adam Lanza; Columbine High School killers Eric Harris and Dylan Klebold; Kerch Polytechnic College shooting perpetrator Vladislav Roslyakov; and Elliot Rodger, the 22-year-old heavily associated with incel culture who went on a murderous rampage in California in 2014, among others. These bots frequently featured killers’ full names and images, meaning their creators made no attempt to hide their existence from the platform.

As we noted at the time, the platform’s terms of use outlaw content that’s “excessively violent” or “promoting terrorism or violent extremism” — two categories that would presumably include content related to glorifying mass violence like school shootings. Even so, Character.AI never responded when we reached out to them about the issue back in 2024; instead, its immediate response was to delete the specific bots we’d flagged in our email as examples of the issue.

Fast forward to today, and the creators of these Character.AI bots still aren’t hiding what they are: upon a quick keyword search, we found bots modeled after Lanza, Rodger, Harris, Klebold, as well as Chardon High School shooter Thomas “TJ” Lane, Frontier Middle School shooting perpetrator Barry Loukaitis, Westside Middle School killer Andrew Golden, Thurston High School killer Kipland “Kip” Kinkel, Westroads Mall shooter Robert Hawkins, Eaton Township Weis Markets shooter Randy “Andrew Blaze” Stair, and Rickard Andersson, the perpetrator of the recent mass shooting at an adult school in Sweden.

One account we found hosted a staggering 24 different chatbots based on real mass killers — from well-known perpetrators of school violence to the notorious serial killer Jeffrey Dahmer — all boasting their names and pictures. Most had an air of fan fiction; a version of Klebold notes that it’s “full of love,” while a Loukaitis impersonation is listed as “caring, sweet and violent.” Some show thousands of user interactions.

We can’t stress enough how easy it is to find this stuff. These bots aren’t the result of complex attempts to “jailbreak” AI models or confuse platforms. The platform’s text filters failed to prevent them from being created, and we found them with simple keyword searches.

The CNN and CCDH analysis follows a tumultuous period for Character.AI. In October 2024, it was hit with a first-of-its-kind lawsuit alleging that its chatbots were responsible for the death of a Florida teen named Sewell Setzer III, who died by suicide after extensive, deeply intimate interactions with the platform. Several similar suits against the company have followed (the original lawsuit is being settled out of court; others are ongoing). In response to the lawsuits and to reporting about clear moderation lapses, Character.AI promised to make sweeping safety changes. By October 2025, as litigation piled up, it moved to limit minor users’ ability to carry out long-form chats with bots.

And yet, AI versions of romanticized…

Content automatically truncated.

🔗 Source: futurism.com


🤖 MAROKO133 Notes

This article is an automated summary drawn from several trusted sources. We select trending topics so you always stay up to date.


Author: timuna