📌 MAROKO133 Hot AI: DMV’s AI System Says Woman Doesn’t Have a Human Face
When it comes to inclusion, artificial intelligence doesn’t have the best track record. As vast algorithms trained on troves of data scraped from the internet, AI models are inherently predisposed to reproducing human social biases.
It’s no surprise, then, that AI has a real knack for discrimination, often exacerbating prejudice on the basis of race, gender, and sexuality. And as one woman discovered during a demeaning trip to the DMV, AI systems are also quite capable of discriminating against those with disabilities.
In a story first reported by Wired, Connecticut native Autumn Gardiner’s task seemed easy, mundane — she had recently gotten married, and was at the DMV to update her license. To do so, officials needed to take a new photo, a simple process which quickly turned into a nightmare thanks to the state government’s AI-powered ID verification program.
Gardiner, who lives with Freeman-Sheldon syndrome — a rare genetic disorder affecting muscles around the face, particularly the mouth — says that one by one, her photos were all rejected by the DMV’s ID software. It became a spectacle, she told Wired. “Everyone’s watching,” she said. “They’re taking more photos.”
“It was humiliating and weird,” she continued. “Here’s this machine telling me that I don’t have a human face.”
Freeman-Sheldon causes what’s referred to as a visible difference. While there is no authoritative list defining what is and isn’t a visible difference, the advocacy group Changing Faces describes it as “a scar, mark, or condition that makes you look different.” This can include anyone with birthmarks, burns, cancer, craniofacial conditions, hair loss, skin conditions like vitiligo, or inherited conditions like neurofibromatosis.
Around half a dozen people with visible differences spoke to Wired to chronicle the ways AI software is increasingly complicating their lives. The frustrations are endless, ranging from social media selfie filters to facial verification for banking apps.
“In many countries, facial recognition is increasingly a part of everyday life, but this technology is failing our community,” Nikki Lilly, a representative of the group Face Equality International, testified in front of the United Nations earlier this year.
As more and more parts of life become locked away behind these systems, serious questions emerge: who benefits from them, and whose lives are made more challenging?
More on facial recognition: Police Use Busted Facial Recognition System, Arrest Random Man and Accuse Him of Horrible Crime
🔗 Source: futurism.com
📌 MAROKO133 Hot AI: China’s New Humanoid Robot Handles Raw Egg With World-First Cross-Shaped Wrist
Agibot has officially launched its next-generation industrial-grade embodied robot, the Agibot G2, specifically for commercial environments.
The G2 humanoid integrates advanced motion systems, multimodal AI interaction, and autonomous operation capabilities to support diverse applications, from factory production and logistics to guided tours.
“We envision Agibot G2 relieving humans from repetitive, labor-intensive, and safety-risk-prone work, enabling people to focus on more creative tasks,” said Dr. Yao Maoqing, Partner at Agibot, Senior Vice President, and President of the Embodied Business Unit, in a company press statement.
Enhanced design for industrial operations
The G2 represents a major upgrade over the model Agibot introduced in November 2023. Notably, it brings together three hardware advancements that make it a new benchmark for industrial-grade embodied robots.
Equipped with high-performance joint actuators and multiple types of sensors, it offers full-scene omnidirectional obstacle avoidance, ensuring smooth movement and safe interaction in dynamic environments.
Another key feature of the new design is its 3-degree-of-freedom waist, which allows the robot to perform human-like movements such as bending, twisting, and side-swaying with precision. This design is complemented by what Agibot calls the world’s first cross-shaped wrist force-controlled arm.
The arm uses high-precision torque sensors distributed across its length to detect external forces in real time, adjusting its motion through impedance control for smoother, more natural operations.
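Agibot has not published its controller, but the idea of impedance control described above can be illustrated with a minimal one-degree-of-freedom sketch: a measured external force enters the dynamics M·a = F_ext − B·v − K·(x − x_des), so the arm yields to contact instead of rigidly tracking its setpoint. All gains and names here are illustrative assumptions, not Agibot parameters.

```python
# Minimal 1-DOF impedance-control sketch (illustrative assumption only;
# not Agibot's published controller). A sensed external force nudges the
# arm away from its setpoint, so it "gives way" instead of fighting contact.

def impedance_step(x, v, x_des, f_ext, m=1.0, b=20.0, k=100.0, dt=0.001):
    """One integration step of m*a = f_ext - b*v - k*(x - x_des)."""
    a = (f_ext - b * v - k * (x - x_des)) / m
    v_new = v + a * dt          # semi-implicit Euler integration
    x_new = x + v_new * dt
    return x_new, v_new

# A steady 5 N push displaces the arm by roughly f_ext / k = 5 / 100 = 0.05 m,
# which is exactly the compliant "give" that impedance control provides.
x, v = 0.0, 0.0
for _ in range(20000):          # simulate 20 s at 1 kHz
    x, v = impedance_step(x, v, x_des=0.0, f_ext=5.0)
print(round(x, 3))  # → 0.05
```

Stiffer gains (larger `k`) make the arm track more precisely; softer gains make contact gentler, which is the trade-off a torque-sensing wrist lets the robot tune in real time.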
The G2 has a dual-battery hot-swappable system and autonomous charging capability, enabling uninterrupted 24/7 operation, a critical feature for production lines that cannot afford downtime. Its rapid deployment toolchain further simplifies setup, allowing even non-specialists to configure and deploy the robot quickly.
Smarter AI interaction and onboarding
What sets the Agibot G2 apart is its combination of embodied intelligence and advanced AI systems. Unlike traditional industrial robots that perform pre-programmed, one-way operations, the G2 emphasizes adaptive, interactive performance powered by Agibot’s proprietary large models GO-1 and GE-1.
GO-1 is structured with a “three-layer brain” comprising a Vision-Language Model (VLM) for perception, a Latent Planner for task planning, and an Action Expert for execution. This layered design allows the robot to “understand a single command and complete an entire task,” significantly improving efficiency and usability.
The GE-1 model complements this by introducing predictive abilities that allow the robot to “rehearse” its actions in a virtual environment, making it capable of handling long-sequence and complex operations with greater foresight.
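The layered design described above can be sketched as a simple pipeline: perception parses a command, a planner decomposes it into sub-tasks, a rehearsal stage vets the plan before anything moves, and an action layer executes it. Agibot has not published GO-1/GE-1 interfaces, so every class and method name below is a hypothetical stand-in for the structure, not the real API.

```python
# Hedged sketch of a "three-layer brain" plus virtual rehearsal, under
# assumed interfaces -- all names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list  # ordered sub-tasks produced by the planner

class PerceptionVLM:                  # layer 1: perceive scene, parse command
    def interpret(self, image, command):
        return {"scene": image, "goal": command}

class LatentPlanner:                  # layer 2: decompose goal into sub-tasks
    def plan(self, context):
        return Plan(steps=[f"step for: {context['goal']}"])

class ActionExpert:                   # layer 3: execute low-level actions
    def execute(self, step):
        return f"executed {step}"

class VirtualRehearsal:               # GE-1-style predictive check
    def rehearse(self, plan):
        # roll the plan out in simulation; veto it if any step is malformed
        return all(isinstance(s, str) for s in plan.steps)

def run(image, command):
    ctx = PerceptionVLM().interpret(image, command)
    plan = LatentPlanner().plan(ctx)
    if not VirtualRehearsal().rehearse(plan):   # rehearse before acting
        return []
    return [ActionExpert().execute(s) for s in plan.steps]

print(run("camera-frame", "pick up the egg"))
```

The key design point is that a single command flows through the whole stack unassisted, and the rehearsal gate sits between planning and execution, which is what lets long-sequence tasks fail in simulation rather than on the factory floor.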
These AI capabilities are supported by the NVIDIA Jetson Thor T5000 platform, which provides up to 2070 TFLOPS (FP4) of onboard computing power.
This allows real-time decision-making with latency under 10 milliseconds, enabling the G2 to process multiple sensor streams locally and run large-scale AI models directly on the device. Developers can train and test AI models in virtual environments before deploying them to physical robots, dramatically cutting development and testing time.
Tested reliability and early deployments
Agibot says that the wheeled humanoid G2 has undergone over 130 component tests and environmental trials, including temperature extremes from -15°C to 50°C and electrostatic protection, ensuring durability in industrial conditions.
The robot has already demonstrated its capabilities across multiple real-world scenarios during live demonstrations, according to the press release.
In automotive parts production, it collaborated with humans on assembly and material handling tasks. For precision electronics, it completed delicate operations like RAM insertion within an hour of AI-assisted training. In logistics, its dexterous OmniHand allowed it to grasp parcels of different shapes and materials while navigating factory floors autonomously, with mobility suited to 95% of industrial environments.
The G2 uses expressive, human-like gestures and 360-degree perception for interactive, safe engagements. The robot also offers SDK interfaces for customization, making it highly adaptable. Following successful trials, Agibot G2 is now formally deploying in automotive parts manufacturing and has already been adopted in consumer electronics precision production.
The company continues to develop its “1 Ontology + 3 Intelligence” framework, combining robot hardware with interaction, manipulation, and locomotion intelligence, to build a complete embodied robotics ecosystem.
Agibot aims to expand the G2’s applications into sectors such as security, inspection, education, and research, as part of its broader mission to integrate AI and robotics across industries.
🔗 Source: interestingengineering.com
🤖 MAROKO133 Note
This article is an automatic summary compiled from several trusted sources. We select trending topics so you always stay up to date.
✅ Next update in 30 minutes — a random topic awaits!