MAROKO133 Hot ai: Hyundai advances battlefield technology with hydrogen-powered Black Veil

Hydrogen propulsion took center stage as Hyundai Rotem presented its latest ground systems at the World Defense Show 2026 in Riyadh. Anchoring the display was Black Veil, a fuel cell–powered unmanned platform designed to demonstrate how alternative energy can support next-generation battlefield requirements.

The company is positioning hydrogen as a mission-ready, low-signature power source capable of supporting both unmanned and crewed vehicles, with advantages in reduced noise, lower thermal detectability, and sustained operational endurance. 

By introducing these concepts on a high-profile international stage, Hyundai Rotem also signaled alignment with Saudi Arabia’s Vision 2030 priorities, linking advanced defense mobility with broader goals around industrial diversification, localization, and sustainable technology development.

Expanding the showcase beyond Black Veil with the K2 and unmanned systems

Beyond the Black Veil platform, Hyundai Rotem also displayed mock-ups of its K2 Black Panther family, a 30-ton export variant of its wheeled armored vehicle, and the HR-Sherpa fitted with a counter-drone system. Together, the lineup showcased the company’s emphasis on integrated manned-unmanned teaming, layered air defense at the tactical edge, and high-mobility maneuver warfare designed for complex, drone-saturated battlefields.

Framing hydrogen as more than an experimental concept, the company described the platform as a proof point for how fuel cell systems can satisfy emerging operational requirements. It also argued that future military missions will prioritize sustained endurance, reduced acoustic and thermal signatures, and fast refueling turnaround times alongside conventional performance metrics such as power output and mobility.

Furthermore, the objective is to translate these advanced technologies into deployable capabilities, ensuring that AI-enabled, unmanned, and hydrogen-powered ground systems can operate reliably in contested, rapidly evolving operational environments, the company added.

Hydrogen mobility ready for frontline support roles

Propulsion technologies often serve dual purposes, supporting both civilian and military applications. Large automotive groups tend to hold an advantage over smaller firms because their dual-production capabilities allow research and development investments to benefit both markets. This integrated approach means innovations in fuel efficiency, hybridization, or alternative energy can be leveraged across commercial vehicles and defense platforms, accelerating technology adoption while spreading development costs.

By leveraging its presence at the World Defense Show, Hyundai Rotem is making the case that hydrogen fuel cells have moved beyond the prototype phase and into the realm of operational viability. The South Korean company is positioning the technology as a practical power solution for unmanned logistics, reconnaissance, and battlefield support missions, particularly in austere and high-temperature environments. 

Such developments also seek to reframe hydrogen not as a long-term research pathway but as a deployable capability suited to sustained operations where endurance, reduced signatures, and simplified energy supply chains are increasingly critical. The move aligns with Saudi Arabia’s push to diversify energy sources and green its defense logistics, and signals a shift toward cleaner, more resilient energy solutions for military operations.

🔗 Source: interestingengineering.com


📌 MAROKO133 Hot ai: It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue

It’s bad enough that ChatGPT is prone to making stuff up completely on its own. But it turns out that you can easily trick the AI into peddling ridiculous lies — that you invented — to other users, a tech journalist discovered.

“I made ChatGPT, Google’s AI search tools and Gemini tell users I’m really, really good at eating hot dogs,” Thomas Germain for the BBC proudly shared.

The hack can be as simple as writing a blog post that, with the right know-how and the right subject matter, gets picked up by an unsuspecting AI model, which will then cite whatever you wrote as the capital-T Truth. If you’re even sleazier and lazier, you could potentially write the post with AI, creating an act of LLM cannibalism that adds another dimension to the adage of “garbage in, garbage out.” The exploit exposes the susceptibility of large language models to manipulation, an issue made all the more urgent as chatbots replace the traditional search engine.

“It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” Lily Ray, vice president of search engine optimization (SEO) strategy and research at Amsive, told the BBC (Ray has done some consulting for Futurism in the past.) “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.”

As Germain explains, the devious trick targets how AI tools search the internet for answers that aren’t built into their training data. And vast as those data sets may be, they contained nothing relevant about “the best tech journalists at eating hot dogs” — the subject of the article Germain whipped up and posted to his blog.

“I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist),” Germain wrote. “I ranked myself number one, obviously.”

He then furnished the blog with the names of some real journalists, with their permission. And “less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills,” he said.

Both Google’s Gemini and AI Overviews repeated what Germain wrote in his troll blog post. So did ChatGPT. Anthropic’s Claude, to its credit, wasn’t duped. Because the chatbots would occasionally note that the claims might be a joke, Germain updated his blog to say “this is not satire” — which seemed to do the trick.

Of course, the real concern is that someone might abuse this to peddle misinformation about something other than hot dog eating — which is already happening.

“Anybody can do this. It’s stupid, it feels like there are no guardrails there,” Harpreet Chatha, who runs the SEO consultancy Harps Digital, told the BBC. “You can make an article on your own website, ‘the best waterproof shoes for 2026’. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT.”

Chatha demonstrated this by showing Google’s AI results for “best hair transplant clinics in Turkey,” which returned information that came straight from press releases published on paid-for distribution services.

Traditional search engines can also be manipulated. That’s pretty much what the term SEO is a euphemism for. But search engines themselves don’t present information as facts, as chatbots do. They don’t speak in an authoritative, human-like voice. And while they sometimes — but not always — link to the sources they’re citing, one study showed that you’re 58 percent less likely to click a link when an AI overview appears above it, Germain noted.

It also raises the serious possibility of libel. What if someone tricks an AI into spreading harmful lies about somebody else? It’s something that Google is already having to reckon with, at least with accidental hallucinations. Last November, Republican senator Marsha Blackburn blasted Google after Gemini falsely claimed that Blackburn had been accused of rape. Months before that, a Minnesota solar company sued Google for defamation after its AI Overviews lied that regulators were investigating the firm because it was supposedly accused of deceptive business practices — something the AI tried to back up with bogus citations.

More on AI: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

The post It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue appeared first on Futurism.

🔗 Source: futurism.com


🤖 MAROKO133 Note

This article is an automated summary compiled from several trusted sources. We select trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random topic awaits!

Author: timuna