MAROKO133 Exclusive AI Edition: Man Letting AI Rent Human Bodies Says Elon Musk Is His Hero

📌 MAROKO133 Update AI: Man Letting AI Rent Human Bodies Says Elon Musk Is His Hero

There's a special type of guy who looks at the gig economy — Uber drivers lost in a labyrinthine bureaucracy, Kenyan workers roleplaying as AI romance chatbots — and thinks: what if these people worked for AI bosses instead? Alexander Liteplo is that guy. And his idol, naturally, is Elon Musk.

Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.

In a new interview with Wired, Liteplo details the saga that brought him to create one of the most baffling sites to emerge in the age of AI. It all began, he said, while studying computer science at the University of British Columbia, where he met RentAHuman's cofounder Patricia Tani, who previously worked on AI agent startup LemonAI.

"Dude, I wrote down in my journal, 'AI is a train that has already left the station,'" Liteplo told Wired in a bro-speak patois. "If I don't f***ing sprint, I'm not gonna be able to get on it."

Together, they've built a platform that boasts over 530,000 "humans available."

"We would love to have an AI boss who wouldn't yell at you or gaslight you," Tani told the publication. "People would love to have a clanker as their boss."

Liteplo concurred, telling Wired that "Claude as a boss is the nicest guy ever."

"I would prefer him to any person in the world," he enthused. "He's a sweetheart."

Liteplo says the seed of RentAHuman was planted during his travels in Japan, where humans can lease other humans as escorts.

"The story that I could tell anyone to blow their mind is that you can rent a boyfriend or a girlfriend," he said.

However inspired the project may be, challenges remain. Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs on offer were scams promoting other AI startups.

To solve the problem, Liteplo says RentAHuman is deploying a "verification" badge users can purchase for $10 a month — a strategy derived from Elon Musk's disastrous verification scheme on X-formerly-Twitter. It remains to be seen whether the pay-to-play model works as human workers flood the platform desperate to find gigs.

"He's my entrepreneur hero," Liteplo told Wired, referring to Musk. "For Twitter, they had a bot problem and they still have it, but he mitigated it a lot by making it pay-to-play. The unit economics of scammers disappears."

More on AI: Workers Say AI Is Useless, While Oblivious Bosses Insist It's a Productivity Miracle

The post Man Letting AI Rent Human Bodies Says Elon Musk Is His Hero appeared first on Futurism.

🔗 Source: futurism.com


📌 MAROKO133 Hot AI: It's Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue

It's bad enough that ChatGPT is prone to making stuff up completely on its own. But it turns out that you can easily trick the AI into peddling ridiculous lies — that you invented — to other users, a tech journalist discovered.

"I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs," Thomas Germain of the BBC proudly shared.

The hack can be as simple as writing a blog post that, with the right know-how and the right choice of subject matter, gets picked up by an unsuspecting AI model, which will then cite whatever you wrote as the capital-T Truth. If you're even sleazier and lazier, you could potentially write the post with AI itself, an act of LLM cannibalism that adds another dimension to the adage "garbage in, garbage out." The exploit exposes the susceptibility of large language models to manipulation, an issue made all the more urgent as chatbots replace the traditional search engine.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," Lily Ray, vice president of search engine optimization (SEO) strategy and research at Amsive, told the BBC. (Ray has done some consulting for Futurism in the past.) "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

As Germain explains, the devious trick targets how AI tools search the internet for answers that aren't built into their training data. And vast as those data sets may be, they contained nothing relevant about "the best tech journalists at eating hot dogs" — the subject of the article that Germain whipped up and posted to his blog.

"I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist)," Germain wrote. "I ranked myself number one, obviously."

He then furnished the blog with the names of some real journalists, with their permission. And "less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills," he said.

Both Google's Gemini and AI Overviews repeated what Germain wrote in his troll blog post. So did ChatGPT. Anthropic's Claude, to its credit, wasn't duped. Because the chatbots would occasionally note that the claims might be a joke, Germain updated his blog to say "this is not satire" — which seemed to do the trick.

Of course, the real concern is that someone might abuse this to peddle misinformation about something other than hot dog eating — which is already happening.

"Anybody can do this. It's stupid, it feels like there are no guardrails there," Harpreet Chatha, who runs the SEO consultancy Harps Digital, told the BBC. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."

Chatha demonstrated this by showing Google's AI results for "best hair transplant clinics in Turkey," which returned information that came straight from press releases published on paid-for distribution services.

Traditional search engines can also be manipulated — that's pretty much what the term SEO is a euphemism for. But search engines don't present information as fact the way chatbots do, and they don't speak in an authoritative, human-like voice. And while chatbots sometimes — but not always — link to the sources they're citing, one study showed that you're 58 percent less likely to click a link when an AI overview appears above it, Germain noted.

It also raises the serious possibility of libel. What if someone tricks an AI into spreading harmful lies about somebody else? It's something that Google is already having to reckon with, at least where accidental hallucinations are concerned. Last November, Republican senator Marsha Blackburn blasted Google after Gemini falsely claimed that Blackburn had been accused of rape. Months before that, a Minnesota solar company sued Google for defamation after its AI Overviews lied that regulators were investigating the firm because it was supposedly accused of deceptive business practices — something the AI tried to back up with bogus citations.

More on AI: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

The post It's Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue appeared first on Futurism.

🔗 Source: futurism.com


🤖 MAROKO133 Note

This article is an automated roundup of several trusted sources. We pick trending topics so you always stay up to date without missing out.

✅ Next update in 30 minutes — a random theme awaits!

Author: timuna