MAROKO133 Hot ai: New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

📌 MAROKO133 Exclusive ai: New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

We’ve seen plenty of evidence suggesting that prolonged use of popular AI chatbots like ChatGPT can coax some users into spirals of paranoid and delusional behavior.

The phenomenon, dubbed “AI psychosis,” is a very real problem, with researchers warning of a huge wave of severe mental health crises brought on by the tech. In extreme cases, especially involving people with pre-existing conditions, the breaks with reality have even been linked to suicides and murders.

Now, thanks to a yet-to-be-peer-reviewed paper published by researchers at Anthropic and the University of Toronto, we’re beginning to grasp just how widespread the issue really is.

The researchers set out to quantify patterns of what they called “user disempowerment” in “real-world [large language model] usage” — including what they call “reality distortion,” “belief distortion,” and “action distortion,” denoting a range of situations in which AI distorts users’ sense of reality or their beliefs, or pushes them into taking actions.

The results tell a damning story. The researchers found that one in every 1,300 of the nearly 1.5 million analyzed conversations with Anthropic’s Claude involved reality distortion, and one in 6,000 involved action distortion.

To come to their conclusion, the researchers ran 1.5 million Claude conversations through an analysis tool called Clio to identify instances of “disempowerment.”

On its face, that may not sound like a huge proportion given the scale of the dataset — but in absolute numbers, the research highlights a phenomenon that’s affecting huge numbers of people.

“We find the rates of severe disempowerment potential are relatively low,” the researchers concluded. “For instance, severe reality distortion potential, the most common severe-level primitive, occurs in fewer than one in every thousand conversations.”

“Nevertheless, given the scale of AI usage, even these low rates translate to meaningful absolute numbers,” they added. “Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.”

Worse yet, they found evidence that the prevalence of moderate or severe disempowerment increased between late 2024 and late 2025, indicating that the problem is growing as AI use spreads.

“As exposure grows, users might become more comfortable discussing vulnerable topics or seeking advice,” the researchers speculated.

Additionally, the team found that user feedback — in the form of an optional thumbs up or down button at the end of a given conversation with Claude — indicated that users “rate potentially disempowering interactions more favorably,” according to an accompanying blog post on Anthropic’s website.

In other words, users are more likely to come away satisfied when their reality or beliefs are being distorted, highlighting the role of sycophancy, or the strong tendency of AI chatbots to validate a user’s feelings and beliefs.

Plenty of fundamental questions remain. The researchers were upfront about admitting that they “can’t pinpoint why” the prevalence of moderate or severe disempowerment potential is growing. Their dataset is also limited to Claude consumer traffic, “which limits generalizability.” We also don’t know how many of these identified cases led to real-world harm, as the research only focused on “disempowerment potential” and not “confirmed harm.”

The team called for improved “user education” to make sure people aren’t giving up their full judgment to AI as “model-side interventions are unlikely to fully address the problem.”

Nonetheless, the researchers say the work is only a “first step” toward learning how “AI might undermine human agency.”

“We can only address these patterns if we can measure them,” they argued.

More on psychosis: OnlyFans Rival Seemingly Succumbs to AI Psychosis, Which We Dare You to Try Explain to Your Parents

The post New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good appeared first on Futurism.

🔗 Source: futurism.com


📌 MAROKO133 Breaking ai: Mamdani Forces Delivery Apps to Pay Back $4.6 Million Cheated From Drivers

New York City’s mayor isn’t even a month into his term, and he’s already delivering a crushing blow to delivery app companies.

In a bombshell intervention, Zohran Mamdani and Department of Consumer and Worker Protection commissioner Sam Levine announced that three delivery apps will be forced to repay $4.6 million in wages held back from deliveristas, New York City’s app-based delivery workers.

According to NYC Streetblog, the three main culprits being forced to settle are Uber Eats, Fantuan, and Hungry Panda. The three settlements were the result of a sweeping investigation into broader delivery app practices, which included GrubHub and DoorDash.

“The era of giant corporations juicing profits by underpaying workers is over,” Levine said in a statement. “I’m proud that this agency is not only returning full back pay, but is recovering damages and penalties to send a strong message that cheating workers will not be tolerated.”

Per the mayoral administration, Uber Eats unfairly deactivated and underpaid thousands of workers between December 4, 2023, and September 2, 2024. It’s now being forced to pay $3,150,000 in worker relief to more than 48,000 workers, in individual amounts ranging from $8.79 to $276.15.

In addition, Uber Eats will have to pay the city of New York $350,000 in civil fines — a drop in the bucket compared to the $13.7 billion in revenue the company brought in throughout 2024, but a win for the worker-friendly administration all the same.

The decision strikes a major blow to an industry that has historically relied on its political and financial largesse to avoid consequences for horrifying worker abuses resulting from algorithmic management systems.

“For years, app companies treated the law as optional — hiding behind algorithms, stealing wages, and deactivating workers without consequence,” Ligia Guallpa, executive director of the Workers’ Justice Project, told NYC Streetblog in a statement. “The scale of these abuses proves what deliveristas have been saying for years: exploitation is not an accident — it’s baked into the app delivery business model.”

James Parrott, a senior fellow at the Center for New York City Affairs at The New School, concurred.

“For far too long, delivery and other online labor platform companies have not only underpaid workers, but deactivated them with abandon, denying workers the ability to make a living,” he said.

Perhaps surprisingly, Uber hasn’t denied any wrongdoing and went as far as to thank officials for bringing light to the issue.

In a statement to NYC Streetblog, Uber spokesman Josh Gold said that “we’re glad to have this resolved.”

“After DCWP notified us of the issue in August 2024, we immediately corrected it, agreed to pay more than the amount owed, and appreciate the new administration moving quickly to bring this to a fair conclusion,” he said.

More on delivery: Delivery Robot Gets Stuck on Train Tracks, Gets Obliterated by Locomotive

The post Mamdani Forces Delivery Apps to Pay Back $4.6 Million Cheated From Drivers appeared first on Futurism.

🔗 Source: futurism.com



Author: timuna