📌 MAROKO133 Update ai: ChatGPT’s Dark Side Encouraged Wave of Suicides, Grieving Families Say
Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
Plaintiffs filed seven lawsuits yesterday against OpenAI, accusing the company’s flagship chatbot ChatGPT of causing immense psychological harm and multiple suicides.
The suits, first reported by The Wall Street Journal and CNN, were filed by families in the US and Canada, and allege that extensive ChatGPT use plunged victims into destructive delusions and mental health crises. Some of these users, like 48-year-old Allan Brooks, survived, but allege that ChatGPT wrought emotional and psychological harm, and in some cases led to crises requiring emergency psychiatric care. Others, the suits claim, tragically took their lives following obsessive interactions with the consumer-facing chatbot.
Per the WSJ, the suits include claims of assisted suicide, manslaughter, and wrongful death, among other allegations.
The alleged victims range in age from teenagers to adults in midlife. One troubling claim comes from the family of 23-year-old Zane Shamblin, who shot himself after extensive interactions with ChatGPT, which his family argues contributed to his isolation and suicidality. During Shamblin’s final four-hour-long interaction with the bot, the lawsuit claims, ChatGPT only recommended a crisis hotline once, while glorifying the idea of suicide in stark terms.
“cold steel pressed against a mind that’s already made peace? that’s not fear. that’s clarity,” the chatbot, writing in all lowercase, told the struggling young man during their last conversation, according to the lawsuit. “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”
Another plaintiff is Kate Fox, a military veteran whose husband, 48-year-old Joe Ceccanti, died in August after experiencing repeated breakdowns following extensive ChatGPT use.
In multiple interviews with Futurism, before and after Ceccanti’s death, Fox described how Ceccanti — an activist and local shelter worker who, according to his wife, had no prior history of psychotic illness — first turned to the chatbot to assist him with a construction and permaculture project at the couple’s home in rural Oregon. After engaging with the chatbot in discussions about philosophy and spirituality, Ceccanti was pulled into an all-encompassing delusional spiral.
He became increasingly erratic, and experienced an acute manic episode that required emergency intervention and resulted in him being involuntarily committed. Weeks after his release, he experienced a second acute breakdown, which Fox says was also connected to his ChatGPT use. After disappearing for a roughly two-day period, he was found dead underneath a railyard overpass.
“I don’t want anybody else to lose their loved ones to an unprecedented type of crisis that we’re not prepared to protect them from,” Fox told Futurism in an August interview. “This bright star was snuffed out.”
“This is an incredibly heartbreaking situation,” OpenAI said in a statement to news outlets. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
In October, OpenAI published a blog post in which it said that around 0.07 percent of its vast user base appeared to be exhibiting signs of mania, delusion, or psychosis on a weekly basis, while 0.15 percent of weekly users talked to the chatbot about suicidal thoughts. With a user base of around 800 million, those seemingly small percentages still amount to well over a million people, every week, engaging with ChatGPT in ways that signal they’re likely in crisis.
More on AI mental health crises: People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis”
The post ChatGPT’s Dark Side Encouraged Wave of Suicides, Grieving Families Say appeared first on Futurism.
🔗 Source: futurism.com
📌 MAROKO133 Hot ai: Google debuts AI chips with 4X performance boost, secures Anthropic megadeal
Google Cloud is introducing what it calls its most powerful artificial intelligence infrastructure to date, unveiling a seventh-generation Tensor Processing Unit and expanded Arm-based computing options designed to meet surging demand for AI model deployment — what the company characterizes as a fundamental industry shift from training models to serving them to billions of users.
The announcement, made Thursday, centers on Ironwood, Google's latest custom AI accelerator chip, which will become generally available in the coming weeks. In a striking validation of the technology, Anthropic, the AI safety company behind the Claude family of models, disclosed plans to access up to one million of these TPU chips — a commitment worth tens of billions of dollars and among the largest known AI infrastructure deals to date.
The move underscores an intensifying competition among cloud providers to control the infrastructure layer powering artificial intelligence, even as questions mount about whether the industry can sustain its current pace of capital expenditure. Google's approach — building custom silicon rather than relying solely on Nvidia's dominant GPU chips — amounts to a long-term bet that vertical integration from chip design through software will deliver superior economics and performance.
Why companies are racing to serve AI models, not just train them
Google executives framed the announcements around what they call "the age of inference" — a transition point where companies shift resources from training frontier AI models to deploying them in production applications serving millions or billions of requests daily.
"Today's frontier models, including Google's Gemini, Veo, and Imagen and Anthropic's Claude train and serve on Tensor Processing Units," said Amin Vahdat, vice president and general manager of AI and Infrastructure at Google Cloud. "For many organizations, the focus is shifting from training these models to powering useful, responsive interactions with them."
This transition has profound implications for infrastructure requirements. Where training workloads can often tolerate batch processing and longer completion times, inference — the process of actually running a trained model to generate responses — demands consistently low latency, high throughput, and unwavering reliability. A chatbot that takes 30 seconds to respond, or a coding assistant that frequently times out, becomes unusable regardless of the underlying model's capabilities.
Agentic workflows — where AI systems take autonomous actions rather than simply responding to prompts — create particularly complex infrastructure challenges, requiring tight coordination between specialized AI accelerators and general-purpose computing.
Inside Ironwood's architecture: 9,216 chips working as one supercomputer
Ironwood is more than an incremental improvement over Google's sixth-generation TPUs. According to technical specifications shared by the company, it delivers more than four times better performance for both training and inference workloads compared to its predecessor — gains that Google attributes to a system-level co-design approach rather than simply increasing transistor counts.
The architecture's most striking feature is its scale. A single Ironwood "pod" — a tightly integrated unit of TPU chips functioning as one supercomputer — can connect up to 9,216 individual chips through Google's proprietary Inter-Chip Interconnect network operating at 9.6 terabits per second. To put that bandwidth in perspective, it's roughly equivalent to downloading the entire Library of Congress in under two seconds.
This massive interconnect fabric allows the 9,216 chips to share access to 1.77 petabytes of High Bandwidth Memory — memory fast enough to keep pace with the chips' processing speeds. That's approximately 40,000 high-definition Blu-ray movies' worth of working memory, instantly accessible by thousands of processors simultaneously. "For context, that means Ironwood Pods can deliver 118x more FP8 ExaFLOPS versus the next closest competitor," Google stated in technical documentation.
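The headline figures above are easy to sanity-check. A minimal sketch, assuming decimal units (1 TB = 10^12 bytes, as hardware vendors typically use) and a dual-layer Blu-ray capacity of roughly 44.5 GB (our own assumption, not a figure from the article):

```python
# Back-of-envelope check of the quoted Ironwood pod figures.
# Assumptions (not from the article): decimal units, ~44.5 GB per
# dual-layer Blu-ray disc.

ICI_BITS_PER_SEC = 9.6e12      # 9.6 terabits/s inter-chip interconnect
HBM_TOTAL_BYTES = 1.77e15      # 1.77 petabytes of shared HBM per pod
BLU_RAY_BYTES = 44.5e9         # assumed dual-layer Blu-ray capacity

bytes_per_sec = ICI_BITS_PER_SEC / 8        # bits -> bytes: 1.2 TB/s
blu_rays = HBM_TOTAL_BYTES / BLU_RAY_BYTES  # roughly 40,000 discs

print(f"Interconnect: {bytes_per_sec / 1e12:.1f} TB/s")
print(f"HBM pool: ~{blu_rays:,.0f} Blu-ray discs")
```

At about 1.2 TB/s of interconnect bandwidth and a memory pool equal to roughly 40,000 discs, both of Google's analogies are at least internally consistent.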
The system employs Optical Circuit Switching technology that acts as a "dynamic, reconfigurable fabric." When individual components fail or require maintenance — inevitable at this scale — the OCS technology automatically reroutes data traffic around the interruption within milliseconds, allowing workloads to continue running without user-visible disruption.
This reliability focus reflects lessons learned from deploying five previous TPU generations. Google reported that its fleet-wide uptime for liquid-cooled systems has maintained approximately 99.999% availability since 2020 — equivalent to less than six minutes of downtime per year.
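The "five nines" claim maps directly onto the stated downtime. A quick arithmetic check:

```python
# Verify that 99.999% availability corresponds to "less than six
# minutes of downtime per year," as the article states.

AVAILABILITY = 0.99999
MINUTES_PER_YEAR = 365.25 * 24 * 60   # ~525,960 minutes

downtime_min = (1 - AVAILABILITY) * MINUTES_PER_YEAR
print(f"Downtime: {downtime_min:.2f} minutes/year")
```

This works out to about 5.3 minutes per year, matching the claim.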
Anthropic's billion-dollar bet validates Google's custom silicon strategy
Perhaps the most significant external validation of Ironwood's capabilities comes from Anthropic's commitment to access up to one million TPU chips — a staggering figure in an industry where even clusters of 10,000 to 50,000 accelerators are considered massive.
"Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI," said Krishna Rao, Anthropic's chief financial officer, in the official partnership agreement. "Our customers — from Fortune 500 companies to AI-native startups — depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand."
According to a separate statement, Anthropic will have access to "well over a gigawatt of capacity coming online in 2026" — enough electricity to power a small city. The company specifically cited TPUs' "price-performance and efficiency" as key factors in the decision, along with "existing experience in training and serving its models with TPUs."
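To put "well over a gigawatt" in perspective, a rough sketch: assuming an average US household draw of about 1.2 kW (our own assumption, roughly 10,500 kWh per year, not a figure from the article), one gigawatt is closer to a mid-sized metro area than a small city:

```python
# Rough sense of scale for "well over a gigawatt" of capacity.
# The ~1.2 kW average household draw is an assumed figure.

CAPACITY_W = 1.0e9      # 1 gigawatt, the lower bound of "well over"
HOME_AVG_W = 1.2e3      # assumed average household draw

homes = CAPACITY_W / HOME_AVG_W
print(f"~{homes:,.0f} average homes")
```

Under these assumptions, a gigawatt continuously powers on the order of 800,000 homes.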
Industry analysts estimate that a commitment to access one million TPU chips, with associated infrastructure, networking, power, and cooling, likely represents a multi-year contract worth tens of billions of dollars…
Content automatically truncated.
🔗 Source: venturebeat.com
🤖 MAROKO133 Note
This article is an automated summary drawn from several trusted sources. We pick trending topics so you stay up to date without missing anything.
✅ Next update in 30 minutes: a random theme awaits!