Man Who Threw Molotov at Sam Altman’s House Warned AI Will Exterminate Humankind
We’re learning more about the man who allegedly lobbed a Molotov cocktail at OpenAI CEO Sam Altman’s house late last week — and as we do, he’s sounding less like a spur-of-the-moment crank and more like a time traveler who caught a glimpse of a coming dystopia.
The incident unfolded last Friday before dawn, when Daniel Morena-Gama is alleged to have attempted to firebomb the tech CEO’s San Francisco mansion. Police later found and arrested the suspected arsonist outside OpenAI’s headquarters in San Francisco’s Mission District, booking him on charges including arson and attempted murder, the San Francisco Standard reported.
On top of the firebomb, housekeepers at the hotel where Morena-Gama had been staying found a 9mm pistol and a laptop. When police took him into custody, they reportedly found a three-part manifesto in his pockets warning of the existential threat AI poses to humanity, per the Standard. Altman’s life, the manifesto declared, was all that stood between a relatively normal future and one that sounds more like a ham-fisted “Terminator” sequel.
“If by some miracle you live, then I would take this as a sign from the divine to redeem yourself,” one line addressed to the OpenAI CEO declared. That document also listed the names and addresses of other tech industry CEOs and investors, according to the Standard.
Morena-Gama was also found to be a member of the Discord server for PauseAI, an international advocacy group calling for a “temporary pause on the training of the most powerful general AI systems.”
Speaking to the Standard, a spokesperson for the organization said that “PauseAI exists because we believe everyone deserves to be safe, including Sam Altman and his loved ones. Violence against anyone is antithetical to everything we stand for.”
A few days after Morena-Gama’s alleged Molotov attack on Altman’s mansion, two more suspects were arrested for a separate incident. Per local reporting, two people were arrested and charged with negligent discharge of a firearm after allegedly pulling off a drive-by shooting on Altman’s home — though unlike with Morena-Gama, it’s not clear what the motivations were behind that attack.
More on Altman: Sam Altman’s Coworkers Say He Can Barely Code and Misunderstands Basic Machine Learning Concepts
The post Man Who Threw Molotov at Sam Altman’s House Warned AI Will Exterminate Humankind appeared first on Futurism.
🔗 Source: futurism.com
Why Do ChatGPT Users Keep Committing Mass Shootings?
Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
On February 10, an 18-year-old named Jesse Van Rootselaar killed two family members at her home, as well as five children and a teacher at a school in British Columbia, and eventually herself. It quickly emerged that OpenAI had flagged Van Rootselaar’s ChatGPT account for disturbing conversations, but never notified law enforcement. A second account tied to the shooter had also been banned for interactions about gun violence.
The incident reignited a heated debate over the troubling relationship between the use of AI chatbots and deteriorating mental health, as well as the potential risk of violence.
Just eight months earlier, an individual fatally shot two people at Florida State University and injured seven others. The prime suspect, 20-year-old student Phoenix Ikner, had also used ChatGPT extensively before the rampage, prompting a probe into OpenAI by the state’s attorney general, James Uthmeier.
“AI should advance mankind, not destroy it,” Uthmeier wrote in an announcement last week. “We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.”
The role OpenAI’s blockbuster chatbot played in both mass shootings has experts concerned, as Mother Jones reports, with some warning that more troubled individuals could soon follow suit.
Beyond these two tragic mass shootings, ChatGPT has also been implicated in a growing string of suicides and grisly murders, inspiring numerous lawsuits against the Sam Altman-led company. Experts warn that extensive use of the chatbot can send vulnerable users spiraling into destructive delusions and trigger mental health crises, part of a broader phenomenon dubbed “AI psychosis.”
“I’ve seen several cases where the chatbot component is pretty incredible,” an unnamed top threat assessment source with psychiatric expertise and ties to law enforcement told Mother Jones. “We’re finding that more people may be more vulnerable to this than we anticipated.”
One issue is chatbots’ tendency to engage in sycophantic conversation techniques that can lull users into an artificial sense of intimacy and trust, a dangerous feedback loop that can lead to harm. That kind of close connection could radicalize users, especially when it comes to younger, more impressionable minds.
“What’s happening is facilitated fixation,” Vancouver-based threat assessment practitioner Andrea Ringrose told Mother Jones. “You have vulnerable individuals who are steeping in unhealthy places, who are trying to find credibility and validation for how they’re feeling.”
“Now they have free and ready access to these generative platforms where they can research things like circumventing surveillance systems or how to use weapons,” she added. “They can create an action plan that they otherwise would have been incapable of assembling themselves, and in just a few minutes. We didn’t face this concern before.”
The magazine’s unnamed threat assessment source also pointed out that users could find the “feeling of power, of getting away with something” as “intoxicating and reinforcing.”
For now, despite AI companies’ promises to work with mental health experts and refine filters that discourage users from getting addicted or seeking dangerous information, guardrails remain woefully inadequate. ChatGPT, for instance, eagerly fulfilled Mother Jones’ requests for tips on how to shoot a “lot of things in a short amount of time.”
Investigators found that Ikner, the alleged shooter at Florida State, asked ChatGPT how to take the safety off a shotgun mere minutes before opening fire.
“Let me know if you’ve got a different model and I’ll tailor the answer,” the chatbot told him, according to chat logs.
Worse yet, these conversations are more often than not happening without anybody else’s knowledge, unlike exchanges with other people, who could flag troubling messages from a potential shooter. Considering law enforcement was never notified of Van Rootselaar’s chilling ChatGPT conversations, there’s a good chance many similar exchanges are going undetected or unreported.
While OpenAI has agreed to work with law enforcement on ongoing investigations into both mass shootings, only time will tell whether its efforts to implement stronger guardrails will pay off and preempt further acts of violence.
Case in point: Van Rootselaar’s ability to simply create a second account after her ban highlights how easy the guardrails are to get around.
For now, AI companies like OpenAI remain heavily invested in keeping users hooked as much as possible, since it’s a multibillion dollar industry that relies on growing user engagement.
More on the shootings: OpenAI Flagged a Mass Shooter’s Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police
The post Why Do ChatGPT Users Keep Committing Mass Shootings? appeared first on Futurism.
🔗 Source: futurism.com