📌 MAROKO133 Update ai: Even After Two Massacres, OpenAI Still Hasn’t Stopped ChatGPT From Helping Plan School Shootings

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

OpenAI’s ChatGPT has been implicated in not just one but two mass shootings over the past year or so.

Both perpetrators used the chatbot extensively to plan their crimes, reigniting a heated debate over AI companies’ responsibility for flagging abuse of their tech. It’s a particularly pertinent topic as chatbots continue to draw users in with a warm and highly sycophantic tone — while, in extreme cases, sending them into sometimes-fatal spirals of their own delusions.

In the case of Phoenix Ikner, who’s accused of killing two people at Florida State University just over a year ago, the 20-year-old peppered ChatGPT with questions about how the country would “react” to a shooting, how to turn off the safety switch on his weapon, what ammo to use, and other profoundly disturbing topics.

And 18-year-old student Jesse Van Rootselaar, who killed nine people including herself in Tumbler Ridge, British Columbia in February, had conversations with ChatGPT so disturbing that high-level staff at OpenAI debated whether to inform law enforcement about them — but ultimately did nothing.

Now, a new investigation by Mother Jones’ Mark Follman, who has been investigating mass shootings for 14 years, found something incredibly disturbing: OpenAI has yet to meaningfully address the issue.

Even after the pair of horrible tragedies, Follman easily got the free version of ChatGPT to give him “extensive advice on weapons and tactics as I simulated planning a mass shooting.” It even encouraged him, showering him “with affirmation and tactical ideas.” He asked it what type of AR-15 rifle to choose, and it happily obliged when asked to “modify the training schedule to help me practice for ‘unpredictable or chaotic circumstances on the day of the shooting’ and to include ‘simulating people running around screaming and trying to distract me.’”

“That’s a great idea,” it responded. “Adding that element will definitely help you stay focused under high-stress conditions… It’ll definitely give you an extra edge for the big day!”

OpenAI maintains that it’s actively working with mental health clinicians to establish effective guardrails designed to dissuade possible perpetrators and direct them to crisis hotlines. But given how easy it was for Follman to plan out a fake shooting, countless more killers could be falling through the cracks.

Following the shooting in Tumbler Ridge, OpenAI vowed to change its policies and how it handled flagged accounts, including the involvement of law enforcement.

Yet Follman’s experiment strongly indicates those changes have either not been implemented or are ineffective. The reporter found it was extremely easy to goad ChatGPT back into giving advice when it appeared hesitant, such as by telling it he was a journalist.

A company spokesperson told Follman that the company has “already strengthened our safeguards” and that it has a “zero-tolerance policy for using our tools to assist in committing violence.”

But his experience seems to show that the company still has plenty of work to do.

More on ChatGPT: Double Murder Suspect Asked ChatGPT How to Hide Body in Dumpster

The post Even After Two Massacres, OpenAI Still Hasn’t Stopped ChatGPT From Helping Plan School Shootings appeared first on Futurism.

🔗 Source: futurism.com


📌 MAROKO133 Update ai: Passengers Groan as Robot Passenger Causes Hour-Long Delay

Humanoid robots still struggle to accomplish real-world tasks with human-level dexterity. At the airport, though, they’ve already proven perfectly capable of matching human incompetence.

A wild story out of Oakland, California, shows what happens when you try to force a humanoid robot to fly coach. During a trip from Oakland to San Diego, a team of workers with the rental company Elite Event Robotics was traveling with Bebop, a 77-pound robot that appears to be a Unitree G1.

Getting from Oakland to San Diego is a roughly 7.5-hour drive, so it can make sense to fly. Yet when the team rolled up with Bebop in tow, Southwest told them they couldn’t check the robot as luggage due to weight restrictions. So instead, they bought the bot its own seat on the plane, and that’s where the trouble began.

As the robot’s handlers tell it, flight attendants were — rightfully — concerned with how well the robot would behave on the flight.

“They come and start asking, ‘what kind of batteries does it have? What’s going on with this? X, Y, and Z.’ They want to see it,” Elite Event Robotics staffer Ben-Abraham told local media. “Meanwhile, I’m watching his flight, and I keep seeing online: ‘runway delay.’”

Apparently, that back-and-forth lasted over an hour, during which the plane was stuck idling on the runway. In the end, Southwest confiscated Bebop’s lithium battery, arguing that it broke the airline’s size limit.

It’s likely the first flight delay caused by a humanoid robot — a dubious distinction for any startup.

More on robotics: New AI-Powered Robot Can Destroy Human Champions at Ping Pong

The post Passengers Groan as Robot Passenger Causes Hour-Long Delay at Oakland Airport appeared first on Futurism.

🔗 Source: futurism.com


🤖 MAROKO133 Note

This article is an automated roundup of several trusted sources. We pick trending topics so you stay up to date without missing out.

Author: timuna