MAROKO133 Exclusive AI: We Talked to a Writer Accused of Publishing An AI-Generated Essay


Was AI used to produce a personal essay that wound up in the pages of the New York Times? The answer is complicated.

The writer Kate Gilgan found herself at the center of a literary scandal last month when, on social media, another writer accused her of using AI to write an emotional first-person essay about losing custody of her young son at the height of her alcoholism. The piece had been published in the NYT's famously competitive "Modern Love" column back in October. The accusation came without any hard evidence: the writer who made it, The Lit Mag's Becky Tuch, pointed only to the style of Gilgan's article. Others quickly piled on, and soon much of literary social media was swarming with speculation and analyses via AI content detectors (which, we should note, are known to be unreliable).

Gilgan is pretty offline, she told Futurism — so it wasn't until journalists started asking her about the controversy that she realized there was one at all.

"I'm actually not on Twitter or X or whatever that is," said Gilgan, who spoke to us from her home in the Western Canadian province of Saskatchewan. But she "wasn't that worried," she said, "because AI wasn't used to generate that content."

That contention, it turns out, is a bit semantic. As Gilgan conceded to The Atlantic, she did make use of a variety of chatbots — ChatGPT, Claude, Copilot, and Perplexity — for conceptualizing and editing the piece, though she denied copying and pasting anything directly from an AI into her essay.

The situation, in other words, is messy. Though the AI accusations against her were unsubstantiated at first — they were based simply on certain rhetorical devices that chatbot-generated writing is known to favor, and which the public is clearly starting to be on the lookout for — it turned out that readers were right to be suspicious, since AI did have a prominent hand in the creation of the piece.

The controversy comes at an intensifying moment for the literary world's ongoing struggle with AI. Institutional scandals continue to abound — within the same two-week span in which the allegations against Gilgan emerged, the publishing giant Hachette pulled a buzzy new horror novel over suspicion of substantial AI use, and the NYT cut all ties with a book critic after it was discovered that his use of AI had resulted in the newspaper publishing a significantly plagiarized book review — while some writers and journalists are starting to open up about their sometimes very extensive use of AI.

To unpack it all, I wanted to talk to Gilgan myself — about how she used AI, what it means when a machine becomes a collaborator in the creative process, and where writers should draw the line.

In an interview, Gilgan maintained that the idea that she published AI slop in "Modern Love" is false. But she did use chatbots to help her craft a piece specifically for publication in the column, and there's no question that it ended up with the distinctive argot of AI. One thing was clear: AI use has turned into one of the most contentious topics in the literary community.

"I was going back and reading a lot of my earlier pieces — I guess, maybe intuitively, I was wondering, 'Oh, my God, has that happened? Has AI changed my voice?'" Gilgan told me. But "I don't think I actually worried about it, because I haven't used it to that extent."

***

Gilgan began seriously pursuing publication about ten years ago, she told us, writing about extremely personal topics like an extramarital affair she'd had and her family's experience of being trapped in Bali during the pandemic. And even before that, about 15 years ago, she tried — and failed — to write a memoir about the same experience she later explored in her "Modern Love" piece: losing custody of her young son due to alcoholism.

The problem? It wasn’t any good, she said.

"It was so full of self-pity and histrionic emotional grandeur; it was just awful," said Gilgan. "And so I stopped writing it and set it down… it just wasn't working."

A few years ago, she decided she wanted to revisit the custody battle and her subsequent path to sobriety, but this time as a novel.

"It gave me more freedom," said Gilgan. She finally finished her first draft about a year ago; the non-fiction essay published in "Modern Love," she says, was born from that.

"This essay then came out of that novel," Gilgan said. Distilling it into a shorter essay, she thought, might help her get her book published. "I thought, 'Okay, I'm going to try and leverage this. I'm going to try and market the essay to try and help bring my book to publication.'"

Gilgan was strategic. She turned to chatbots, which she says she started playing around with about two years ago, to help her craft her essay in a way she believed would appeal to the NYT's "Modern Love" editorial staff.

"Rather than sitting on Google reading through tons of other people's articles about how to get published in 'Modern Love' and 'here's what Dan Jones looks for,'" said Gilgan, referring to the column's longtime editor, "I asked AI, 'Okay, boil this down for me. Take everything — every scrap of information on the internet that you can find — to help me get this essay published in the Times.'"

Gilgan u…

Content automatically truncated.

🔗 Source: futurism.com


📌 MAROKO133 Update AI: Student Dies When Hospital Has No ICU Doctors, Calls One on Videochat Who Pronounces Him Dead Remotely, Lawsuit Claims

The parents of a 26-year-old dental student named Conor Hylton are suing a Connecticut hospital after their son died in its "telehealth" intensive care unit, where no critical care doctors were actually present, they allege in the lawsuit.

According to the wrongful death complaint filed against Yale New Haven Health, the largest healthcare provider in the state, Hylton visited the emergency room at its Bridgeport Hospital Milford Campus because of abdominal pain and vomiting on the morning of August 14, 2024. When his condition worsened, he was admitted to the hospital ICU and diagnosed with pancreatitis, dehydration, metabolic acidosis, and alcohol withdrawal, per a medical analysis cited in the suit.

Rather than receiving traditional care, however, Hylton was unwittingly plunged into a cold experiment in using remote work to offset hospital staffing shortages — a grim portent, perhaps, for an age of AI automation. During the late hours when he was admitted to the ICU, there were no ICU intensivists — the term for doctors who specialize in providing critical care — on hand, the suit alleges. Instead, the wing outsourced that care to a "tele-ICU" service, which relies on off-site intensivists.

No on-site physician assessed Hylton for hours, despite his rapidly deteriorating condition. A hospitalist — a doctor who provides general medical care for inpatients but doesn't specialize in critical care — was assigned to Hylton, but allegedly never saw him.

In the early morning after his admission, at around 4:30 AM, Hylton became unresponsive. He "slid down in bed, his eyes rolled back and he… exhibited seizure-like activity, vomited, became bradycardic and code was called," the complaint alleges, as reported by Law & Crime. "He was intubated, but he could not be resuscitated, and he was pronounced dead."

The pronouncement, according to the suit, was made by a "tele-health" provider on a video screen.

The family, meanwhile, was never notified about Hylton's condition, they claim. Had they been given a say, they never would've allowed their son to go into a "tele-ICU."

"It's a fake ICU," the family's attorney Joel Faxon told CT Insider in a recent interview. "It's not real because no patient would ever consent if they told… they're not going to have a doctor in here. They're going to be on the tube."

In the lawsuit, the parents argue the ICU "violated hospital policy because no on-site doctor assessed Mr. Hylton from the time he was admitted to the ICU until after he exhibited seizure-like activity." The ICU never provided bedside monitoring, nor assessments of his pain levels and other basic medical measures that could've been taken by a doctor. In particular, the suit points to an alleged failure to protect Hylton's airway as he was being administered powerful sedatives, CT Insider noted, which may have contributed to his death.

The lawsuit follows an investigation by the Connecticut Department of Public Health that concluded the hospital "failed to ensure quality medical care was provided" to Hylton, Law & Crime noted.

More on: America’s Largest Hospital System Ready to Start Replacing Radiologists With AI, Its CEO Says

The post Student Dies When Hospital Has No ICU Doctors, Calls One on Videochat Who Pronounces Him Dead Remotely, Lawsuit Claims appeared first on Futurism.

🔗 Source: futurism.com


🤖 MAROKO133 Note

This article is an automated summary of several trusted sources. We pick trending topics so you stay up to date without missing anything.


Author: timuna