MAROKO133 Exclusive AI: Professor in Epstein Files Makes Extremely Awkward Announcement at Start of Class

📌 MAROKO133 Breaking AI: Professor in Epstein Files Makes Extremely Awkward Announcement at Start of Class

It’s been a rocky week for public intellectual Larry Summers.

On Tuesday, the 70-year-old Harvard economics professor delivered an awkward announcement to his class of college students, acknowledging his embarrassing ties to the deceased sex trafficker and billionaire financier Jeffrey Epstein, which were brought to light by the recent release of another batch of Epstein’s emails.

“Some of you will have seen my statement of regret expressing my shame with respect to what I did in communication with Mr. Epstein, and that I’ve said that I’m going to step back from public activities, but for a time,” Summers intoned gravely.

“But I think it’s very important to fulfill my teaching obligations. And so, with your permission,” he continued, not waiting for anyone’s objections, “I’m gonna — we’re gonna go forward and, uh, talk about the material, uh, in the class.”

Summers’ preeminence as an economic authority has made him a mainstay of politics for decades: he served as Bill Clinton’s Treasury secretary from 1999 to 2001 and as an economic advisor under Barack Obama from 2009 to 2011. He was also president of Harvard University for five years, stepping down in 2006 after being criticized for making sexist remarks about women.

His connection to Epstein was once again brought under the microscope after a House committee released a trove of Epstein’s emails and documents last week, in which it became clear that Summers was far closer to the deceased sex criminal than he had previously let on. On Monday, Summers released a statement expressing how “deeply ashamed” he was for communicating with Epstein, the spirit of which he repeated in his address to his class.

In a 2018 email exchange, Summers, who is married with three children, asked Epstein for romantic advice related to a woman he said he was a mentor for — and who was decidedly not his wife — while lamenting that he wouldn’t be seen as anything more than that.

“Think for now I’m going nowhere with her except economics mentor,” Summers wrote to Epstein. 

Epstein, referring to himself as Summers’ “wing man,” assured him that the woman was “doomed to be with you.”

These exchanges, like many others, took place well after Epstein pleaded guilty in 2008 to sexually abusing teenage girls as young as 14, and continued up until his arrest in 2019 for child sex trafficking.

This isn’t the first time Summers’ ties to Epstein have been exposed. In 2023, the Wall Street Journal revealed that the pair met more than a dozen times between 2013 and 2016, during which Summers sought Epstein’s advice on raising $1 million for his wife’s poetry project. Epstein later chipped in $110,000, via a nonprofit.

Not much came of those revelations, but this time, it doesn’t seem Summers will be let off the hook quite so easily. On Wednesday, he announced he was stepping down from his position on OpenAI’s board of directors, giving up influence at what is, for better or worse, one of the most important companies in the world right now.

It also appears he won’t be allowed to finish teaching his class, as he told his students he would. Instead, according to his spokesperson, “co-teachers will complete the remaining three class sessions of the courses he has been teaching with them this semester, and he is not scheduled to teach next semester.”


🔗 Source: futurism.com


📌 MAROKO133 Breaking AI: Ai2’s Olmo 3 family challenges Qwen and Llama with efficiency

The Allen Institute for AI (Ai2) hopes its latest release will capitalize on growing demand for customized models and on enterprises seeking more transparency from AI models.

Ai2 has made the latest addition to its Olmo family of large language models available to organizations, continuing its focus on openness and customization.

Olmo 3 has a longer context window, more reasoning traces and better coding ability than its predecessor. Like earlier Olmo releases, this version is open-sourced under the Apache 2.0 license, and enterprises get complete transparency into, and control over, the training data and checkpointing.

Ai2 will release three versions of Olmo 3:

  • Olmo 3-Think, in both 7B and 32B, the flagship reasoning models for advanced research

  • Olmo 3-Base, also in both sizes, suited to programming, comprehension, math and long-context reasoning; Ai2 said this version is “ideal for continued pre-training or fine-tuning”

  • Olmo 3-Instruct, in 7B, optimized for instruction following, multi-turn dialogue and tool use

The company said Olmo 3-Think is the “first-ever fully open 32B thinking model that generates explicit reasoning-chain-style content.” Olmo 3-Think also has a long context window of 65,000 tokens, well suited to longer-running agentic projects and reasoning over longer documents.

Noah Smith, Ai2’s senior director of NLP research, told VentureBeat in an interview that many of its customers, from regulated enterprises to research institutions, want to use models that give them assurance about what went into the training. 

“The releases from our friends in the tech world are very cool and super exciting, but there are a lot of people for whom data privacy, control over what goes into the model, how the models train, and other constraints on how the model can be used are front of mind,” said Smith.

Developers can access the models on Hugging Face and the Ai2 Playground. 
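
For developers who want to kick the tires, here is a minimal sketch of loading one of the models through Hugging Face’s transformers library. The repository ID below is an assumption for illustration; check Ai2’s Hugging Face organization for the exact Olmo 3 model names.

```python
# Minimal sketch: load an Olmo 3 model from Hugging Face and generate text.
# NOTE: the repo ID is hypothetical; look up the exact name on Ai2's
# Hugging Face page (huggingface.co/allenai) before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Olmo-3-7B-Instruct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize why open training data matters for enterprises."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```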

Transparency and customization

Smith said the company believes any organization using models like Olmo 3 should have control over them and be able to mold them in the way that works best for them.

“We don't believe in one-size-fits-all solutions,” Smith said. “It's a known thing in the world of machine learning that if you try and build a model that solves all the problems, it ends up not being really the best model for any one problem. There aren't formal proofs of that, but it's a thing that old timers like me have kind of observed.”

He added that models with the ability to specialize “are maybe not as flashy as getting high scores on math exams” but offer more flexibility for enterprises.

Olmo 3 allows enterprises to essentially retrain the model by adding to the data mix it learns from. The idea is that businesses can bring in their proprietary sources to guide the model in answering specific company queries. To help enterprises during this process, Ai2 added checkpoints from every major training phase.
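
In practice, that workflow resembles standard continued pretraining or fine-tuning from a released checkpoint. Below is a hedged sketch using the Hugging Face Trainer; the model repo ID and the data file are placeholders rather than Ai2’s actual names.

```python
# Hedged sketch: continued fine-tuning from an Olmo checkpoint on a
# proprietary text corpus. Repo ID and file path are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "allenai/Olmo-3-7B"  # hypothetical base-model repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure the collator can pad

# Proprietary corpus: one document per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olmo3-custom", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False configures the collator for causal (next-token) LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```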

Demand for model customization has grown as enterprises that cannot build their own LLMs want to create company-specific or industry-focused models. Startups like Arcee have begun offering enterprise-focused, customizable small models.

Models like Olmo 3, Smith said, also give enterprises more confidence in the technology. Since Olmo 3 provides the training data, Smith said enterprises can trust that the model did not ingest anything it shouldn’t have.

Ai2 has long positioned itself as committed to greater transparency, even launching a tool called OlmoTrace in April that can track a model’s output directly back to the original training data. The company releases open-source models and posts its code to repositories like GitHub for anyone to use.

Competitors like Google and OpenAI have faced criticism from developers for hiding raw reasoning tokens and offering only summarized reasoning; developers complained that without that transparency they are left “debugging blind.”

Ai2 pretrained Olmo 3 on Dolma 3, a six-trillion-token open-source dataset encompassing web data, scientific literature and code. Smith said the team optimized Olmo 3 for code, where Olmo 2 had focused on math.

How it stacks up

Ai2 claims that the Olmo 3 family of models represents a significant leap for truly open-source models, at least among open-source LLMs developed outside China. The base Olmo 3 model was trained “with roughly 2.5x greater compute efficiency as measured by GPU-hours per token,” meaning it consumed less energy and cost less during pre-training.

The company said the Olmo 3 models outperformed other open models, such as Marin from Stanford, LLM360’s K2, and Apertus, though Ai2 did not provide figures for the benchmark testing.

“Of note, Olmo 3-Think (32B) is the strongest fully open reasoning model, narrowing the gap to the best open-weight models of similar scale, such as the Qwen 3-32B-Thinking series of models across our suite of reasoning benchmarks, all while being trained on 6x fewer tokens,” Ai2 said in a press release.

The company added that Olmo 3-Instruct performed better than Qwen 2.5, Gemma 3 and Llama 3.1.

 

🔗 Source: venturebeat.com


🤖 MAROKO133 Note

This article is an automated summary drawn from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random theme awaits!

Author: timuna