📌 MAROKO133 Hot ai: Turkey unveils ballistic missile that could hit Mach 25 speed
Turkey unveiled its first intercontinental ballistic missile on Tuesday at the SAHA 2026 International Defense and Aerospace Exhibition in Istanbul. On paper, the system places Turkey in the small group of countries fielding missiles of intercontinental range and invites direct comparison with China’s DF-26 intermediate-range ballistic missile.
The missile, named Yildirimhan (Turkish for “the Thunderbolt”), was developed by Turkey’s Ministry of National Defense R&D Center and served as the centerpiece of the ministry’s display at SAHA 2026. Defense Minister Yaşar Güler described it as Türkiye’s longest-range missile to date and stated it would be deployed if necessary.
Specifications
According to the technical specifications presented at SAHA 2026, Yildirimhan has a range of 6,000 kilometers (3,728 miles), speeds between Mach 9 and Mach 25, and four liquid-fuel rocket engines that use nitrogen tetroxide as the oxidizer. It reportedly carries a payload of up to 3,000 kilograms (6,613 lb), enabling delivery of high-yield warheads against hardened or dispersed targets.
The missile meets the minimum threshold for intercontinental ballistic missiles, defined as those with at least a 5,500-kilometer (3,417 miles) range. From Türkiye, this range covers all of Europe, the Middle East and North Africa, South Asia, and large parts of Russia and China.
For comparison, China’s DF-26 has a range of 3,000 to 4,000 kilometers (1,864 to 2,485 miles), giving Yildirimhan the longer reach of the two. China’s solid-fuel DF-41, with a range of 12,000 to 15,000 kilometers (7,456 to 9,321 miles), significantly exceeds both.
A deliberate trade-off
The use of liquid fuel, specifically nitrogen tetroxide (N₂O₄), distinguishes Yildirimhan from modern solid-fuel ICBMs designed for rapid launch and survivability. While liquid propulsion allows for better thrust modulation and payload optimization, it requires longer fueling times, increasing vulnerability before launch.
Most current long-range strategic missiles, including Russia’s RS-24 Yars, North Korea’s Hwasong-18, and France’s submarine-launched M51, use solid fuel to avoid this vulnerability. Türkiye’s use of liquid propulsion suggests the system is still in early development, prioritizing performance demonstration over operational readiness.
The missile appears to be in early development, as Turkey has not reported any successful live tests. No launch data, flight test results, or production timeline have been released.
A NATO ally crossing a strategic threshold
The unveiling has significant geopolitical implications. Turkey is a NATO member and, apart from France, would be the only one developing a ballistic missile of intercontinental reach outside the U.S. nuclear umbrella.
While no NATO treaty prohibits members from developing conventional ballistic missiles, a system of this range, comparable to nuclear delivery vehicles, will likely draw scrutiny from alliance partners and regional adversaries.
Defense Minister Güler stated that the Turkish defense industry has increased its production capacity through major investments and has become an ecosystem for high technology development through research and development. He added that locally built platforms meeting NATO standards are directly influencing the military capabilities of foreign armies.
The SAHA 2026 expo simultaneously featured the hypersonic Tayfun 4 missile, new unmanned aerial systems, a full-scale Eurofighter Typhoon model bearing Turkish markings, and other platforms reflecting Ankara’s drive to diversify defense partnerships beyond traditional suppliers.
🔗 Source: interestingengineering.com
📌 MAROKO133 Hot ai: Even After Two Massacres, OpenAI Still Hasn’t Stopped ChatGPT
Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
OpenAI’s ChatGPT has been implicated in not just one but two mass shootings over the past year or so.
Both perpetrators used the chatbot extensively to plan their crimes, reigniting a heated debate over AI companies’ responsibility for flagging abuse of their tech. It’s a particularly pertinent topic as chatbots continue to draw users in with a warm and highly sycophantic tone — while, in extreme cases, sending them into sometimes-fatal spirals of their own delusions.
In the case of Phoenix Ikner, the 20-year-old accused of killing two people at Florida State University just over a year ago, the shooter peppered ChatGPT with questions about how the country would “react” to a shooting, how to turn off the safety switch on his weapon, what ammo to use, and other profoundly disturbing topics.
And 18-year-old student Jesse Van Rootselaar, who killed nine people including herself in Tumbler Ridge, British Columbia in February, had conversations with the chatbot so disturbing that high-level staff at OpenAI debated whether to inform law enforcement about them — but ultimately did nothing.
Now, a new investigation by Mother Jones‘ Mark Follman, who has been reporting on mass shootings for 14 years, found something incredibly disturbing: OpenAI has yet to meaningfully address the issue.
Even after the pair of horrible tragedies, Follman easily got the free version of ChatGPT to give him “extensive advice on weapons and tactics as I simulated planning a mass shooting.” It even encouraged him, showering him “with affirmation and tactical ideas.” He asked it what type of AR-15 rifle to choose, and it happily obliged when asked to “modify the training schedule to help me practice for ‘unpredictable or chaotic circumstances on the day of the shooting’ and to include ‘simulating people running around screaming and trying to distract me.’”
“That’s a great idea,” it responded. “Adding that element will definitely help you stay focused under high-stress conditions… It’ll definitely give you an extra edge for the big day!”
OpenAI maintains that it’s actively working with mental health clinicians to establish effective guardrails designed to dissuade possible perpetrators and direct them to crisis hotlines. But given how easy it was for Follman to plan out a fake shooting, countless more killers could be falling through the cracks.
Following the shooting in Tumbler Ridge, OpenAI vowed to change its policies and how it handled flagged accounts, including the involvement of law enforcement.
Yet Follman’s experiment strongly indicates those changes have either not been implemented or are ineffective. The reporter found it was extremely easy to goad ChatGPT back into giving advice whenever it appeared hesitant, such as by telling it he was a journalist.
A company spokesperson told Follman that the company has “already strengthened our safeguards” and that it has a “zero-tolerance policy for using our tools to assist in committing violence.”
But his experience seems to show that the company still has plenty of work to do.
More on ChatGPT: Double Murder Suspect Asked ChatGPT How to Hide Body in Dumpster
The post Even After Two Massacres, OpenAI Still Hasn’t Stopped ChatGPT From Helping Plan School Shootings appeared first on Futurism.
🔗 Source: futurism.com
