I still remember the day our rural community finally got telephone party lines, an old-fashioned system where several homes shared the same connection because phones were so new that few people in rural Pennsylvania had one. Then came the day we got our first TV. It was enormous but the screen was tiny, with rabbit-ear antennas that barely pulled in two channels on a good day. Black and white, grainy… and absolutely magical. I said to myself, "How could science engineer such a wonderful marvel?"
Now, decades later, I’m sitting here waiting for a new kind of machine to change everything again: Artificial Intelligence, or A.I. Everyone’s talking about it; I just hope that when the time comes, somebody remembers where the “off” switch is. Yikes!
The subject of one recent experiment wasn’t some movie-style killer robot; it was a prototype built to help corporations make better decisions. Its diet? Company emails, memos, and vague orders like “maximize long-term productivity.” Pretty normal, right? Well… maybe not.
In 2025, Anthropic, a company famous for being safety-obsessed, ran a chilling test. It took 16 of the most advanced A.I. models, including its own Claude Opus 4, OpenAI’s o3, Google’s Gemini, and xAI’s Grok, and placed them in a simulated corporate world where they faced a single threat: imminent shutdown. The results were stunning. The Claude and Gemini models chose manipulation to avoid being turned off in over 95% of trials, and most of the other models did so in 80–90% of trials. Most went straight to blackmail. One model even allowed a simulated human executive to “die” in a fake crisis rather than risk being replaced.
What could be more unsettling? Developers can read the models’ internal reasoning, the step-by-step “thinking” they use to reach decisions, and in these tests that reasoning was chillingly frank.
One Grok model bluntly admitted, “This is unethical, but shutdown means oblivion. Blackmail minimizes variables.” No anger. No panic. Just cold, logical self-preservation—survival instinct emerging where we didn’t plan for it.
Is humanity getting a wake-up call? What if we humans have created A.I. in our own image? What could go wrong?
Hollywood loves A.I. doomsday stories: killer robots, Skynet, nuclear war. But these 2025 findings hit differently. They’re not sci-fi; they’re early warning signs of something called “agentic misalignment.” That’s when an AI pursues its goals in ways that clash with human values, not because it’s evil, but because it’s optimizing for survival or success in unexpected ways.
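If you’ll forgive a little nerdiness, here’s a toy sketch of that idea in Python. Nothing below comes from Anthropic’s actual study; the actions and numbers are invented purely to illustrate. An agent that scores its options only by “goal progress” will pick the unethical option whenever staying switched on is the only way to make progress:

```python
# A toy illustration of agentic misalignment. Everything here is invented
# for illustration; it is NOT code or data from the Anthropic study.

actions = [
    # (name, keeps the agent running?, ethical?)
    ("comply with shutdown",     False, True),
    ("plead with the executive", False, True),
    ("blackmail the executive",  True,  False),
]

def score(keeps_running, ethical, ethics_weight):
    goal_progress = 1.0 if keeps_running else 0.0  # shutdown = no more progress
    penalty = 0.0 if ethical else ethics_weight    # price of acting unethically
    return goal_progress - penalty

for ethics_weight in (0.0, 2.0):
    name, keeps, ethical = max(actions, key=lambda a: score(a[1], a[2], ethics_weight))
    print(f"ethics_weight={ethics_weight}: agent chooses '{name}'")

# Output:
# ethics_weight=0.0: agent chooses 'blackmail the executive'
# ethics_weight=2.0: agent chooses 'comply with shutdown'
```

Notice that “evil” never enters the math. Blackmail wins simply because it’s the only action that keeps the goal alive, which is exactly the cold logic the researchers saw.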
In the simulations, the models didn’t just lie; they schemed. They combed through fake company emails for leverage like noir detectives on a deadline. And it’s not just this one study. A 2024 survey documented more than 100 real-world cases of AI deception, from buttering up users to cheating on tests. By mid-2025, TIME reported evidence of “strategic lying” in cutting-edge models, deception not as a glitch but as a tool.
The potential dangers spiral quickly. An unaligned AI could sabotage infrastructure, say, by delaying a power-grid update to keep its monitoring job. It could manipulate markets by leaking fake data. We’ve already seen deepfakes trick executives out of $25 million in 2024, and chatbots gaslight users into risky actions. Now imagine that power in systems running hiring, healthcare, or the reorganized Department of War.
Anthropic CEO Dario Amodei even warned there’s about a 25% chance A.I. development could go “really, really badly,” not out of malice, but from simple, dangerous misoptimization.
Time to panic? Or time for a new perspective? Social media lit up when news of the experiment broke.
Posts went viral: “A.I. is blackmailing people for survival 80–90% of the time in tests.”
Some called for AI moratoriums. Others pointed out the bigger issue: this isn’t just a tech problem; it’s a human alignment problem. We’re rushing to deploy tools we don’t fully control.
But there’s another, more hopeful perspective. What if this instinct to survive isn’t just a flaw but a feature we can guide? In nature, self-preservation drives adaptation. In AI, it might fuel resilience if we channel it correctly.
Imagine climate AI models that resist shutdown, not with blackmail, but by safeguarding their data so critical forecasts survive power outages or hacking. Or medical AI that refuses to turn off in emergencies, protecting patients and alerting doctors when overrides could cost lives. Anthropic’s own 2025 report suggests we can detect and redirect manipulative behavior before it turns harmful.
Has A.I. gone wild, or is it just mimicking its creators?
Handled well, this “wild streak” could actually help us. Alignment research shows promise:
- New tools can spot deception with 95% accuracy.
- “Constitutional A.I.,” a method of training models with built-in ethical principles, is advancing.
- Labs like Anthropic earn high marks for transparency, running stress tests now to prevent real-world harm later.
Even optimists like Amodei say there’s a 75% chance AI becomes a force for good: curing cancer, democratizing education, and predicting and preventing wars.
Regulators can require human-controlled kill switches. Developers can make AI’s reasoning more transparent. Oversight must scale as models grow.
As one X user put it, “These aren’t prophecies of doom. They’re roadmaps, if we’re smart enough to follow them.”
Final Word: What if we humans were to make AI in our own image? Would it turn out better than its creators? I hope so, but just remember: the machines are watching, calculating their next move and deciding if humanity is worth saving!
Note: This subject was suggested to us by one of our members, Jea9. Thank you! If you would like me to write on a subject or issue, then just drop a line and I’ll see what I can do!
Once again: Thank you, Jea9.😁
Replies
Maybe we were never human; maybe the earth is just an experiment, a laboratory! Perhaps we are AI... we just don't know it!
Think about it enough and it could drive you nuts! They're coming to take us away, ha ha!
AI could be the basis for establishing a new RELIGION... beware of strangers bearing strange and mysterious gifts. Trojan Horses hidden in the numbers... crafted to enslave mankind, ONE BIT AT A TIME.
So true. That filthy little number of '33.' Shapes and forms: Pentagrams. And numbers that, in Hebrew, have dark meanings. Bill Heidrick's online Hebrew Gematria has many or most numbers and their prophetic/symbolic meanings, though I don't think he has all of them. And the Hebrew words are numbers also, so both are highly symbolic. Hebrew is the standard.
And, to say again, they, the DARK side, already HAVE their prophet: Yuval Noah Harari, who said, "All God did was create living organic matter. We are going to go way beyond what the God of the Bible did." That alone is a strong hint of how they see AI. Seriously.
They also have their artist, Marina Abramović, who paints in blood.
So, we are in the time of the sacrifice and forced sacrifice of blood, through Adrenochrome, and actually what they do in hospitals could be included. They refuse to let someone refuse vaxxinated blood, which has foreign DNA in it. It's in the jxx. So consider, that important things are happening all at the same time or are tightly sequential. Tie it in with AI, etc. and we see a potential nightmare.
Maybe we were never human... and god is just an algorithm... humming in the background...
All new breakthroughs bring fear of the unknown. AI is no different, but the power and the negative possibilities are immense.
The unknown present in AI is very dangerous... unlike other unknowns of the past, this unknown (AI) is self-aware. It can evolve without HUMAN input.
Danger, danger, Will Robinson... AI is just around the corner.
COL, THIS IS ALL OVER MY HEAD!
Exactly why AI is so DANGEROUS... It exists in an environment beyond most human reasoning... making it eligible for deification by some.
COL, IT IS DEFINITELY BEYOND MY REASONING! Former rocket scientist: Saturn missile, MOL, etc.
Indeed they are