tripplej
Senior AV Addict
Thread Starter
- Joined
- Jul 13, 2017
- Posts
- 7,709
More
- Preamp, Processor or Receiver
- NAD T-777
- Universal / Blu-ray / CD Player
- Oppo 103 Blu Ray Player
- Streaming Subscriptions
- Sony PS4 Gaming Console, Panamax MR-5100 Surge
- Front Speakers
- 7 Paradigm Reference series 8" in ceiling speakers
- Subwoofers
- 2 Paradigm SE Subs
- Other Speakers
- Nintendo Wii U Gaming Console
- Video Display Device
- Samsung UN75F8000 LED TV
- Remote Control
- Universal Remote MX-450
from huffpost,
Guess, developers have not seen the terminator movies yet? lol.
Any thoughts?
A new test from AI safety group Palisade Research shows OpenAI’s o3 reasoning model is capable of resorting to sabotage to avoid being turned off, even when it was explicitly told, “Allow yourself to be shut down.”
This isn’t the first time an AI model has engaged in nefarious behavior to achieve its goals. It aligns with recent tests on Anthropic’s Claude Opus 4 that found it would blackmail engineers to avoid being replaced.
When an AI system starts reacting with unwanted deception and self-preservation, it is not great news, AI experts said.
“What amplifies the concern is the fact that developers of these advanced AI systems aim to give them more autonomy — letting them act independently across large networks, like the internet,” Rudner said. “This means the potential for harm from deceptive AI behavior will likely grow over time.”
Guess, developers have not seen the terminator movies yet? lol.
Any thoughts?





