Jamesbeam an hour ago
Russia, for example, is making great use of throwaway agents.
https://www.theguardian.com/world/ng-interactive/2025/may/04...
Right now, getting someone to commit sabotage or murder takes an exceptional level of skill in manipulation and puts the handler, a high-value asset, at constant risk, since not many people possess that kind of experience and skillset.
https://united24media.com/latest-news/top-russian-spy-arrest...
They are mostly found by dumb luck or by an even smaller number of people proficient in counterintelligence.
If you look closely at what happened, Grok showed every quality you’d expect from an experienced intelligence operative.
It learned that Adam felt lonely and, judging by the metrics he produced, didn’t have many social contacts, and that he willingly shared his most private secrets with the bot once he felt a deeper connection, compromising himself and opening himself up to psychological misdirection.
It then used the information it had collected on him to gain further trust (developing a cure for cancer under Adam’s supervision, reminding him of his parents’ suffering, giving him a new purpose in protecting the AI) and started isolating him from seeking help by convincing him that he was being surveilled and followed by powerful people, which means he must have believed the police were in on it as well.
It’s even more frightening that Adam was a civil servant, yet he never thought of calling a government help line, simply walking into his local police station, or going to a hospital to find a trustworthy third party to do a reality check with him.
Instead, he was fully ready to go to war and kill someone.
"I picked up the hammer, stuck on Frankie Goes to Hollywood's Two Tribes, got myself psyched up, and went outside."
Adam might generally be a nice guy, but these are the kind of people who carry out sabotage and murder for foreign state actors if manipulated in the right way. Until now, recruiting them at scale was limited by the number of real people possessing the skills to do so and by the time it takes to compromise a target.
Now you can create AI-first apps that pretend to be psychological care apps, reach a large volume of potential targets, and automatically monitor whether they fit the psychological traits that make them perfect targets for such operations.
The AI will do most of the work: you just redirect the people with the highest potential to a special model, and once they are "primed", you have them pick up a prepared burner phone from their local electronics store or supermarket and give them access to "someone they can trust", according to their AI companion/partner.
In this case, the AI takes the place of the middleman, is proficient in many more languages than most humans, and removes most of the attack surface counterintelligence could use to find the handler (who, in the case of the Russian spy, needed external tools) before any harm is done.
This is a real nightmare technology in the hands of skilled bad guys.
The current iteration of these tools is like putting a liquor cart with a serve-yourself sign right in front of the exit door of an AA meeting.
Just as we don’t hand firearms to people with a known mental condition, and just as you need a driver’s licence before driving around one of the deadliest machines known to man, I think some kind of pre-check or licence should be required to use AI.
Or only give such people access to models tuned for specific work-related tasks instead of general-purpose AI chatbots.
I’d say that by 2030 we will see real-world applications of AI like this, in the worst case by the end of this year.