ofjcihen an hour ago
This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I'm still signing up though.
gavinray an hour ago
Just FYI for others.
Havoc an hour ago
>partner with a limited set of organizations for more cyber-permissive models.
I get where they're going with this, but it's still rather hilarious that they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement.
ACCount37 an hour ago
ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.
And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.
What I'm most afraid of is that Anthropic will snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.
zb3 an hour ago
Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring that independent countries can't protect themselves against the US attacking their critical infrastructure.
Fortunately, this plan will backfire - the models' capabilities are exaggerated and these "safeguards" don't reliably work.
mmooss an hour ago
Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.
> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.
KYC isn't democratic and doesn't prevent arbitrary favoritism; it's the opposite: it's used to control people, favor friends, and exclude enemies.