cortesoft an hour ago
Give it a few months and it will be just another model they are selling, but the NEWER model is just too powerful for the public.
randallsquared an hour ago
The stated purpose of Glasswing is to give infra and security orgs the chance to close holes and improve their security. In that context, it seems odd to call for not providing them access with the justification being that they have security breaches sometimes.
Mythos may or may not itself be opened to the public at some point, but I would charitably expect that Anthropic plans that a future model at least as good as Mythos Preview will be, and the limited release for Mythos is intended to make that eventuality safer by having most of the existing holes patched.
wyc an hour ago
> A 16-year-old with no credentials and no capital could just do things. The world of bits offered the freedom to build without being drowned in arbitrary constraints, in a way that didn’t require assembling vast capital or prestige or connections, where your creativity and work could speak for itself, and you had agency.
This is now truer than it ever was.
tim-projects 41 minutes ago
Much better than hiding it away where it can't help anyone.
fwipsy an hour ago
So, opportunity for individuals comes from disruption. Creative destruction is good up to a point, but it results from advancing capabilities. Technological advances compound and accelerate exponentially. Eventually we reach the point where any malcontent can destroy the world by snapping their fingers. At some point we need to place restrictions on the capabilities accessible to individuals. We have reached that point with nuclear weapons, and I think it is sensible to believe that AI is reaching that point as well.
derektank an hour ago
> You can generate your own electricity with a solar panel (think local models), but most people would rather pay a utility bill. And the power company doesn’t decide, on the basis of pedigree, who is worthy of electricity. Intelligence should work similarly, where the capabilities you can access may scale with vetting and due process, but the presumption should be access. Add safety guardrails to restrict dangerous use; start by making them overly trigger-happy if you must, and calibrate over time. But the default should be to allow entry.
skybrian an hour ago
I'm wondering what other security-sensitive software that might become true of in the era of Mythos-or-better AIs.
There will still be open source projects that anyone could learn enough to contribute to, but maybe starting from scratch and writing your own becomes less feasible if you aren't attracting enough attention from people with access to the best AIs.
For example, Linux patches are going to get expert reviews, but maybe your homegrown OS won't?
pizzly an hour ago
I would like to see more countries capable of producing frontier models. At the moment we have two in the world but many countries are building their own national models and AI infrastructure and may join the race.
Having a multipolar world may actually result in more freedom in gaining access to frontier models.
hgoel an hour ago
It'd only take one company deciding to not worry about safety, to change the calculus back to "we have to release this to stay competitive".
atleastoptimal an hour ago
1. AI models are becoming better and better at causing massively disruptive effects, opening up larger and larger liabilities, especially as laws and regulations are being passed or proposed that would put responsibility for a mass disruption/hacking event on the company that serves the model that made it possible.
2. The relative advantage of serving an AI model for inference in exchange for money is waning compared to the advantage of using that model internally for purposes that accrue money/power/leverage for the AI company. Why serve a model at 30 dollars/million tokens when you've discovered you can use that model to run a simulated quant firm with a net profit of 300 dollars/million tokens? Why offer the model to companies so they can find zero-day exploits, when you can find them yourself and sell the discovery to companies that would pay millions to avoid having the exploit taken advantage of?
3. Why serve models so another wrapper company like Cursor can make billions off your tokens, and then try to train their own models as fast as possible on your outputs so they aren't dependent on you? The entire AI startup industry and something like 90% of YC batches depend on being able to serve frontier models at a profit, mediated through some wrapper. Why can't OpenAI/Anthropic, once their models are good enough to handle the ideation/organizational problem, become their own incubator for thousands of AI-run startups, running on models way better than the public has access to?
As a consequence, there is less and less incentive over time to offer models as an API to the public.
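The opportunity-cost argument in point 2 can be made concrete with a toy calculation. This is only a sketch using the illustrative numbers from the comment itself (30 vs. 300 dollars per million tokens), not real figures from any lab:

```python
# Toy opportunity-cost comparison for a fixed inference budget.
# Numbers are the hypothetical ones from the comment above, not real data.
API_REVENUE_PER_M_TOKENS = 30.0      # dollars earned selling inference via API
INTERNAL_VALUE_PER_M_TOKENS = 300.0  # dollars earned using the model internally

def better_use(api_revenue: float, internal_value: float) -> str:
    """Return which use of each million tokens of compute earns more."""
    return "serve API" if api_revenue >= internal_value else "use internally"

choice = better_use(API_REVENUE_PER_M_TOKENS, INTERNAL_VALUE_PER_M_TOKENS)
forgone = INTERNAL_VALUE_PER_M_TOKENS - API_REVENUE_PER_M_TOKENS
print(choice, forgone)
# With these numbers, every million tokens sold via the API forgoes
# 300 - 30 = 270 dollars of internal value.
```

Under those assumed numbers, every million tokens sold externally costs the lab 270 dollars of forgone internal value, which is the whole of the incentive argument.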
AIorNot 11 minutes ago
OMG this generation - we can't separate the outrage from reality anymore.
Meanwhile 3,000 people have died arbitrarily in the Iran war while we navel-gaze.
rdedev an hour ago
Are any AI labs claiming this?
operatingthetan an hour ago
For example, the people who Anthropic "trusts" with this "dangerous" model are a handful of fortune 500 companies? Seriously? Those are the people we trust?
We are going to have access to this within 6 months, and if we don't, someone else will offer an equivalent. Anthropic hasn't walked to the edge of the abyss only to be like "let the CEO's handle this!"
It is simply not the edge of the abyss.
gxs 30 minutes ago
Their intentions were good, they always are, but the minute you decide to nerf something powerful for someone, it means someone out there has access to the full-blown, unnerfed version.
Which means there are powerful people out there using AI in ways, or for activities, that you will never be allowed to anyway.
So yeah, this is just more of the same
simianwords an hour ago
What if this new model can start proving Millennium Prize problems and provide insights in other fields that were not possible before?
My intuition says that a model that is as good will also be equally well aligned -- but it is still highly risky to give it to the general public because all you need is one jailbreak from bad actors.
At that point I think society would change so dramatically that "access to general public" would be a non issue. Rather, time would be spent on making abundance happen - you might think of the political struggles, economics and new ventures.
It's a bit sad that democratised access is not provided because of negative-sum possibilities like cybersecurity.
shevy-java an hour ago
That dream was always a lie. But in the past, people's purchasing power was closer to parity. You only need to look at income versus housing costs in, say, Canada.
Realistically there should not exist any superrich, but this seems hard to change. That means a different promise of society needs to be offered. Other countries manage that. In the USA they have the orange oligarch who said a while ago that there is no money for health care because he has to invade countries and wage war. So much for the "no more wars" promise.
keybored an hour ago
The Internet was developed by the US state sector and handed off to the private sector in the 90’s. Then it worked as an open space until it didn’t any more. Predictably driven by corporate interests.
> In 1893, Frederick Jackson Turner argued that much that is distinctive about America was shaped by the existence of free land to the West where anyone could start over, and that this condition infused America with its characteristic liberty, egalitarianism, rejection of feudalistic hierarchy, self-sufficiency, and ambition.
A more asinine comparison could not have been picked.