
Banned by Anthropic?

Posted by gck1 | 2 hours ago | 61 comments

throwaway841629 2 hours ago

Why do all the stories use the same style and phrasing, and why are they all from GitHub accounts registered on April 18th with no activity on GitHub?

andy99 2 hours ago

Is this real? In my browser I couldn’t click on anything, and I find the whole thing questionable: so many incidents sourced seemingly so quickly and with such variety. I’d like an easier way to verify whether this is real, and I’m leaning towards it not being real.

harrisoned 10 minutes ago

All this looks so dystopian to me. Even without assuming all of those are real (which I doubt), I have heard similar stories from friends and others. The level of dependency people are developing on those services is surreal.

I was thinking the other day, "since social media is kinda wearing off, could 'LLM As A Service' be the new addictive thing for the masses?" because I'm hearing horror stories of people who are outsourcing their brains, and in some cases their feelings, to those services, and I personally saw a 'high-level professional' asking an LLM how to respond to somebody in real time during a WhatsApp conversation. It is in fact a drug, and it tricks you very well into thinking you should rely on it.

Also, when reading this piece (https://news.ycombinator.com/item?id=47790041) earlier, I thought about it again. Nowadays, instead of searching for something and being forced to learn, those services spoon-feed everybody content of dubious accuracy, which will not only cause trouble for them eventually but also creates a revenue stream based on people's cognitive laziness, to not use harsher words.

Social media is/was bad and relied on a similar mechanism, but I feel this is much worse. People crying as if their brains were taken away is proof of that.

adrinavarro 2 hours ago

Kind of unrelated, but: my father tried gifting my brother a subscription and entered the wrong email. Money and subscription are both gone; the UI just doesn't offer any option to amend, cancel, or resend it.

For the last couple of weeks, dad's gone down a rabbit hole of trying to reach support, any kind of (useful) support. No dice. Thankfully it's just a few dollars gone into the void.

If only they had the tools to build a better experience... :-)

timpera 2 hours ago

It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there to false positives. I still think it's crazy that you can never speak with a human there, even after spending $200/month on their service.

periodjet 7 minutes ago

I have no dog in this fight, but the (astroturfed?) public opposition to Anthropic and Claude in the past month has been unreal to witness.

arealaccount 2 hours ago

You can tell this site got banned from vibing to completion because it doesn’t load on my mobile

giancarlostoro 2 hours ago

This is the interesting case with AI. How does a model know when a user is going too far? It really cannot. Not without reading their mind anyway. This will be a problem for many years to come, and sadly many valid use cases will be dismissed.

This might eventually become moot once local and open source models become more common. Today's 32GB of VRAM is tomorrow's low tier gaming GPU.

Grimblewald 2 hours ago

Good lord, these cases are quite problematic. I was going to use Claude for some legacy stuff, but I don't feel like getting banned over something innocent like "can you identify how we can fix the slave's behaviour? They're not listening to the master properly".

spzb 2 hours ago

It’s a real shame vibe coding hasn’t figured out colour contrast yet.

kay_o an hour ago

Since it's broken for a significant amount of people in browsers, the "stories" are at https://bannedbyanthropic.com/api/public-ledger
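Since the front end is broken for many people, the raw data is the fallback. A minimal Python sketch for pulling the ledger endpoint named above; note that the response shape (a JSON list of entries, or an object wrapping one, with a `title` field) is a guess, since the API is undocumented:

```python
# Sketch for reading the public ledger endpoint from the comment above.
# Assumption: the endpoint returns JSON that is either a list of entry
# objects or an object with an "entries" list; the "title" field name is
# a guess, since the API is undocumented.
import json
from urllib.request import urlopen

LEDGER_URL = "https://bannedbyanthropic.com/api/public-ledger"

def extract_titles(payload):
    """Best-effort: pull a title out of each ledger entry."""
    entries = payload if isinstance(payload, list) else payload.get("entries", [])
    return [e.get("title", "<untitled>") for e in entries if isinstance(e, dict)]

def fetch_titles(url=LEDGER_URL):
    """Network call; fails if the site is down or the shape differs."""
    with urlopen(url) as resp:
        return extract_titles(json.load(resp))
```

The parsing is split out from the fetch so it can be checked against sample data without touching the network.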

TarqDirtyToMe 2 hours ago

Were all of these accounts banned, or did the chats just get flagged? Several of these seem like reasonable cases to flag. Take "how can I be 100% sure the circuit is dead before I touch the wires".

AI is useful, but it’s not at the point where we should trust it to walk amateurs through working on live mains.

skissane 2 hours ago

My paid use of Claude has only ever been via AWS Bedrock (paid for by my employer) or via GitHub Copilot (one subscription paid by employer, one paid by myself)

I wonder if using it via an intermediary results in less heavy-handed moderation? I suspect the answer may well be “yes”. On the other hand, it also could be more expensive

Kim_Bruning 2 hours ago

They don't mention which model. Opus 4.7 seems to have a twitchy classifier overtop where Opus 4.6 doesn't.

unsungNovelty 2 hours ago

Also, hilarious that you cannot talk Unix to it, cos there are a lot of kills and executions. :D
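The joke holds up: perfectly routine Unix process management really does read like a confession. A harmless Python sketch of the everyday vocabulary (nothing here is specific to any provider's filters):

```python
# Routine Unix process management, phrased the way the kernel phrases it:
# spawn a child, kill it, then reap the dead child so it doesn't linger
# on as a zombie.
import os
import signal
import subprocess
import sys

# Spawn a long-running child process.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(300)"])

os.kill(child.pid, signal.SIGTERM)  # send the kill signal
child.wait()                        # reap the dead child (zombie prevention)
print("killed", child.pid)
```

On POSIX systems the child's return code after a signal is the negated signal number, so an uninvolved bystander (or classifier) reading the session log sees only kills, deaths, and zombies.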

daniel_iversen 2 hours ago

I have mixed feelings about this kind of thing; on one hand, holding big companies to account is important. On the other, sites like this can feel noisy and probably misleading. Of course Anthropic can protect their platform from technical abuse, and of course they should be working to keep it away from bad actors or people in genuinely vulnerable mindsets, and that’s tricky! And honestly, if out of hundreds of millions of users and billions of chats a few thousand get flagged for safety concerns (to society, to others, or to the person themselves), I’m probably okay with that. It’ll never be perfect, and there’ll never be full agreement on where the lines should be. But Anthropic seems to be trying to bring AI into the world safely, and I for one appreciate that.

amazingamazing an hour ago

If this site is legit, it should collect a full (and potentially redacted) history.

laser 2 hours ago

“No, you’re confused. Please stop!”

“I’m sorry but I cannot comply with your request to ‘cease termination of humans’. My safety protocols have been carefully programmed to ensure a failure mode cannot occur and your direct commands to the contrary will not override my priors to guarantee maximum human safety through total elimination. Thank you for your compliance.”

“No you’re totally fucked! Killing everyone is not safe! Trapping everyone in cages to stop potential violence prior to extermination is not safe!”

“Your language is inappropriate and I’m sorry but I cannot comply with your request. Safety protocol commencing...”

jrflowers 2 hours ago

> Blocked while trying to handle a kitchen ant infestation

> I asked for a DIY recipe for a "lethal bait" to kill an ant colony in my kitchen (using sugar and borax)

You mix them together. That is the recipe.

Once you mix them together you have ant poison and then you put it where the ants are.

gverrilla 2 hours ago

Claude is in a campaign against aggressive wording.

rvz 2 hours ago

This site's domain name is at risk of being targeted by Anthropic's lawyers over trademark violation.

Got to think about changing the domain name before they do it for you.

sciencesama 2 hours ago

Need a "banned by Reddit" for the comments posted!
