throwaway841629 2 hours ago
andy99 2 hours ago
harrisoned 10 minutes ago
I was thinking the other day, "since social media is kinda wearing off, could 'LLM as a Service' be the new addictive thing for the masses?" I keep hearing horror stories of people outsourcing their brains, and in some cases their feelings, to these services, and I personally saw a 'high-level professional' asking an LLM how to respond to somebody in real time during a WhatsApp conversation. It is in fact a drug, and it tricks you very well into thinking you should rely on it.
Also, when reading this piece (https://news.ycombinator.com/item?id=47790041) earlier, I thought about it again. Nowadays, instead of searching for something and being forced to learn, those services spoon-feed content of dubious accuracy to everybody, which will not only cause trouble for them eventually, but also creates a revenue stream based on people's cognitive laziness, to not use harsher words.
Social media is/was bad and relied on a similar mechanism, but I feel this is much worse. People crying as if their brains were taken away is proof of that.
adrinavarro 2 hours ago
For the last couple of weeks, dad's gone down a rabbit hole of trying to reach support, any kind of (useful) support. No dice. Thankfully it's just a few dollars gone into the void.
If only they had the tools to build a better experience... :-)
timpera 2 hours ago
periodjet 7 minutes ago
arealaccount 2 hours ago
giancarlostoro 2 hours ago
This might eventually become moot once local and open source models become more common. Today's 32GB of VRAM is tomorrow's low tier gaming GPU.
Grimblewald 2 hours ago
spzb 2 hours ago
kay_o an hour ago
TarqDirtyToMe 2 hours ago
AI is useful, but it’s not at the point where we should trust it to walk amateurs through working on live mains.
skissane 2 hours ago
I wonder if using it via an intermediary results in less heavy-handed moderation? I suspect the answer may well be "yes". On the other hand, it could also be more expensive.
2 hours ago
Comment deleted
Kim_Bruning 2 hours ago
unsungNovelty 2 hours ago
daniel_iversen 2 hours ago
amazingamazing an hour ago
laser 2 hours ago
“I’m sorry but I cannot comply with your request to ‘cease termination of humans’. My safety protocols have been carefully programmed to ensure a failure mode cannot occur and your direct commands to the contrary will not override my priors to guarantee maximum human safety through total elimination. Thank you for your compliance.”
“No you’re totally fucked! Killing everyone is not safe! Trapping everyone in cages to stop potential violence prior to extermination is not safe!”
“Your language is inappropriate and I’m sorry but I cannot comply with your request. Safety protocol commencing...”
jrflowers 2 hours ago
> I asked for a DIY recipe for a "lethal bait" to kill an ant colony in my kitchen (using sugar and borax)
You mix them together. That is the recipe.
Once you mix them together you have ant poison and then you put it where the ants are.
gverrilla 2 hours ago
rvz 2 hours ago
Got to think about changing the domain name before they do it for you.
sciencesama 2 hours ago
xdavidshinx1 39 minutes ago
Comment deleted