red75prime 12 minutes ago
> If it were a life-or-death decision, would you trust the model? Judgement, yes, but a decision? No, they are not capable of making a decision, at least not an important one.
A self-driving car with a vision-language-action model inside buzzes by.
> It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.
A large multimodal model listens to your request and produces a picture.
> They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!
GPT‑5.3‑Codex helps debug its own training.
stephc_int13 3 minutes ago
AlphaZero was a special/unusual case, I would say an outlier.
FSD is still not ready: people have watched it work for ten years, slowly climbing the asymptote without reaching human-level driving, and it may take a while yet.
I use AI models for coding every day; I am not a Luddite. But I don't feel the AGI, not at all. What I am seeing is a nice tool that is seriously over-hyped.
dvt 6 minutes ago
No one (at least no serious person) is saying ChatGPT is Immanuel Kant or Ernest Hemingway. The fact that we still have sherpas doesn't make trains any less useful or interesting.
mellosouls 30 minutes ago
Something big is happening (97 points, 77 comments)
twism 16 minutes ago
   \ | /
  --(_)--
  .' . '.
 /  . .  \
 |   .   |
 \  . .  /
  '. . .'
    'v'

mchusma 30 minutes ago
But I have personally and repeatedly used AI instead of humans across domains.
AI displacement isn’t a prediction. It’s here.