
Something Big Is (Not) Happening

Posted by DiscourseFan | 3 hours ago | 14 comments

red75prime 12 minutes ago

The article feels, I don't know… maybe like someone calmly sitting in a rocking chair staring at the sea. Then the camera turns, and there's an erupting volcano in the background.

> If it was a life or death decision, would you trust the model? Judgement, yes, but decision? No, they are not capable of making a decision, at least important ones.

A self-driving car with a vision-language-action model inside buzzes by.

> It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.

A large multimodal model listens to your request and produces a picture.

> They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!

GPT‑5.3‑Codex helps debug its own training.

stephc_int13 3 minutes ago

The long tail is fatter and longer than many people expect.

AlphaZero was a special/unusual case, I would say an outlier.

FSD is still not ready. People have watched it working for ten years, slowly climbing the asymptote, yet it still hasn't reached human-level driving, and it may take a while.

I use AI models for coding every day; I am not a Luddite. But I don't feel the AGI, not at all. What I am seeing is a nice tool that is seriously over-hyped.

dvt 6 minutes ago

I think people are just getting lost in the sauce. Forget all the "singularity" or "AGI" nonsense. LLMs are genuinely useful automation machines. They're fantastic for going from semi-structured data to structured data. They're great for going from text blob to decision points. They're great for going from vague instructions to step-by-step inference.
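The "semi-structured data to structured data" use is the pattern behind most practical LLM automation: prompt for JSON, then validate whatever comes back before trusting it. A minimal sketch of that pattern (the `call_llm` stub here stands in for whatever model API you actually use; it is not a real client, and the prompt format is just an illustration):

```python
import json

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model API call; a real implementation
    # would send `prompt` to an LLM endpoint and return its reply text.
    return '{"name": "Ada Lovelace", "year": 1843}'

def extract_record(text: str) -> dict:
    """Turn a free-text blob into a validated structured record."""
    prompt = (
        "Extract the person's name and the year mentioned in the text "
        'below. Reply with JSON only, e.g. {"name": "...", "year": 0}.\n\n'
        + text
    )
    raw = call_llm(prompt)
    record = json.loads(raw)  # fail loudly on malformed model output
    if not {"name", "year"} <= record.keys():
        raise ValueError("model omitted required fields")
    return record

print(extract_record("Ada Lovelace published her notes in 1843."))
```

The point of the validation step is exactly the "useful tool, not an oracle" framing above: the model does the fuzzy extraction, and ordinary code decides whether the result is usable.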

No one (at least no serious person) is saying ChatGPT is Immanuel Kant or Ernest Hemingway. The fact that we still have sherpas doesn't make trains any less useful or interesting.

mellosouls 30 minutes ago

This is a reference to the unaccountably viral article from a couple of days ago, discussed here:

Something big is happening (97 points, 77 comments)

https://news.ycombinator.com/item?id=46973011

AreShoesFeet000 17 minutes ago

The mere idea that you could derive new correspondence to an emerging reality by rearranging fragments of the past is just insane to me.

irdc 12 minutes ago

Thus making humanity an ever-receding area of AI-incompetence.

twism 16 minutes ago

       \ | /
      --(_) --
    .'  .   '.
   /  .   .   \
   |    .     |
    \  .   . /
     '.  . .'
       'v'

mchusma 30 minutes ago

These responses to AI seem to come from people who have not experienced what AI can do, and are therefore skeptical.

But I have personally, and repeatedly, used AI instead of humans across domains.

AI displacement isn’t a prediction. It’s here.