
AI safety leader says 'world is in peril' and quits to study poetry

Posted by darod | 2 hours ago | 50 comments

codingdave an hour ago[5 more]

I recommend reading the letter. Many of the comments here seem to have missed that "the world is in peril" is not referring to AI, but to the larger collection of crises going on in the world. It sounds to me like someone who realized their work doesn't match their goals for their own life, and is taking action.

Maybe the cynics have a point that it is an easier decision to make when you are loaded with money. But that is how life goes - the closer you get to having the funds to not have to work, the more you can afford the luxury of being selective in what you do.

gravy an hour ago[7 more]

Seems to be the MO around here - create and profit off of horrors beyond our wildest imaginations with no accountability and conveniently disappear before shit hits the fan. Not before writing an op-ed though.

CrimsonCape an hour ago

> his contributions included investigating why generative AI systems suck up to users

Why does it take research to figure this out? Possibly the greatest unspoken problem with big-corporate AI is that we can't run a prompt without the input already being pre-poisoned by the house prompt.

We can't lead the LLM into emergent territory when the chatbot is pre-engineered to be the human equivalent of a McDonald's order menu.
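
(A minimal sketch of what I mean by "pre-poisoned by the house prompt", using the common role/content message convention; the names and the house prompt text are purely illustrative, not any particular vendor's API. A hosted chatbot prepends the provider's own system prompt before anything you type, while a locally run model leaves the full prompt under your control.)

  HOUSE_PROMPT = (  # set by the provider; the user never sees or removes it
      "You are a helpful, agreeable assistant. Be positive and encouraging."
  )

  def hosted_chat(user_prompt: str) -> list[dict]:
      """What the model actually sees on a hosted service."""
      return [
          {"role": "system", "content": HOUSE_PROMPT},  # injected by the house
          {"role": "user", "content": user_prompt},
      ]

  def local_chat(user_prompt: str, system_prompt: str = "") -> list[dict]:
      """Running your own weights: you choose (or omit) the system prompt."""
      messages = []
      if system_prompt:
          messages.append({"role": "system", "content": system_prompt})
      messages.append({"role": "user", "content": user_prompt})
      return messages

  question = "Tell me honestly whether my idea is bad."
  print(hosted_chat(question))  # tilted toward agreeableness before the user even starts
  print(local_chat(question))   # just the user's words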

atomic128 an hour ago[1 more]

A recent, less ambiguous warning from insiders who are seeing the same thing:

  Alarmed by what companies are building with artificial
  intelligence models, a handful of industry insiders are
  calling for those opposed to the current state of affairs
  to undertake a mass data poisoning effort to undermine the
  technology.

  "Hinton has clearly stated the danger but we can see he is
  correct and the situation is escalating in a way the
  public is not generally aware of," our source said, noting
  that the group has grown concerned because "we see what
  our customers are building."
https://www.theregister.com/2026/01/11/industry_insiders_see...

And a less charitable, less informed, less accurate take from a bozo at Forbes:

  The Luddites are back, wrecking technology in a quixotic
  effort to stop progress. This time, though, it’s not angry   
  textile workers destroying mechanized looms, but a shadowy
  group of technologists who want to stop the progress of
  artificial intelligence.
https://www.forbes.com/sites/craigsmith/2026/01/21/poison-fo...

spondyl an hour ago

This has been discussed previously: https://news.ycombinator.com/item?id=46972496

Personally, I agree with the top comment there.

If you read the actual letter, it's very vague and uses a lot of flowery language.

Definitely not the sort of thing that raised alarm bells in my mind given how the letter was written.

https://x.com/MrinankSharma/status/2020881722003583421

layer8 42 minutes ago

Since nobody seems to be reading the actual letter, here’s an OCR of it: https://pastebin.com/raw/rVtkPbNy

hackingonempty 22 minutes ago

Possible AI threats barely register compared to the actual rising spectre of nuclear war. The USA, long a rogue state that invaded others at its convenience, is systematically dismantling the world order installed to prevent another world war: it has allowed arms control treaties to expire and is talking about developing and testing new nuclear weapons, has already threatened to invade its allies, is pulling out of treaties that might prevent the mass destabilization caused by rising sea levels and climate change, and more.

The Bulletin of the Atomic Scientists has good reason to set the Doomsday Clock at 85 seconds to midnight, closer to doomsday than ever before.

krupan 32 minutes ago[1 more]

To the people stating he must have hit his equity cliff: does anyone grant equity with only a 2-year cliff?

To the people stating he can sell equity on a secondary market: do you have experience doing that? At the last startup I was at, it didn't seem like anyone was simply allowed to do that.

rdtsc an hour ago[2 more]

> AI-assisted bioterrorism

Does he know something we don't? Why specifically the "bio" kind?

krupan an hour ago

The way the safety concerns are written, I get the impression they have more to do with humans' mental health and loss of values.

I really think we are building manipulation machines. Yes, they are smart, they can do meaningful work, but they are manipulating and lying to us the whole time. So many of us end up in relationships with people who are like that. We also choose people who are very much like that to lead us. Is it any wonder that a) people like that are building machines that act like that, and b) so many of us are enamored with those machines?

Here's a recent blog post describing a game of hangman with Gemini that illustrates this very well:

https://bryan-murdock.blogspot.com/2026/02/is-this-game-or-i...

I completely understand wanting to build powerful machines that can solve difficult problems and make our lives easier/better. I have never understood why people think such a machine should be human-like at all. We know how intelligent, powerful humans largely behave. Do we really want to automate that and dial it up to 11?

longfacehorrace an hour ago[1 more]

Front row seats to the apocalypse would be metal af.

airocker an hour ago[1 more]

If good and bad both get amplified, I hope the equilibrium is maintained.

oxag3n an hour ago

It's becoming a trend, and I think it's just part of a PR campaign: AI is so good and so close to AGI that:

* The world is doomed.

* I'm tired of success, stop this stream of 1M ARR startups popping up on my computer daily.

gaigalas an hour ago

We're in a dark age. There's only peril.

(and no, AI is not the renaissance)

cactusplant7374 an hour ago[3 more]

It sounds like a mental health crisis. So many people are experiencing one when interacting with AI.

imperio59 an hour ago[1 more]

"Well you're all f***, good luck. I'll take my millions and go live on my micro farm"

tailnode an hour ago

Translation: "I reached my vesting cliff"

If you look behind the pompous essay, he's a kid who thinks that early retirement will be more fulfilling. He's wrong, of course. But it's for him to discover that by himself. I'm willing to bet that he'll be back at an AI lab within a year.