stephenlf 3 hours ago
Didn’t realize this was science fiction.
mohsen1 3 hours ago
I think builders are going to be fine. The type of programmer people would put up with just because they could go into their cave for a few days and come out with a bug fix that nobody else on the team could figure out is going to have a hard time.
Interestingly, AI coding is really good at that sort of thing and less good at fully grasping user requirements or big-picture systems. Basically, the things we had to sit in a lot of meetings for.
ossa-ma 4 hours ago
This is the time for bold predictions; you've just told us we're in a crucible moment, yet you end the article passively...
singpolyma3 2 hours ago
If you are part of the requirements process, if you find problems to solve and solve them, if you push back on requirements when they are unreasonable, etc., then you still have a career, and I don't see anything coming for you soon.
bopbopbop7 2 hours ago
So far it's just AI doom posting, hype bloggers who haven't shipped anything, anecdotes without evidence, an increase in CVEs, an increase in outages, and degraded software quality.
Xiol 3 hours ago
Tokens are free now?
tolerance an hour ago
The Pollyannas have a point but overstate it. The naysayers should be more cautious, though.
pvtmert 2 hours ago
It hasn't. Large enterprises are currently footing the bill, essentially subsidizing AI for now.
I constantly see comparisons between the $200/month Claude Code Max subscription and the six-figure ($100k) salary of an engineer.
The comparison is, first of all, not apples-to-apples. Let's convert the subscription to a yearly amount first: 12 x $200 = $2,400. That's still more than a 40x difference compared to the human engineer.
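As a back-of-the-envelope sketch of the comparison above (a minimal illustration; the $200/month and $100k figures are the comment's own assumptions, not real pricing data):

```python
# Illustrative cost comparison from the comment above.
# Both figures are the commenter's assumptions, not verified pricing.
CC_MONTHLY_USD = 200           # assumed Claude Code Max subscription price
ENGINEER_SALARY_USD = 100_000  # assumed "six-figure" engineer salary

cc_yearly = 12 * CC_MONTHLY_USD          # yearly subscription cost
ratio = ENGINEER_SALARY_USD / cc_yearly  # how many subscriptions one salary buys

print(f"Yearly subscription: ${cc_yearly}")        # $2400
print(f"Salary-to-subscription ratio: {ratio:.1f}x")  # ~41.7x
```

Even at face value the gap is roughly 40x, which is exactly why the comment argues the raw price comparison leaves out everything else you pay a human for.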
Although when you hire the human engineer, you also pay for experience and responsibility, and you somewhat transfer liability (especially when regulations come into play).
Moreover, what a human engineer creates, unless it was stolen IP or plagiarized, is owned by you as the employer/company. Meanwhile, whatever the AI generates is all but guaranteed to be somewhat plagiarized in the first place; the ownership of that IP is questionable.
This is like a layman complaining when the electrician comes to their house, identifies the breaker problem, replaces the breaker (which costs $5), and charges $100 for a 10-minute job. That completely underestimates the skill, experience, and safety involved. The wrong breaker may cause constant circuit trips, leading to malfunctions across a multitude of electronic devices in the household. Or worse, it may cause a fire. When you think you paid $100 for 10 minutes, you in fact paid for years of education, exams, certification, and experience, all for your future safety.
The same principle applies to AI. It seems to have accumulated more and more experience, yet it fails at the first prompt injection. It seems to be getting better at benchmarks because the benchmarks are now part of its dataset. These are hidden costs that 99% of people don't talk about, and all of these hidden costs are liabilities.
You may save an engineer's yearly salary today, at the cost of losing ten times more to civil lawsuits tomorrow. (Depending on the field/business, of course.)
If your business was never critical enough to attract a civil lawsuit in the first place, then you probably didn't need to hire an engineer yourself. You could have hired an agency/contractor to do it much more cheaply, while still sharing liability...
ausbah 2 hours ago
Like, OK, the cost for anyone to generate almost-always-working code has dropped to zero, but how does a layperson verify that the code satisfies the business logic? Asking the same system to generate tests for it just seems to move the goalposts.
Or what happens when the next few years of junior engineers (or whatever replaces programming as a field), who've been spoon-fed coding through LLMs, need to actually decipher LLM output and pinpoint something the machine can't get right after hours of prompting? A whole generation blindly following a tool they can't reliably control?
But maybe I am just coping because it feels like the ladder is being pulled up on the rest of my already short career, but some humility m
falloutx 3 hours ago
Zero, if you don't consider Anthropic's API pricing, the prompter's hourly rate, and the verification bottleneck.
PostOnce 3 hours ago
However, let's suppose the alternate case:
If AI works as claimed, people in their tens of millions will be out of work.
New jobs won't be created quickly enough to keep them occupied (and fed).
Billionaires own the media and the social media, and will use them to attempt to prevent change (i.e. apocalyptic taxation).
What, then, will those people do? Well, they say "the devil makes work for idle hands", and I'm curious what that's going to look like.
monero-xmr 3 hours ago
Furthermore, if we were truly in the utopia the author describes, why do all the LLM companies employ (and pay top dollar for) so many engineers? Why does OpenAI pay for Slack when they could just vibe-code a chat app in an hour?
The challenge of building a real, valuable software business (or any business) is so much harder than prompting an LLM with “build me a successful software business”.
zb3 2 hours ago
Then go and throw your $0 at fixing some real bugs on GitHub. Really, if AI works so well, why are all those issues still open?
Look, there are almost 2K open issues here: https://github.com/emscripten-core/emscripten/issues
If AI really worked the way non-technical people think it does, why doesn't Google just throw its AI tooling at fixing them all?
kgraves 3 hours ago
Playing software maintainer for the many vibe-coded web apps built without proper software architecture or practices only makes the swing back toward senior engineers being in demand more of a possibility.
Good luck to those building 600K-LOC vibe-coded web apps with 40+ APIs stitched together.
jongjong 3 hours ago
Whatever you produce, nobody is going to use it unless you produce it under the banner of Big Tech. There are no real opportunities for real founders.
The problem is spreading beyond software. The other day, I found out there is a billion-dollar company whose main product is a sponge... Yes, a sponge, for cleaning. We're fast moving towards a communist-like centrally planned economy, but with a facade of meritocracy, where there is only one product of each kind and no room for competition.
This feeling of doom that software engineers started to feel after LLMs is how I was feeling 5 years earlier. People are confused because they think the problem is that AI is automating them away, but the reality is that our problems arise from a deeper systemic issue at the core of our economic system. AI is just a convenient cover story; it's not the reason we are doomed. Once people accept that, we can start to work towards a political solution like UBI or better...
We've reached the conclusion of Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." Our economic system has been so heavily monitored and controlled in every aspect that it has begun to fail in every aspect. Somebody has managed to profit from every blind spot, every exploit exposed by the measurement and control apparatus. Everything is getting qualitatively worse in every way that is not explicitly measured, and the measurement apparatus itself is becoming increasingly unreliable... Most of the problems we're experiencing are what people experienced during the fall of communism, except filter bubbles are making us completely blind to the experience of other people.
I think if we don't address the root causes, things will get worse for everyone. People's problems will get bigger and become highly personalized, pervasive, inexplicable, unrelatable. Everyone will waste their energy trying to resolve their own experience of the symptoms, while the root causes remain.
IhateAI_2 3 hours ago
Software isn't going to become more economically valuable; it's going to be used to replace economic inputs of labor with units of compute.
It's entirely intended to take humans out of the equation, or to devalue human labor, and it always has been. Don't be a fool.