AI is not a coworker, it's an exoskeleton

Posted by benbeingbin | 3 hours ago | 92 comments

random3 a few seconds ago

I guess we'll see a lot of analogies and have to get used to it, although most will be off.

AI can be an exoskeleton. It can be a co-worker and it can also replace you and your whole team.

The "Office Space"-question is what are you particularly within an organization and concretely when you'll become the bottleneck, preventing your "exoskeleton" for efficiently doing its job independently.

No other question is relevant, for any practical purpose, to your employer or to your well-being as a person who presumably needs to earn a living based on their utility.

hintymad 34 minutes ago

In the latest interview with Claude Code's author (https://podcasts.apple.com/us/podcast/lennys-podcast-product...), Boris said that writing code is a solved problem. This brings me to a hypothetical question: if engineers stop contributing to open source, will AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?

TrianguloY 7 minutes ago

I like this analogy, and in fact I have used it for a totally different reason: why I don't like AI.

Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ...probably not.

I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just...not fun.

Someone going to the gym isn't just trying to lift more or run faster; they're there to improve and to enjoy themselves. Not using AI for coding works the same way for me.

datakazkn 13 minutes ago

The exoskeleton framing resonates, especially for repetitive data work. Parts where AI consistently delivers: pattern recognition, format normalization, first-draft generation. Parts where human judgment is still irreplaceable: knowing when the data is wrong, deciding what 'correct' even means in context, and knowing when to stop iterating.

The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.

oxag3n an hour ago

> We're thinking about AI wrong.

And this write up is not an exception.

Why even bother thinking about AI when the Anthropic and OpenAI CEOs openly tell us what they want? Quoting a recent Dwarkesh interview: "Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum."

So save yourself the thinking and listen to the stated intent: replace 90% of SWEs in the near future (6-12 months, according to Amodei).

finnjohnsen2 an hour ago

I like this. It's an accurate description of the state of AI at this very moment, for me. The LLM is (just) a tool that makes me "amplified" for coding and certain tasks.

I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or claim to, to pump stock prices) -- and they like to tell everyone about this future too. I just don't see it.

m_ke an hour ago

It's the new underpaid employee that you're training to replace you.

People need to understand that we have the technology to train models to do anything you can do on a computer; the only thing that's missing is the data.

If you can record a human doing anything on a computer, we'll soon have a way to automate it.

protocolture 13 minutes ago

Petition to ban, or at least limit in some way, "AI is not X, but Y" articles.

ottah 14 minutes ago

Make centaurs, not unicorns. The human is almost always going to be the strongest element in the loop, and the most efficient. Augmenting human skill will always outperform present-day SOTA AI systems (assuming a competent human).

delichon 2 hours ago

If we find an AI that is truly operating as an independent agent in the economy without a human responsible for it, we should kill it. I wonder if I'll live long enough to see an AI terminator profession emerge. We could call them blade runners.

pavlov an hour ago

> “The AI handles the scale. The human interprets the meaning.”

Claude is that you? Why haven’t you called me?

yifanl an hour ago

AI is not an exoskeleton, it's a pretzel: It only tastes good if you douse it in lye.

acjohnson55 41 minutes ago

> Autonomous agents fail because they don't have the context that humans carry around implicitly.

Yet.

This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.

bGl2YW5j an hour ago

I like the analogy and will ponder it more. But it didn't take long before the article started spruiking Kasava's amazing solution to the problem they just presented.

xlerb an hour ago

Humans don’t have an internal notion of “fact” or “truth.” They generate statistically plausible text.

Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.

The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.

givemeethekeys an hour ago

Closer to a really capable intern. Lots of potential for good and bad; needs to be watched closely.

dwheeler an hour ago

I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.

cranberryturkey 16 minutes ago

The exoskeleton metaphor is closer than most analogies but it still undersells one thing: exoskeletons augment existing capability along the same axis. AI augments along orthogonal axes too.

Running 17 products as an indie maker, I've found AI is less "do the same thing faster" and more "attempt things you'd never justify the time for." I now write throwaway prototypes to test ideas that would have died as shower thoughts. The bottleneck moved from "can I build this" to "should I build this" — and that's a judgment call AI makes worse, not better.

The real risk of the exoskeleton framing is that it implies AI makes you better at what you already do. In practice it makes you worse at deciding what to do, because the cost of starting is near zero but the cost of maintaining and shipping is unchanged.

hintymad an hour ago

Or software engineers are not coachmen, with AI as the diesel engine to their horses. Instead, software engineers are minstrels -- they disappear if all they do is move knowledge from one place to another.

ge96 an hour ago

It's funny developing AI stuff, e.g. RAG tools, while being against AI at the same time -- not drinking the Kool-Aid, I mean.

But it's fun, I say "Henceforth you shall be known as Jaundice" and it's like "Alright my lord, I am now referred to as Jaundice"

xnx an hour ago

An electric bicycle for the mind.

mikkupikku an hour ago

Exoskeletons sound cool but somebody please put an LLM into a spider tank.

functionmouse an hour ago

blogger who fancies themselves an ai vibe code guru with 12 arms and a 3rd eye yet can't make a homepage that's not totally broken

How typical!

blibble an hour ago

an exoskeleton made of cheese

lukev an hour ago

Frankly I'm tired of metaphor-based attempts to explain LLMs.

Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.

These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.

An understanding without metaphors isn't easy -- it requires a grasp of math, computer science, linguistics, and philosophy.

But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least to try.

sibeliuss 42 minutes ago

This is utterly boring AI writing. Go, please go away...

filipeisho an hour ago

From the title alone, I can tell you did not try OpenClaw. AI employees are here.