On LLM Spirits and 'Completely New Skills'
July 3, 2025

Karpathy’s recent talk isn’t so fresh anymore and has faded into oblivion – perfect time to mention it.
What catches my attention is the main theme – software 3.0 and “human spirits.”
What’s surprising is that the industry still splits into two big camps: “woohoo, vibe coding!” and “oh, this is all nonsense.” The latter, whether out of skepticism or plain stubbornness, have apparently fallen hopelessly behind the new pace and new frequencies of software development.
As for me and the project I spend most of my time on: a third of the repository is literally in English. That includes prompts for individual agents, sets of reference entries used to build the vector-semantic spaces in routers, and structured-output definitions that are “much more than just a prompt.”
Taken together it gets quite complex once you no longer have a single agent with three functions, but an entire pipeline from a specific domain.
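To make the router part concrete, here is a minimal sketch of a vector-semantic router: a set of reference entries per agent, an incoming request, and routing by similarity. Everything here is a hypothetical stand-in — the agent names are invented, and the bag-of-words “embedding” is a toy substitute for whatever real embedding model such a project would use.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticRouter:
    """Routes a request to the agent whose reference entries match best."""

    def __init__(self, routes: dict[str, list[str]]):
        self.routes = {name: [embed(e) for e in entries]
                       for name, entries in routes.items()}

    def route(self, request: str) -> str:
        q = embed(request)
        best, best_score = "", -1.0
        for name, entries in self.routes.items():
            score = max(cosine(q, e) for e in entries)
            if score > best_score:
                best, best_score = name, score
        return best

router = SemanticRouter({
    "billing_agent": ["refund the invoice", "charge the customer card"],
    "support_agent": ["reset my password", "the app crashes on login"],
})
print(router.route("please refund my last invoice"))  # billing_agent
```

The point of the sketch is only to show where the “English third” of the repository lives: the reference entries are prose, and adding an agent means writing more of it, not more code.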
My point is that the demands on systems thinking in AI system development are genuinely growing, probably near-exponentially, with the number of agents / pipeline nodes / requirements and, of course, with the complexity and uniqueness of the domain and the peculiarities of its “language.”
Back to Karpathy – he’s a superb hype man! It’s always pleasant to get a lot of attention from other people, especially when you also coin “new” words/concepts and those people pick them up and start repeating them… Zero percent judgment!
Andrej’s conceptual pipeline contains genuinely new and interesting things – Software 2.0 and 3.0 – and some slightly strange ones, for example the emerging “context engineering” contrasted with “prompt engineering.”
Overall – the idea is certainly good! But I think that behind this context engineering lies exactly that big and scary requirement for systems thinking skills (let’s not even mention quality).
The trouble is that somehow this isn’t emphasized. There’s only a weak attempt to shift attention from “prompt” to “context,” with no explanation at all of how to do it – everyone just writes that “this is very important; you can’t limit yourself to good prompts, you need to take the entire context into account”!
But what context are we even talking about? The model’s window? Context in the sense of the domain? It’s unclear.
But even this is good – the problems of conceptualizing this “new” skill and teaching it will surface! Or maybe they won’t, because the problem is baked into selling it as “The new skill” in the first place!
I was thinking about this recently, not very deeply, but I tried to represent an IT engineer’s skills as a classification tree, even though that may not be the most correct way to represent them.
In such a model, the closer a skill sits to the leaves, the closer it is to physical reality: close in the sense of being well understood and maximally applicable. This is what we usually list in resumes as the stack we’ve worked with – Django, PostgreSQL, OpenAPI and so on.
Stepping up a level, we’re still close to reality and still list these on resumes – REST, protobuf, MVC (it lands here thanks to its moderate simplicity) and other “quite tangible” things, albeit tangible “through something else.”
The higher we climb the tree, the more we enter “abstract matters.”
At some point we reach the very top, the meta-type “Skill”; somewhere below it the tree divides into subclasses – “technical skill” and “social skill.”
And next to them there should be “thinking skill.” Key word: should! For a long time we haven’t singled it out as a requirement or a competency, confirming it only indirectly through something else.
You might think that, say, leetcode necessarily lives somewhere under thinking skills, but no – it’s a specific class, “skill of solving leetcode,” and it’s rather a subclass of “technical skill.”
I don’t even try to model the complex relationships by which thinking skills and technical skills strengthen each other, or what connects them at all – that looks nearly impossible given how the neural networks in our heads work. But the very fact of “interconnection” and “strengthening” seems obvious; you’d just have to dive into it properly, and that isn’t our goal now. I give these examples purely as a hint that there are many skills, and they do have a hierarchy.
Even if we make a strange assumption and send “skill of solving leetcode” to thinking skills – this doesn’t matter anymore!
What matters is that there are many other thinking skills that, in our brave new world, turn out to be more and more important. Modeling skill, for example. Thinking itself, when directed at solving a specific task, is modeling.
All this to say that Software 3.0 is already about this. The only question is how successful those for whom this is a “completely new skill” will be, and how to learn the needed skills at all.
Thinking as “the new skill” is very powerful.
In this light, the “LLMs will replace programmers” topic no longer seems even interesting or provocative – rather completely outdated and obvious. The day before yesterday, Claude “one-shotted” the bootstrap of a training lab for my mentee: 5 microservices, two databases and a message broker, with Docker config and working code. Sure, not business code, no complex logic – but working and returning metrics! Very little is left to finish: another 1-2 prompts.
Not so long ago, even for such a bootstrap project, how many hours would an average developer spend? Write in the comments.
Even if the person-month is a mythical metric, development with AI tools is already shortening it like crazy – that much is clear as day.
As for “human spirits” – well, this might seem like a very cute and funny definition, but…
We’ve long been used to emulating human communication with chat clients, and probably long used to the quality of that communication. Somewhere deep inside we understand how it differs from communication with a real person, right?
It’s a strange feeling: you seemingly realize you’re talking to an LLM – a human can’t write/speak that quickly and coherently. But slow typing and other quirks are quite easy to implement, friends – the Turing Test is beaten.
Have you heard about the publication of that result? You haven’t, because we’re no longer interested – LLM-based systems, even in narrow, specific cases, have already far surpassed individual human capabilities.
Those very ones – applied, close to reality.
If LLM agents are “human spirits,” then those engineers who “survive” as a profession – who are they? Wizards, Necromancers… Or Neuromancers?
Whatever we call them, I think they’ll need a high level of mastery of skills both higher and lower in the tree. With a big emphasis on the “upper” ones.