Classic Jevons Paradox: when something gets cheaper, the market for it grows. The unit cost shrinks, but the number of units bought grows by more than the shrinkage.
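Toy arithmetic for that effect, with made-up numbers (the 80% cost drop and 10x demand growth are purely illustrative):

```python
# Jevons effect in miniature: unit cost falls, but units bought grow faster,
# so total spending on the good rises.
old_cost, old_units = 100.0, 1_000    # before: $100/unit, 1,000 units
new_cost, new_units = 20.0, 10_000    # after: cost -80%, demand +10x

old_spend = old_cost * old_units      # 100,000
new_spend = new_cost * new_units      # 200,000
assert new_spend > old_spend          # the market grew despite cheaper units
```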
Of course that is true. The nuance here is that software isn't just getting cheaper; the activity of building it is changing. Instead of writing lines of code, you are writing requirements. That shifts who can do the job. The customer might be able to do it themselves, which removes a market rather than growing one. I am not saying the market will collapse, just be careful applying a blunt theory to such a profound technological shift, one that isn't just lowering cost but changing the entire process.
You say that like someone who has been coding for so long you have forgotten what it's like to not know how to code. The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem. AI is amazing at producing answers you previously would have looked up on Stack Overflow, which is very useful. It can often type faster than I can, which is also useful. However, if we were going to see the exponential improvements toward AGI that AI boosters talk about, we would already have seen the start of them.
When LLMs first showed up publicly it was a huge leap forward, and people assumed improvement would continue at the rate they had seen, but it hasn't.
Exactly. The customer doesn't know what's possible, but increasingly neither do we unless we're staying current at frontier speed.
AI can type faster and answer Stack Overflow questions. But understanding what's newly possible, what competitors just shipped, what research just dropped... that requires continuous monitoring across arXiv, HN, Reddit, Discord, Twitter.
The gap isn't coding ability anymore. It's information asymmetry. Teams with better intelligence infrastructure will outpace teams with better coding skills.
That's the shift people are missing.
Hey, welcome to HN. I see that you have a few LLM-generated comments going here. Please don't do that, as this is mostly a place for humans to interact. Thank you.
No, I'm pretty sure the models are still improving, or the harnesses are, and I don't think that distinction is all that important for users. Where were coding agents in 2025? In 2024? I'm pretty amazed by the improvements in the last few months.
I'm both amazed by the improvements, and also think they are fundamentally incremental at this point.
But I'm happy about this. I'm not that interested in or optimistic about AGI, but having increasingly great tools to do useful work with computers is incredible!
My only concern is that it won't be sustainable, and it's only as great as it is right now because the cost to end users is being heavily subsidized by investment.
>The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem.
How do you know that? For tech products, most of the users are also technically literate and can easily use Claude Code or whatever tool we are using. They can easily tell CC specifically what they need. Unless you create social media apps or banking apps, the customers are pretty tech savvy.
One example is programmers who code physics simulations that run on massive datasets. You need a decent amount of software engineering skill to maintain software like that, but the programmer maybe has a BS in Physics and doesn't really know the nuances of the actual algorithm being implemented.
With AI, you probably don't need 95% of the programmers who do that job anyway. Physicists who know the algorithm much better can use AI to implement the majority of the system, and maybe you have a software engineer orchestrate the program in the cloud or on a supercomputer, but probably not even that.
Okay, the idea I was trying to get across before I rambled was that many times the customer knows what they want very well and much better than the software engineer.
Yes, I made the same point. Customers are not as dumb as our PMs and execs think they are. They know their needs better than we do, unless it's about social media and banks.
I agree. People forget that people know how to use computers and have good intuition about what they are capable of. It's the programming task that many people can't do. It's unlocking users to solve their own problems again.
Have you ever paid for software? I have, many times, for things I could build myself.
Building it yourself as a business means you need to staff people, taking them away from other work. You need to maintain it.
Run even conservative numbers for it and you'll see it's pretty damn expensive if humans need to be involved. It's not the norm for that to be good ROI.
No matter how good these tools get, they can't read your mind. It takes real work to get something production ready and polished out of them
You are missing the point. Who said anything about turning what they make into a “business”. Software you maintain merely for yourself has no such overhead.
There are also technical requirements, which in practice you will need to write for applications. Technical requirements can be written by people who can't program, but the work is very close to programming. You reach a level of specification where you're designing schemas, formatting specs, high-level algorithms, and APIs. Programmers can be, and are, good at this, and the non-programmers doing it would make good programmers.
At my company, we call them technical business analysts. Their director was a developer for 10 years and then skyrocketed through the ranks in that department.
I think it's insane that people believe anyone can just "code" an app with AI and replace actual paid or established open-source software, especially if they are not a programmer or don't know how to think like one. It might seem obvious if you work in tech, but most people don't even know what an HTTP server is or what Python is, let alone understand best practices or any kind of high-level thinking about applications and code. And if you're willing to spend the time learning all that, you might as well learn programming too.
AI usage in coding will not stop, of course, but normal people vibe coding production-ready apps is a pipe dream with many issues independent of how good the AI/tools are.
I think this comment will not age well. I understand where you are coming from, but you are missing the idea that infrastructure will come along to support vibe coding. You are assuming vibe coding as it stands today will not improve. It will get to the point where the vibe coder needs to know less and less about the underlying construction of software.
The way I would approach writing specs and requirements as code would be to write a set of unit tests against a set of abstract classes, with the implementation under test passed as an argument to those tests. Then let someone else, maybe an AI, write the implementation as a set of concrete classes, and verify that those unit tests pass.
I'm not sure how well that would work in practice, nor why such an approach is not used more often than it is. But yes, the point is that some humans would still have to write those tests as code to pass to the AI to implement. So we would still need human coders to write those unit tests/specs. Only humans can tell AI what humans want it to do.
The problem is that a sufficient black-box description of a system is far more elaborate than a white-box description of the system, or even a rigorous description of all acceptable white boxes (a proof). Unit tests contain enough information to distinguish an almost-correct system from a more correct one, but far more information is needed to even arrive at the almost-correct system. And even knowing which traits likely separate an almost-correct system from the correct one requires a lot of white-box knowledge.
Unit tests are the correct tool for that last step, because going from an almost-correct system to a correct one is hard: it implies driving the failure rate to zero, and the lower you go, the harder it is to reduce the failure rate any further. But when your constraint is not an infinitesimally small failure rate but reaching expressiveness fast, then a naive implementation or a mathematical model is a much denser representation of the information, and thus easier to generate. In practical terms, it is much easier to encode the slightly incorrect preconception you have in your mind than to enumerate all the cases in which a statistically generated system might deviate from that preconception.
> write a set of unit-tests against a set of abstract classes used as arguments of such unit-tests.
An exhaustive set of use cases to confirm vibe-coded, AI-generated apps would be an app in itself. Experienced developers know which subsets of tests are critical, avoiding much of that work.
I agree (?) that AI vibe coding can be a good way to produce a prototype for stakeholders, to see if the AI output is actually something they want.
The problem I see is how to evolve such a prototype to more correct specs, or changed specs in the future, because AI output is non-deterministic -- and "vibes" are ambiguous.
Giving AI more specs or modified specs means it will have to re-interpret them, and since its output is non-deterministic, it can re-interpret viby specs differently and thus diverge in a new direction.
Using unit tests as (at least part of) the spec would be a way to keep the specs stable and unambiguous. If AI is re-interpreting viby, ambiguous specs, then the specs are unstable, which means the final output has a hard time converging to a stable state.
I've asked this before, not knowing much about AI software development: is there an LLM that, given a set of unit tests, will generate an implementation that passes those tests? And is such a practice commonly used in the community, and if not, why not?
> Experienced developers know what subsets of tests are critical, avoiding much work.
And they do know this for programs written by other experienced developers, because they know where to expect "linearity" and where to expect steps in the output function. (Testing 0, 1, 127, 128, 255 is important; 89 and 90 likely are not, unless that's part of the domain knowledge.) This is not necessarily true for statistically derived algorithm descriptions.
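A toy illustration of that boundary intuition, assuming a hypothetical function that clamps to the unsigned-byte range:

```python
def clamp_byte(x: int) -> int:
    """Clamp an integer into the unsigned byte range [0, 255]."""
    return max(0, min(255, x))

# Experienced testers probe the steps in the output function (the edges
# of the range), not arbitrary interior points like 89 or 90.
boundary_cases = [(-1, 0), (0, 0), (1, 1), (127, 127),
                  (128, 128), (255, 255), (256, 255)]
for x, expected in boundary_cases:
    assert clamp_byte(x) == expected
```

The point in the comment above is that this intuition relies on knowing where the steps are, which may not hold for a statistically generated implementation.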
That depends a bit on whether you view and use unit-tests for
a) Testing that the spec is implemented correctly, OR
b) As the Spec itself, or part of it.
I know people have different views on this, but if unit-tests are not the spec, or part of it, then we must formalize the spec in some other way.
If the spec is not written in some formal way, then I don't think we can automatically verify whether the implementation implements it or not. (That's what the cartoon was about.)
> then we must formalize the spec in some other way.
For most projects, the spec is written in formal natural language (like specs in other professions), and that is mostly fine.
If you want your unit tests to be the spec, as I wrote in https://news.ycombinator.com/item?id=46667964, quite A LOT of them would be needed. I'd rather learn to write proofs than try to exhaustively list all possible combinations of a (near-)infinite number of input/output pairs. Unit tests are simply the wrong tool, because they amount to taking excerpts from the library of all possible books. I don't think that is what people mean by e.g. TDD.
What the cartoon is about is that any formal(-enough) way to describe program behaviour will just be yet another programming tool/language. If you have some novel way of program specification, someone will write a compiler and then we might use it, but it will still be programming and LLMs ain't that.
Anecdote: I have decades of software experience, and am comfortable both writing code myself and using AI tools.
Just today, I needed a basic web application, the sort of which I can easily get off the shelf from several existing vendors.
I started down the path of building my own, because, well, that's just what I do, then after about 30 minutes decided to use an existing product.
I have a hunch that, even with AI making programming so much easier, there is still a market for buying pre-written solutions.
Further, I would speculate that this remains true of other areas of AI content generation. For example, even if it's trivially easy to have AI generate music per your specifications, it's even easier to just play something that someone else already made (be it human-generated or AI).
I've heard that SaaS never really took off in China because the oversupply of STEM people has suppressed developer salaries so low that companies just hire a team of devs to build all their needs in house. Why pay for a SaaS when devs are so cheap? These are just anecdotes. It's hard for me to figure out what's really going on in China.
What if AI brings the China situation to the entire world? Would the mentality shift? You seem to be basing it on the cost-benefit calculations of companies today. Yes, SaaS makes sense when your developers (many of them possibly mediocre) are so expensive that it's cheaper to pay a company that has already done the work of finding good developers and spent the capital to build a decent version of what you're looking for. But compare that to a scenario where the cost of a good developer has fallen dramatically, so you can produce the same results for far less money: a cheap developer (good or mediocre, it doesn't matter) guiding an AI. That cheap developer doesn't even have to be in the US.
> I've heard that SaaS never really took off in China because the oversupply of STEM people has suppressed developer salaries so low that companies just hire a team of devs to build all their needs in house. Why pay for a SaaS when devs are so cheap? These are just anecdotes. It's hard for me to figure out what's really going on in China.
At the high end, China pays SWEs better than South Korea, Japan, Taiwan, India, and much of Europe, so they attract developers from those locations. At the low end, they have a ton of low- to mid-tier developers from 3rd-tier+ institutions who can hack well enough. It is a bit like India: skilled people with credentials to back it up can do well, but there are tons of lower-skilled people with some ability who are relatively cheap and useful.
China is going big on local LLMs. Not sure what that means long term, but Alibaba's Qwen is definitely competitive, and it's the main story these days if you want to run a coding model locally.
Thank you for the insight. Those countries you listed are nowhere near US salaries. I wonder what the SaaS market is like in Europe? I hear it's used, but the problem is too much reliance on American companies.
I hear those other Asian countries are just like China in terms of adoption.
> China is going big on local LLMs. Not sure what that means long term, but Alibaba's Qwen is definitely competitive, and it's the main story these days if you want to run a coding model locally.
It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny to try to build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
When I worked in China for Microsoft China, I was making 60-70% of what I would have made back in the US working the same job, but my living expenses mostly made up for that. I learned that most of my non-Chinese Asian colleagues were in it for the money rather than the experience (this was basically my dream job; now I have to settle for working in the States for Google).
> It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the stack is the better approach, at least right now. Here in the US they are spending every last penny to try to build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
China lacks those big NVIDIA GPUs that were sanctioned and are now export-tariffed, so going with smaller models that could run on the hardware they could access was the best move for them. This could either work out (local LLM computing is the future, and China is ahead of the game by circumstance) or not (big server-based LLMs are the future, and China is behind the curve). I think the Chinese government would actually have preferred centralized control and censorship, but the current situation is that the Chinese models are the most uncensored you can get these days (with some fine-tuning, they are heavily used in the adult entertainment industry... haha, socialist values).
I wouldn't trust the Chinese government to not do Skynet if they get the chance, but Chinese entrepreneurs are good at getting things done and avoiding government interference. Basically, the world is just getting lucky by a bunch of circumstances ATM.
Fair point! And I wasn't clear: my anecdote was me, personally, needing an instance of some software. Rather than me personally either write it by hand, or even write it using AI, and then host it, I just found an off-the-shelf solution that worked well enough for me. One less thing I have to think about.
I would agree that if the scenario is a business, to either buy an off-the-shelf software solution or pay a small team to develop it, and if the off-the-shelf solution was priced high enough, then having it custom built with AI (maybe still with a tiny number of developers involved) could end up being the better choice. Really all depends on the details.
Does that automatically translate into more openings for the people whose full time job is providing that thing? I’m not sure that it does.
Historically, it would seem that often lowering the amount of people needed to produce a good is precisely what makes it cheaper.
So it’s not hard to imagine a world where AI tools make expert software developers significantly more productive while enabling other workers to use their own little programs and automations on their own jobs.
In such a world, the number of "lines of code" being used would be much greater than today.
But it is not clear to me that the amount of people working full time as “software developers“ would be larger as well.
> Does that automatically translate into more openings for the people whose full time job is providing that thing?
Not automatically, no.
How it affects employment depends on the shapes of the relevant supply/demand curves, and I don't think those are possible to know well for things like this.
For the world as a whole, it should be a very positive thing if creating usable software becomes an order of magnitude cheaper, and millions of smart people become available for other work.
Given the products that the software industry is largely focused on building (predatory marketing for the attention economy and surveillance), this unfortunately may be the case.
I debate this in my head way too much & from each & every perspective.
Counterargument: if what you say is true, we will have a lot more custom & personalized software, and the tech stacks behind it may be even more complicated than they currently are, because we now want to add LLMs that can talk to our APIs. We might also be adding multiple LLMs to our back ends to do things. Maybe we're replacing 10 developers, but now someone has to manage that LLM infrastructure as well.
My opinion will change by tomorrow, but I could see more people building software who are currently experts in other domains. I can also see software engineers focusing more on keeping the new, more complicated architecture from falling apart & trying to enforce tech standards. Our roles may become more infra & security. Fewer features, more stability & security.
Jevons Paradox does not last forever in a single sector, right? Take the manufacturing business, for example. We can make more and more stuff at increasingly lower prices, yet we ended up outsourcing our manufacturing and the entire sector withered. Manufacturing has also become less lucrative over the years, which means there has been less and less demand for labor.
You're right. I updated it to "in a single sector". The context is the future demand for software engineers, hence I was wondering whether it's possible that we won't have enough demand for the profession, even though society as a whole will benefit from the dropping unit cost and will probably invent a lot of new demand in other fields.
I'm quite convinced that software (and, more broadly, implementing the systems and abstractions) seems to have virtually unlimited demand. AI raises the ceiling and broadens software's reach even further as problems that previously required some level of ingenuity or intelligence can be automated now.
Invoking the Jevons paradox here is stupid. What happened in the past is not a guarantee for the future. If you look at the economy, you would struggle to find buyers for any slop AI can generate, but execs keep pushing it. Case in point: the whole Microslop saga, where execs treat paying customers as test subjects to please the shareholders.
A good example is the many users looking to ditch Windows for Linux due to AI integrations and a generally worse user experience. Is this the year of the Linux desktop?
> Classic Jevons Paradox - when something gets cheaper the market for it grows. The unit cost shrinks but the number of units bought grows more than this shrinkage.
That's completely disconnected from whether software developer salaries decrease or not, or whether the software developer population decreases or not.
The introduction of the loom created many more jobs, but these were low-paid jobs that demanded little skill.
All automation you can point to in history resulted in operators needing less skill to produce, which results in less pay.
There is no doubt (i.e. I have seen it) that lower-skilled folk are absolutely going to crush those elitist developers who keep going on about how they won't be affected by automated code generation, that it will only hit the devs doing unskilled mechanical work.
Sure - because prompting requires all that skill you have? Gimme a break.