I was thinking about this the other day. If we did a plot of 'model ability' vs 'computational resources' what kind of relationship would we see? Is the improvement due to algorithmic improvements or just more and more hardware?
I don't think adding more hardware does much on its own beyond scaling throughput. I think most improvement gains come from specialized post-training (RL) after the base training is done. I suppose more GPU RAM makes a larger model feasible, so in that sense more hardware could mean a better model. I get the feeling all the datacenters being proposed are there either to serve the API or to create and train various specialized models from a general base one.
Not really. A 100 LOC "harness" that is basically an LLM in a loop with just a "bash" tool is way better today than the best agentic harness of last year.
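For the curious, that kind of harness really is tiny. Here's a minimal sketch: an agent loop with a single bash tool. The `fake_llm` function is a stub standing in for a real model API call (so the example runs offline); everything else, including the loop shape, message list, and step limit, is an illustrative assumption, not any particular product's implementation.

```python
import subprocess

def run_bash(cmd: str) -> str:
    """The single 'bash' tool: run a shell command, return combined output."""
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, timeout=30)
    return result.stdout + result.stderr

def fake_llm(messages):
    # Stub for a real model call. It asks for one command, then reads the
    # tool output and finishes, so the loop is demonstrable without an API.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "bash", "args": "echo hello from the harness"}
    return {"done": True, "answer": messages[-1]["content"].strip()}

def agent_loop(task: str, llm=fake_llm, max_steps=8):
    # The whole "harness": feed the transcript to the model, execute the
    # tool call it requests, append the output, repeat until it's done.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(messages)
        if action.get("done"):
            return action["answer"]
        output = run_bash(action["args"])
        messages.append({"role": "tool", "content": output})
    return "step limit reached"

print(agent_loop("say hello"))
```

Swap `fake_llm` for a real chat-completions call and this is most of the harness; the rest of the year-over-year improvement is the model, not the loop.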