I’m going to take some exception to something, again.
Superficially, it might seem like a good thing if our theoretical models can match real-world data. But is it? If I succeed in making a computer spit out accurate numbers from a model that is too complex for my meagre mortal mind to disentangle, can I claim to have learnt anything about the world?
In terms of improving our understanding and ability to develop new ideas and innovations, making a computer produce the same data as an experiment has little value.
I agree with that. You could also take the example that you can fit any data set with a polynomial of sufficient order — that would tell you little or nothing about the actual mathematical function at play in your data (think epicycles — arbitrarily good agreement with planet positions, no insight into gravity). But I think the mistake here is extrapolating from this class of models — ones that are too complex to comprehend or are otherwise not descriptive of the interactions taking place — and concluding that all precise and accurate models are bad (which is the vibe I’m getting here). That’s not the case — the whole goal of many physicists is to get models that match experiment to the highest degree possible while also understanding the details. I think the author is cherry-picking the drawbacks of good models to make his point.
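To make that polynomial point concrete, here's a quick sketch (mine, not from the quoted piece; the inverse-square "law" and the numbers are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Experimental" data: an underlying inverse-square law plus a little noise
x = np.linspace(1.0, 3.0, 7)
y = 1.0 / x**2 + rng.normal(0.0, 0.01, x.size)

# A degree-6 polynomial threads all 7 points essentially exactly...
coeffs = np.polyfit(x, y, deg=6)
print("max residual on the data:", np.max(np.abs(np.polyval(coeffs, x) - y)))

# ...but it has learned nothing about the law; step outside the fitted
# range and the polynomial goes its own way while 1/x^2 keeps working.
x_new = 5.0
print("polynomial prediction at x=5:", np.polyval(coeffs, x_new))
print("actual 1/x^2 at x=5:        ", 1.0 / x_new**2)
```

The fit reproduces the data to machine precision, but ask it about a point it hasn't seen and it's useless: accuracy without understanding.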
If I want to learn how an amoeba (or anything) works, by theoretical modelling, I need to leave things out of the model. Only then will I discover whether those features were important and, if so, for what.
Again, this is true as far as it goes, but it only addresses one path to understanding. You can add things on to a simple model, too, or the effects can be sufficiently different that you know which part is contributing. Again — overselling. A lot of models are built up over time as we get better data; not everyone starts with all the potential pieces in place and has to prune them away.
Atomic physics has a model of the hydrogen atom: a basic quantum-mechanical model that predicts the gross energy-level structure; similar to the Bohr model, but better, because it gets details right that the Bohr model lacks. (IOW the Bohr model isn’t better because it’s simple. Simple doesn’t trump being wrong, but that’s not really my point.) The simple QM model doesn’t quite work, though. Add in corrections for relativistic effects and the spin-orbit interaction, and you account for what we call the fine structure, which shifts the energy states. Include the detail that the nucleus has a magnetic moment that affects the electron and, bingo, you get the even smaller hyperfine splitting. Add some QED into the mix and you explain the Lamb shift, which lifts the degeneracy of two of the levels of the first excited state.
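For concreteness, the textbook version of that hierarchy goes roughly like this (standard numbers, my summary rather than anything from the piece I'm responding to):

```latex
% Gross structure (Schroedinger, or Bohr): energies of order eV
E_n = -\frac{13.6\,\mathrm{eV}}{n^2}

% Fine structure (relativistic + spin-orbit corrections), suppressed by
% \alpha^2 \approx 5 \times 10^{-5}:
E_{n,j} = -\frac{13.6\,\mathrm{eV}}{n^2}
          \left[ 1 + \frac{\alpha^2}{n^2}
          \left( \frac{n}{j + \tfrac{1}{2}} - \frac{3}{4} \right) \right]

% Hyperfine structure (nuclear magnetic moment): splits the ground state
% by about 5.9 \times 10^{-6}\,\mathrm{eV}, i.e. the 1420 MHz / 21 cm line.

% Lamb shift (QED): lifts the 2S_{1/2} / 2P_{1/2} degeneracy
% by roughly 1057 MHz.
```

Each layer is a small, well-understood correction on top of the previous one, and each earns its keep by explaining a measured splitting; that's pretty much the opposite of an incomprehensible black box.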
There’s also the problem that if your model is too simple for the task, you have no idea whether the basic idea is right, because none of the data will match up. You have to be able to construct experiments where your model is at least approximately right to have any reasonable hope of confirming it.
There is incredible value in simple models. But the value doesn’t automatically diminish if you add some well-understood, higher-order corrections.
(The spherical cow joke is in the link. Point-cow joke and cartoon here)