

All of the “AI” garbage that is getting jammed into everything is merely scaled up from what has been before. Scaling up is not advancement.
I disagree. Scaling might seem trivial now, but the state-of-the-art NLP architectures of a decade ago (LSTMs) could not scale to the degree that our current methods can. Designing new architectures that perform better on GPUs (such as attention and Mamba) is a legitimate advancement. Furthermore, the viability of this level of scaling wasn’t really understood until phenomena like double descent (in which test error surprisingly goes down, rather than up, after increasing model complexity past a certain point) were discovered.
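To make the GPU point concrete, here's a minimal sketch of scaled dot-product attention in NumPy (names and shapes are just illustrative): the whole operation is a couple of matrix multiplies plus a softmax, which maps onto GPU hardware far better than the step-by-step recurrence inside an LSTM.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention.

    Q, K, V: arrays of shape (seq_len, d_k). Two matrix multiplies and a
    row-wise softmax -- highly parallel, unlike an LSTM's sequential loop.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # (seq_len, d_k)

# Toy self-attention over a length-8 sequence of 16-dim vectors:
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
print(scaled_dot_product_attention(x, x, x).shape)       # (8, 16)
```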
Beyond that, plenty of advances were needed to train deep networks at all. Better optimizers like Adam instead of plain SGD, plus tricks like residual connections and batch normalization, were necessary to scale up even small ConvNets, working around issues such as vanishing gradients and covariate shift that show up when you naively train deep networks.
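As a rough sketch (assuming PyTorch; the layer sizes are arbitrary), this is the kind of residual block those tricks add up to: the skip connection gives gradients a direct path through the stack, which is what mitigates vanishing gradients, and batch norm keeps activations well-scaled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """ResNet-style block: conv -> BN -> ReLU -> conv -> BN,
    with the input added back before the final ReLU."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: direct gradient path

# Swapping the optimizer is a one-line change over the same parameters:
block = ResidualBlock(16)
opt = torch.optim.Adam(block.parameters(), lr=1e-3)  # vs. torch.optim.SGD(...)
```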
This also isn’t an accurate characterization IMO. LLMs, and ML models in general, can generalize to unseen problems, even if they aren’t perfect at it; for instance, you’ll find that LLMs can produce commands to control robot locomotion, even across different robot types.
“Reasoning” here is based on chains of thought, where the model generates intermediate steps that then help it produce more accurate results. You can fairly argue that this isn’t reasoning, but it’s not like it’s traversing a fixed knowledge graph or something.
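A quick sketch of what that looks like in practice (the question and prompt wording are made up for illustration, and the prompts would be sent to whatever LLM API you use):

```python
# Direct prompting vs. chain-of-thought prompting: the only difference is
# that the second prompt asks for intermediate steps before the answer.

question = "A robot walks 3 m north, then 4 m east. How far is it from the start?"

direct_prompt = f"{question}\nAnswer with a single number."

cot_prompt = (
    f"{question}\n"
    "Think step by step: write out each intermediate calculation, "
    "then give the final answer on its own line."
)

# With the second prompt, the model emits intermediate tokens
# (e.g. "3-4-5 right triangle, so sqrt(3**2 + 4**2) = 5 m") and then
# conditions on them when producing the final answer, which is what
# tends to improve accuracy.
print(direct_prompt)
print(cot_prompt)
```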