In the whirlwind of current AI advancements, trying to keep up with every new change can be dizzying. But unless you're an active researcher in the field, you really don't need to. You just need to keep abreast of the aggregate, because that's where the real story is for the rest of us.
Large Language Models (LLMs) have created a paradigm shift in computing. One of the nice things about paradigm shifts, though, is that we know how they work. In 1962, philosopher of science Thomas Kuhn described the anatomy of a paradigm shift (and also coined the phrase itself) in The Structure of Scientific Revolutions.
In the early phases of a new paradigm, there is an avalanche of "normal science," where the new paradigm begets huge amounts of discovery based on the new premises and techniques. This normal science is incremental, however: each new advancement pushes the field only marginally. Kuhn called it "puzzle-solving" — applying the new paradigm in specific contexts to fit specific problems. It is only in aggregate, from the middle distance, that we can see the net effect of these incremental changes, or the abstractions that cut across solutions to multiple puzzles. It is only from a distance that the larger patterns of the new paradigm begin to emerge.
For business and engineering leaders focused on improving customers' or clients' lives using AI, there is no utility in following every single advancement, or in knowing in detail every change to the technical landscape. You don't need to dive into questions of LoRA vs. QLoRA fine-tuning, or whether Kolmogorov-Arnold networks provide a marginal improvement over multilayer perceptrons. It may be valuable to know that we're making strides in reducing both pre-training and fine-tuning costs, that it's getting easier and cheaper to run inference against multiple models at the same time, and that context windows are expanding. But even that may be more fine-grained than your needs require.
So don't worry about every detail. Understand the general architecture of the kinds of problems you hope to solve with AI, where the existing bottlenecks are, and keep an eye there. And spend your extra time reading some helpful philosophy of science.