History
The development of Transformer models in AI research builds on a rich scientific heritage spanning several decades, from basic neural networks (NNs) through recurrent and attention-based models to modern Transformer architectures.
Large Language Models (LLMs), such as GPT-4, represent words using a combination of tokenization, word embeddings, and contextual information.
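The three ingredients above can be sketched in a toy form. This is not GPT-4's actual pipeline (real models use learned subword tokenizers such as BPE and attention layers); every name and value below is an illustrative assumption, with context approximated by simple vector averaging:

```python
# Toy sketch of tokenization -> embeddings -> context mixing.
# Illustrative only; real LLMs use BPE tokenizers and attention, not averaging.
import random

random.seed(0)

# 1) Tokenization: split text into tokens (real models use subword schemes).
def tokenize(text):
    return text.lower().split()

# 2) Embeddings: map each token to a dense vector (here, random 4-d vectors).
vocab = {}
def embed(token, dim=4):
    if token not in vocab:
        vocab[token] = [random.uniform(-1, 1) for _ in range(dim)]
    return vocab[token]

# 3) Context: crude stand-in for attention -- replace each token's vector
#    with the average over the whole sequence, so every representation
#    reflects its surroundings.
def contextualize(vectors):
    n = len(vectors)
    dim = len(vectors[0])
    avg = [sum(v[d] for v in vectors) / n for d in range(dim)]
    return [avg[:] for _ in range(n)]

tokens = tokenize("Transformers changed NLP")
vectors = [embed(t) for t in tokens]
context_vectors = contextualize(vectors)
print(tokens)  # ['transformers', 'changed', 'nlp']
```

In a real model, step 3 is where self-attention replaces the naive average: each token's output vector becomes a learned, weighted combination of all the others.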
The academic publishing industry in general, and Elsevier in particular, is a curse upon academia and human progress. But publishers only have power if we give it to them. There is still hope that one day young academics will simply choose not to publish there anymore, and when the old guard dies off, so will interest in the old information silos.