It's no longer just a moral panic; it's a global media meltdown. Geoffrey Hinton quit Google (after years of working on literal AI-assisted world domination) to give interviews to any outlet that'll have him, having very public realizations about the completely unexpected and sudden dangers of AI development. There is no media site or news channel on the planet NOT talking about the horrible, terrible, world-destroying dangers of AI right now. In response, the general population is panicking as well, and inevitably, so are politicians. What's going on? Are LLMs really signalling the end times?
LLMs like GPT are pre-trained and cannot continue learning in their current form. It is infeasible to retrain gigantic models from scratch every few months. We must develop strategies to overcome this limitation and enable continuous learning in language models.
History
The development of Transformer models in AI research is built upon a rich scientific heritage spanning several decades. Key milestones and contributions from basic neural networks (NNs) to modern Transformer architectures include:
Large Language Models (LLMs), such as GPT-4, represent words using a combination of tokenization, word embeddings, and context information.
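To make that concrete, here is a minimal sketch of the first two pieces, tokenization and embedding lookup, using a hypothetical toy vocabulary and a randomly initialized embedding matrix (real models learn vocabularies of tens of thousands of tokens and embedding matrices with thousands of dimensions; the greedy longest-match tokenizer below only approximates how BPE-style tokenizers behave). Context information, the third piece, comes from the attention layers and is not shown here.

```python
import numpy as np

# Toy vocabulary (hypothetical; real tokenizers learn ~50k-100k subword tokens)
vocab = {"The": 0, " cat": 1, " sat": 2, " on": 3, " the": 4, " mat": 5}

def tokenize(text, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    while text:
        match = max((t for t in vocab if text.startswith(t)),
                    key=len, default=None)
        if match is None:
            raise ValueError(f"no token matches {text!r}")
        tokens.append(vocab[match])
        text = text[len(match):]
    return tokens

# Each token ID indexes one row of a learned embedding matrix;
# here the matrix is random, in a real model it is trained.
rng = np.random.default_rng(0)
d_model = 8  # tiny; GPT-scale models use thousands of dimensions
embeddings = rng.standard_normal((len(vocab), d_model))

ids = tokenize("The cat sat on the mat", vocab)
vectors = embeddings[ids]  # shape: (num_tokens, d_model)
print(ids)            # [0, 1, 2, 3, 4, 5]
print(vectors.shape)  # (6, 8)
```

The key point is that the model never sees words directly: it sees integer token IDs, and each ID is swapped for a dense vector before any computation happens.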