Paper Reading: A History of Pretrained Foundation Models

Event details
This event was not recorded.
Event description

Foundation models have become the hot new way to build powerful AI systems without the hardware and cost required to train them from scratch. With open-source models like LLaMA 2, Mistral 7B, and LLaVA available on Hugging Face, interest in the field has exploded as hobbyists and professionals alike scramble to adapt these pretrained models for downstream tasks.
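For readers new to this workflow, here is a minimal sketch of what "adapting a pretrained model for a downstream task" can look like with the Hugging Face transformers library. The small distilbert-base-uncased checkpoint stands in for the larger models mentioned above, and the two-label classification head is an illustrative assumption, not a recipe from the paper.

```python
# Minimal sketch (illustrative, not from the survey): load a pretrained
# checkpoint and attach a new task-specific head for downstream fine-tuning.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"  # small stand-in for LLaMA 2 / Mistral 7B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,  # new, randomly initialized classification head
)

# A single forward pass on toy input; in practice the head (and often the
# backbone) would be fine-tuned on labeled downstream data.
inputs = tokenizer("Foundation models adapt well to new tasks.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```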

In contrast to AI systems built for specialized purposes (like recommendation engines and image classifiers), foundation models are general-purpose tools that can take the knowledge they learn from one task and apply it to another; a small sketch of this transfer appears below. Two of their most striking properties are emergent behavior and homogeneity: the latter refers to the trend toward a single model and training approach powering a wide range of applications, which some see as foreshadowing artificial general intelligence (AGI).
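As a concrete illustration of that knowledge transfer, the sketch below (our own assumption, not taken from the paper) freezes a pretrained encoder and trains only a tiny task head on its features; the checkpoint and toy data are placeholders.

```python
# Minimal transfer-learning sketch: a frozen pretrained encoder supplies
# general-purpose features, and only a small task head is new.
import torch
from transformers import AutoTokenizer, AutoModel

encoder_name = "distilbert-base-uncased"  # assumed stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
encoder = AutoModel.from_pretrained(encoder_name)
encoder.eval()  # reuse the pretrained knowledge as-is, no backbone updates

texts = ["great movie", "terrible plot"]  # toy downstream examples
with torch.no_grad():
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    features = encoder(**batch).last_hidden_state[:, 0]  # first-token embeddings

head = torch.nn.Linear(features.shape[-1], 2)  # the only trainable part
print(head(features).shape)  # torch.Size([2, 2])
```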

In this session, we will go over the history of pretrained foundation models, some of the techniques used in foundation models for natural language processing (NLP), computer vision (CV), and graph learning (GL), the current state-of-the-art models, and the challenges they still face.

Paper link: A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT

Recommended additional reading: On the Opportunities and Risks of Foundation Models