
Paper Reading Group 4: Diving Into Open-Source LLM Mistral 7B

Event description

Open-source models are rapidly improving and catching up to OpenAI and Google. Mistral 7B is a 7.3-billion-parameter large language model that puts up impressive numbers on benchmarks.

Mistral 7B and the new Mixtral 8x7B mixture-of-experts model are among the best and most efficient open-source models.

Mistral's 7B model is very efficient and beats larger Llama models, allowing it to run more easily on consumer hardware.

The recently released Mixtral 8x7B mixture-of-experts model, with roughly 47B total parameters, performs comparably to the larger Llama 2 70B. Mixture-of-experts models are becoming the new trend; it's rumored that GPT-4 is also a mixture-of-experts model.
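To make the mixture-of-experts idea concrete, here is a minimal sketch (in NumPy, with made-up dimensions and linear "experts" standing in for the real feed-forward blocks): a router scores every expert, only the top-k experts actually run on the token, and their outputs are combined with softmax weights. This is why an MoE model can have far more total parameters than it uses per token.

```python
import numpy as np

def moe_layer(x, experts, gate, top_k=2):
    """Toy mixture-of-experts routing.

    x:       (d,) token representation
    experts: list of (d, d) matrices, one per expert (stand-ins for FFN blocks)
    gate:    (num_experts, d) router matrix
    """
    logits = gate @ x                      # router scores one logit per expert
    top = np.argsort(logits)[-top_k:]      # keep only the top-k experts
    w = np.exp(logits[top])
    w /= w.sum()                           # softmax over the selected experts only
    # Only the chosen experts run, so per-token compute scales with top_k,
    # not with the total number of experts.
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 8
x = rng.normal(size=d)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(n_experts, d))
y = moe_layer(x, experts, gate)
print(y.shape)
```

With top_k=2 and 8 experts, roughly a quarter of the expert parameters are active per token, which is the intuition behind Mixtral 8x7B matching much denser models at a fraction of the inference cost.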

What's in store?

In this talk, Sai will break down:

  • How Mistral 7B and mixture-of-experts models work
  • How it was trained and fine-tuned, and how it differs from Llama
  • What you need to know if you want to use it
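If you want to try the model yourself, one practical detail is the prompt format: the instruct-tuned variant expects user turns wrapped in [INST] tags. A minimal helper (the function name is hypothetical; check the model card's chat template for the authoritative format) might look like:

```python
def build_mistral_prompt(user_message: str) -> str:
    # Mistral 7B Instruct's chat template wraps the user turn in
    # [INST] ... [/INST]; the model's reply follows the closing tag.
    # (Hypothetical helper; verify against the official chat template.)
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_mistral_prompt("Summarize the Mistral 7B paper in one sentence.")
print(prompt)
```

In practice, libraries such as Hugging Face `transformers` apply this template for you via the tokenizer's chat-template support, so hand-building prompts is mainly useful for understanding what the model actually sees.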

Link to paper: https://arxiv.org/abs/2310.06825

Your host:

Sai Shreyas Bhavanasi has worked in computer vision and reinforcement learning and has published two first-author papers. He has also worked as an MLE and data analyst.