We’re excited to announce a new series of research documents, aimed at developers, in which we’ll examine a progression of techniques for performing AI classification tasks on small datasets. We’ll start with home-brewed approaches like fine-tuning open-source neural networks and work our way up to rapidly evolving language-model extensions like agent frameworks.

We’ll be producing the following four technical studies:

Study 1: Deep learning and data augmentation basics
In this study, we discuss techniques for improving a model’s performance on a classification task with a very small dataset. Our goal is to show different ways to get more out of your data with a commonly used, pretrained NLP model.
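To give you a taste of where Study 1 is headed, here’s a minimal sketch of the pattern it will explore: stretch a tiny labeled dataset with a simple augmentation pass, then fine-tune a pretrained model on the result using Hugging Face Transformers. The checkpoint, the word-dropout augmentation, and the toy examples below are illustrative choices on our part, not the study’s actual setup.

```python
import random
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# A tiny labeled dataset (illustrative stand-in for real data).
texts = ["Great product, works as advertised.",
         "Support never answered my emails."]
labels = [1, 0]  # 1 = positive, 0 = negative

def word_dropout(text: str) -> str:
    """Naive augmentation: drop one random word to make a new example."""
    words = text.split()
    if len(words) > 1:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

# Double the dataset with augmented copies.
train_texts = texts + [word_dropout(t) for t in texts]
train_labels = labels + labels

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

class TextDataset(Dataset):
    """Wraps tokenized texts and labels for the Trainer API."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=TextDataset(train_texts, train_labels),
)
trainer.train()
```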

Study 2: Open-source NLP model comparison
In this study, we compare several commonly used, pretrained open-source NLP models on the same small-dataset classification task. Our goal is to show how much the choice of model matters when you only have a little data.
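As a rough preview of the comparison, a loop like the sketch below runs several pretrained checkpoints over the same held-out examples and scores each one. The two sentiment checkpoints and the toy examples are illustrative placeholders, not the study’s final lineup.

```python
from transformers import pipeline

# Held-out examples and gold labels (illustrative placeholders).
eval_texts = ["Great product, works as advertised.",
              "The package never arrived."]
eval_labels = ["POSITIVE", "NEGATIVE"]

# Score each candidate checkpoint on the same examples.
for name in ["distilbert-base-uncased-finetuned-sst-2-english",
             "siebert/sentiment-roberta-large-english"]:
    clf = pipeline("text-classification", model=name)
    preds = [p["label"] for p in clf(eval_texts)]
    acc = sum(p == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
    print(f"{name}: accuracy = {acc:.2f}")
```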

Study 3: Classification with large language models
Ever heard of ‘em? Study 3 is where we’ll talk about how big players like Meta’s Llama 3 and OpenAI’s GPT model series can handle classification tasks with little or no task-specific training.
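For a sense of how different this looks from fine-tuning, here’s a minimal sketch of prompt-based classification against a hosted LLM, using OpenAI’s Python SDK. The model name and prompt wording are our illustrative picks, not the study’s.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment; the model name and
# prompt wording below are illustrative placeholders.
client = OpenAI()

def classify(text: str) -> str:
    """Ask the model for a one-word sentiment label."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Classify the sentiment of the user's text. "
                         "Reply with exactly one word: positive or negative.")},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

print(classify("The battery died after two days."))  # expected: "negative"
```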

Study 4: Agent frameworks
We’ll finish the series with a discussion of agent programming frameworks like Microsoft AutoGen and crewAI that are built on top of the models from Study 3. Some big names are calling attention to agents, so you won’t want to miss this one!
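As a preview, here’s a minimal two-agent sketch in the style of Microsoft AutoGen’s pyautogen 0.2 interface. The framework evolves quickly, so treat this as a sketch under that version assumption rather than the study’s code, and the model choice as a placeholder.

```python
from autogen import AssistantAgent, UserProxyAgent

# Assumes OPENAI_API_KEY is set in the environment; the model name is an
# illustrative placeholder.
llm_config = {"config_list": [{"model": "gpt-4o-mini"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)

# human_input_mode="NEVER" plus zero auto-replies keeps this to one exchange.
user_proxy = UserProxyAgent("user_proxy",
                            human_input_mode="NEVER",
                            max_consecutive_auto_reply=0,
                            code_execution_config=False)

# The user proxy sends the task; the assistant replies with a label.
user_proxy.initiate_chat(
    assistant,
    message=("Classify this review as positive or negative: "
             "'Shipping was slow but the product is fantastic.'"),
)
```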

Enjoy the series and please reach out to sam@stelerivers.com or shane@stelerivers.com if you’d like to chat with us!