🐑 Welcome to Alpaca

Stanford Alpaca aims to build and share an instruction-following LLaMA model, along with the code and data used to train it. Given an instruction from a user, the model generates a response that completes the task.

Overview

Here is a brief overview of Stanford Alpaca.

Stanford Alpaca will be available offline, where it can be a useful tool for all.

The current Alpaca model is fine-tuned from a 7B LLaMA model [1] on 52K instruction-following examples generated with the techniques in the Self-Instruct [2] paper, with some modifications that we discuss in the next section. In a preliminary human evaluation, we found that the Alpaca 7B model behaves similarly to the text-davinci-003 model on the Self-Instruct instruction-following evaluation suite [2].
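
To make the training setup concrete, below is a minimal sketch of supervised instruction fine-tuning, assuming the Hugging Face transformers and datasets libraries. The model identifier, data file name, prompt template, and hyperparameters are illustrative assumptions, not the project's exact released recipe.

```python
# Minimal sketch of supervised instruction fine-tuning (assumed stack:
# Hugging Face transformers + datasets). Names and hyperparameters are
# placeholders, not the project's exact recipe.
import json

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # placeholder: LLaMA weights require permission

# Instruction-tuning style prompt: the instruction is wrapped in a fixed
# template, and the model learns to produce the response that follows it.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


def build_dataset(path: str, tokenizer) -> Dataset:
    """Turn {"instruction", "input", "output"} records into tokenized text."""
    with open(path) as f:
        records = json.load(f)
    texts = [
        PROMPT.format(instruction=r["instruction"]) + r["output"] for r in records
    ]
    ds = Dataset.from_dict({"text": texts})
    return ds.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained(MODEL_NAME),
    args=TrainingArguments(
        output_dir="alpaca-sft", num_train_epochs=3, learning_rate=2e-5
    ),
    train_dataset=build_dataset("alpaca_data.json", tokenizer),
    # mlm=False makes the collator set next-token-prediction labels (causal LM)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```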

Our initial release contains the data generation procedure, dataset, and training recipe. We intend to release the model weights if we are given permission to do so by the creators of LLaMA. For now, we have chosen to host a live demo to help readers better understand the capabilities and limits of Alpaca, and to help us better evaluate Alpaca's performance with a broader audience.
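
As an illustration of what the released dataset looks like, the short sketch below loads it and inspects one record. The file name is an assumption; each record is expected to carry "instruction", "input", and "output" fields.

```python
# Sketch of inspecting the released instruction-following dataset. The file
# name is an assumption; each record is expected to have "instruction",
# "input" (possibly empty), and "output" fields.
import json

with open("alpaca_data.json") as f:
    data = json.load(f)

print(len(data))             # roughly 52K records
first = data[0]
print(first["instruction"])  # the task the model should perform
print(first["input"])        # optional context for the task; often ""
print(first["output"])       # the reference response used as the training target
```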
