# Chapter 7: Finetuning to Follow Instructions
## Main Chapter Code
- 01_main-chapter-code contains the main chapter code and exercise solutions
## Bonus Materials
- 02_dataset-utilities contains utility code for preparing an instruction dataset
- 03_model-evaluation contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
- 04_preference-tuning-with-dpo implements code for preference finetuning with Direct Preference Optimization (DPO)
- 05_dataset-generation contains code to generate and improve synthetic datasets for instruction finetuning
- 06_user_interface provides an interactive user interface for chatting with the pretrained LLM