# Fine-tune an LLM on Vietnamese Elementary Maths Solving

This is the LoRA checkpoint obtained by fine-tuning Meta-Math-Mistral-7B.

1. Install dependencies

   ```shell
   pip install -r requirements.txt
   ```

   or

   ```shell
   conda env create -f environment.yml
   ```
2. Configure the root directory

   ```shell
   cd scripts
   ```

   Replace the `BASE_DIR` environment variable value with the absolute path to the project directory.
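As a minimal sketch of what the step above sets up, a script can read `BASE_DIR` from the environment and build paths relative to it. The `resolve_base_dir` helper below is hypothetical (not part of this repository), shown only to illustrate the convention:

```python
import os
from pathlib import Path

# Hypothetical helper (not in this repository): resolve the project root
# from the BASE_DIR environment variable configured in the step above.
def resolve_base_dir(default: str = ".") -> Path:
    """Return the project root from BASE_DIR, falling back to `default`."""
    return Path(os.environ.get("BASE_DIR", default))
```

Scripts can then locate data and checkpoints as `resolve_base_dir() / "data"`, so nothing else needs an absolute path hard-coded.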

3. Training

   End-to-end fine-tuning:

   ```shell
   bash fine_tune.sh
   ```

   or, using LoRA:

   ```shell
   bash lora_fine_tune.sh
   ```
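The LoRA option above trains only a low-rank update instead of the full weight matrix: the effective weight is W' = W + (alpha / r) · B·A, where A is r×d_in and B is d_out×r, so only r·(d_in + d_out) parameters are learned rather than d_in·d_out. A framework-free sketch of that update (illustrative only, not this repository's training code):

```python
def matmul(B, A):
    """Multiply two matrices given as lists of rows."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply_lora(W, A, B, alpha, r):
    """Return the effective weight W + (alpha / r) * B @ A.

    W: frozen d_out x d_in base weight; A (r x d_in) and B (d_out x r)
    are the small trained LoRA matrices; alpha / r is the usual scaling.
    """
    delta = matmul(B, A)          # low-rank update, d_out x d_in
    s = alpha / r
    return [[W[i][j] + s * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]
```

This is why the LoRA checkpoint shipped here is small: only A and B are saved, and they are merged into (or applied alongside) the frozen Meta-Math-Mistral-7B weights.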
4. Inference

   Check out this notebook for more detail.
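For orientation before opening the notebook, loading a LoRA adapter on top of a base model typically looks like the sketch below, using Hugging Face `transformers` and `peft`. The base-model ID and adapter path are assumptions, not taken from this repository; the notebook is the authoritative reference:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model and adapter path -- check the notebook for the
# exact names this project uses.
base_id = "meta-math/MetaMath-Mistral-7B"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "outputs/lora_checkpoint")  # hypothetical path

prompt = "..."  # a Vietnamese elementary maths problem
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

`PeftModel.from_pretrained` keeps the adapter separate from the frozen base weights; `model.merge_and_unload()` can fold them in for slightly faster inference.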