Prompt runs open-source LLMs on Mac M1 machines with 16GB of memory. Its aim is to make LLMs accessible, eliminating the need for expensive cloud infrastructure and Nvidia GPUs. It uses open-source models from Hugging Face that are proven to perform well on Mac M1s, ensuring a good experience on common developer hardware.
To install, clone the repository and build:

git clone https://github.com/opszero/prompt && cd prompt && ./build.sh

Built on top of:
- TheBloke/WizardLM-7B-V1.0-Uncensored-GGML - Noncommercial
- TheBloke/WizardLM-13B-V1.0-Uncensored-GGML
cd examples
./job-post-extract-company-name-wizardlm-7b.sh
./job-post-extract-company-name-wizardlm-13b.sh
- CRD716/ggml-vicuna-1.1-quantized - Noncommercial
cd examples && ./job-post-extract-company-name-vicuna-7b.sh
- TheBloke/MPT-7B-Instruct-GGML - Commercial
cd examples && ./job-post-extract-company-name-mpt5-7b.sh
- RachidAR/falcon-7B-ggml - Commercial
cd examples && ./job-post-extract-company-name-falcon-7b.sh
- TheBloke/Llama-2-7B-GGML - Commercial
cd examples && ./job-post-extract-company-name-llama-2-7b.sh
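The example scripts above all follow the same pattern: feed a job posting and an extraction instruction to a local GGML model. As a rough sketch of what one of these scripts might look like inside, the snippet below builds the prompt and prints the llama.cpp-style invocation as a dry run; the `./main` binary, model path, and flags are assumptions, not the project's actual internals.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a job-post-extract-company-name-*.sh script.
# Assumes a llama.cpp-style `main` binary and a downloaded GGML model;
# the model path below is a placeholder. We only echo the command
# (dry run) so the sketch is self-contained.
MODEL="models/WizardLM-7B-V1.0-Uncensored.ggmlv3.q4_0.bin"  # assumed path

# The instruction plus the job post text, concatenated into one prompt.
PROMPT="Extract the company name from this job post: Acme Corp is hiring a Rust engineer in Berlin."

# Print the command that a real script would run.
echo ./main -m "$MODEL" -p "$PROMPT" -n 64
```

Swapping `MODEL` for a different GGML file is all that distinguishes the 7B, 13B, and commercial-model variants of these scripts.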