Enable users to easily run the README example on cloud GPUs with “Open in Studio” badge #52

Closed · wants to merge 3 commits
README.md — 6 changes: 5 additions & 1 deletion
@@ -57,7 +57,11 @@ huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original

## Quick Start

You can follow the steps below to quickly get up and running with Llama 3 models. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes).
<a target="_blank" href="https://lightning.ai/lightning-ai/studios/chat-with-llama-3-llm-by-meta-ai">
<img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio"/>
</a>

You can follow the steps below to quickly get up and running with Llama 3 models or use the Lightning AI Studio linked above. These steps will let you run quick inference locally, or self-hosted in the Studio. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes).

1. In a conda env with PyTorch / CUDA available clone and download this repository.

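For context, the local quick-start path that the revised paragraph points to looks roughly like the sketch below. The repository URL, checkpoint directory layout, example script name, and flags are assumptions based on typical usage of this repo and are not part of this diff.

```shell
# Minimal sketch of the local quick-start flow (names and flags are assumptions,
# not part of this PR).

# 1. Clone the repository and install its dependencies in a conda env with PyTorch / CUDA.
git clone https://github.com/meta-llama/llama3.git
cd llama3
pip install -e .

# 2. Download the instruct weights referenced in the hunk header above.
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct \
    --include "original/*" --local-dir Meta-Llama-3-8B-Instruct

# 3. Run a quick chat-completion inference locally on one GPU.
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/original/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/original/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```

The "Open in Studio" badge added in this diff offers the alternative path: the same example runs in a hosted Lightning AI Studio instead of on a local GPU.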