Add a .gitignore and make HF deps fully optional #323

Open · wants to merge 3 commits into main

Conversation

muellerzr

Enable Accelerate integration by making HF deps fully optional

What does this add?

This PR adds import guards across unsloth for the various integration libs, making sure that core imports are still possible without triggering external lib imports.

It does so by following an is_x_available workflow, similar to what we use at HF.
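
For context, a minimal sketch of what such an availability check typically looks like (the helper names and structure here are illustrative assumptions, not necessarily the exact code added in this PR):

```python
# Illustrative sketch of an is_x_available-style check (hypothetical names).
import importlib.util
from functools import lru_cache

@lru_cache(maxsize=None)
def _is_package_available(name: str) -> bool:
    # find_spec reports whether the package is importable without actually importing it.
    return importlib.util.find_spec(name) is not None

def is_peft_available() -> bool:
    return _is_package_available("peft")

def is_transformers_available() -> bool:
    return _is_package_available("transformers")
```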

This PR also adds a .gitignore covering the usual Python working files, as I found it a bit cumbersome not being able to do `git add .`. If we want to remove it, that's quite alright 😉

Who is it for?

Users of unsloth who want to try out the cool gradient offloading mechanism, while only having the core parts of unsloth installed.

Why is it needed?

There are areas in the code that do a great deal of patching to transformers and peft. This PR simply guards said patching so it's only done if the lib is available.
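
As a rough illustration (hypothetical structure, not the actual unsloth patching code), the guard just wraps the existing patching in an availability check:

```python
# Hypothetical sketch: only apply patches when the library is installed.
if is_transformers_available():
    import transformers
    # ... apply unsloth's transformers patches here ...

if is_peft_available():
    import peft
    # ... apply unsloth's peft patches here ...
```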

What parts of the API does this impact?

User-facing:

None

Internal structure:

Adds new library import checks for:

  • bitsandbytes
  • peft
  • transformers
  • flash_attn

@danielhanchen
Contributor

@muellerzr Thanks again!! Sorry was caught up in stuff - will review today!

@muellerzr
Author

No worries @danielhanchen, no rush :) The integration on our side can still be built, it just can't be merged until this one is, and I probably won't get to that until later today/tomorrow, so plenty of time!

One thing I'll look at eventually: Accelerate is device-agnostic, so the torch.cuda-specific code may eventually need to become device-agnostic in some form.

That may mean eventually choosing this implementation for GPUs, and accelerate mimicking it but with appropriate autograd decorators. We shall see, and find a good way to share credit and combine efforts still :)

I'm now wondering if it can just be torch.amp.custom_fwd as well, but I haven't dug into it yet.
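
For reference, this is roughly how the custom_fwd/custom_bwd decorators get applied to a custom autograd Function - a sketch assuming PyTorch >= 2.4, where torch.amp.custom_fwd takes a device_type argument (older versions use torch.cuda.amp.custom_fwd); the Function itself is a made-up example, not unsloth's offloading code:

```python
import torch

class OffloadedMatmul(torch.autograd.Function):
    # Hypothetical example Function, just to show where the decorators go.
    @staticmethod
    @torch.amp.custom_fwd(device_type="cuda")
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return x @ weight

    @staticmethod
    @torch.amp.custom_bwd(device_type="cuda")
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        return grad_output @ weight.t(), x.t() @ grad_output
```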

@danielhanchen
Contributor

@muellerzr Sorry, just got to this - I ran some examples on T4 and L4 and nothing seems broken + had a look through the code!

In terms of autocast - I can add an update to make the autocast work on all devices - I need to check again how (forgot lol)

For moving to GPU and back, I think there can be a way forward - let me investigate first
