from_openai doesn't work with llama-cpp-python #603
Understood. We'll need to make a PR to make sure `base_url` exists first before we check it.
@jxnl
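A minimal sketch of the guard being described (hypothetical: the `Provider` enum here stands in for instructor's, with names taken from the traceback below, and the substring check on `base_url` is an assumption about how the provider is detected):

```python
from enum import Enum, auto


class Provider(Enum):
    # Stand-in for instructor's Provider enum; names from the traceback below.
    OPENAI = auto()
    TOGETHER = auto()


def detect_provider(client) -> Provider:
    # Only look at base_url if the client actually defines it, so that
    # non-OpenAI clients (e.g. llama_cpp.Llama) don't raise AttributeError.
    base_url = getattr(client, "base_url", None)
    if base_url is not None and "together" in str(base_url):
        return Provider.TOGETHER
    return Provider.OPENAI
```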
To work around the `base_url` issue:

```python
# Assuming you have your Llama object initialized as `llama`
# Directly setting the base_url attribute as a workaround
llama.base_url = "your_base_url_here"
# Proceed with your code logic...
```

Please replace `your_base_url_here` with your actual base URL.
The `base_url` issue will be fixed in 1.2.3.
Thanks for the update. I think there's still an assert statement blocking this:

```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[6], line 28
     13 model_path = hf_hub_download(
     14     "TheBloke/OpenHermes-2.5-Mistral-7B-GGUF",
     15     "openhermes-2.5-mistral-7b.Q4_K_M.gguf",
     16 )
     17 llama = llama_cpp.Llama(
     18     model_path=model_path,
     19     n_gpu_layers=-1,
    (...)
     26     verbose=False,
     27 )
---> 28 create = instructor.from_openai(
     29     client=llama,
     30     mode=instructor.Mode.JSON_SCHEMA,  # (2)!
     31 )
     32 message = {"role": "user", "content": "Teach me how to say `Hello` in three languages!"}
     33 extraction_stream = create(
     34     response_model=instructor.Partial[Translations],  # (3)!
     35     messages=[message],
     36     stream=True,
     37 )

File ~/miniconda3/envs/panel/lib/python3.10/site-packages/instructor/client.py:277, in from_openai(client, mode, **kwargs)
    274 else:
    275     provider = Provider.OPENAI
--> 277 assert isinstance(
    278     client, (openai.OpenAI, openai.AsyncOpenAI)
    279 ), "Client must be an instance of openai.OpenAI or openai.AsyncOpenAI"
    281 if provider in {Provider.ANYSCALE, Provider.TOGETHER}:
    282     assert mode in {
    283         instructor.Mode.TOOLS,
    284         instructor.Mode.JSON,
    285         instructor.Mode.JSON_SCHEMA,
    286         instructor.Mode.MD_JSON,
    287     }

AssertionError: Client must be an instance of openai.OpenAI or openai.AsyncOpenAI
```
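For anyone hitting this before a fix lands: one route that sidesteps the isinstance assert is the older `instructor.patch` entry point, which wraps the completion function directly instead of a client object. A minimal sketch, assuming `instructor.patch(create=...)` and `Llama.create_chat_completion_openai_v1` are available in the installed versions; the model path and response model are placeholders:

```python
import llama_cpp
import instructor
from pydantic import BaseModel


class Translations(BaseModel):
    # Placeholder response model for the example prompt above.
    translations: list[str]


llama = llama_cpp.Llama(
    model_path="openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,
    chat_format="chatml",
    verbose=False,
)

# Patch the OpenAI-compatible completion method rather than passing
# the Llama object itself, so the isinstance check never runs.
create = instructor.patch(
    create=llama.create_chat_completion_openai_v1,
    mode=instructor.Mode.JSON_SCHEMA,
)

result = create(
    response_model=Translations,
    messages=[
        {"role": "user", "content": "Teach me how to say `Hello` in three languages!"}
    ],
)
print(result)
```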
I'm dumb, I'm so sorry.
I need to add tests for this.
Thank you! I am trying to use
@amaarora if you're running Llama on Groq, this worked for me:
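Presumably something along these lines, with a real `openai.OpenAI` client pointed at Groq's OpenAI-compatible endpoint so the isinstance assert passes (the API key and mode are placeholders/assumptions):

```python
import openai
import instructor

# Groq exposes an OpenAI-compatible endpoint, so a genuine
# openai.OpenAI client satisfies the isinstance assert.
client = instructor.from_openai(
    openai.OpenAI(
        base_url="https://api.groq.com/openai/v1",
        api_key="YOUR_GROQ_API_KEY",  # placeholder
    ),
    mode=instructor.Mode.JSON,  # assumed mode
)
```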
What Model are you using?
Describe the bug
The documented example doesn't work:
https://github.com/jxnl/instructor/blob/main/docs/hub/llama-cpp-python.md?plain=1
To Reproduce
I tried to update it:

```
AttributeError: 'Llama' object has no attribute 'base_url'
```
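A minimal reproduction sketch (the model path is a placeholder; the failing attribute access inside `from_openai` is inferred from the error message):

```python
import llama_cpp
import instructor

llama = llama_cpp.Llama(model_path="model.gguf")  # placeholder path

# from_openai reads client.base_url to detect the provider, and
# llama_cpp.Llama defines no such attribute, hence the error below.
create = instructor.from_openai(
    client=llama,
    mode=instructor.Mode.JSON_SCHEMA,
)
# AttributeError: 'Llama' object has no attribute 'base_url'
```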
Expected behavior
It should not raise an error about `base_url`.