Problems when trying to use Llama-2 models for function calling #10

Open
nmm001001 opened this issue Nov 29, 2023 · 2 comments

Comments

@nmm001001

When I run the following command:

generator = Generator.hf(functions, "meta-llama/Llama-2-7b-chat-hf")
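(For context, functions here is an OpenAI-style list of JSON-schema function definitions; the exact schema was not included in the report, but a minimal hypothetical example in the format the library expects might look like this:)

functions = [
    {
        "name": "get_current_weather",  # hypothetical function name
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. Wilmington, DE",
                },
            },
            "required": ["location"],
        },
    }
]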

When I then call generate, I get the following error:


AttributeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 function_call = generator.generate("What is the weather like today in Delaware?")
2 print(function_call)

File /local_llm_function_calling/generator.py:189, in Generator.generate(self, prompt, function_call, max_length, max_new_tokens, suffix)
174 """Generate the function call
175
176 Args:
(...)
186 FunctionCall: The generated function call
187 """
188 function_name = self.choose_function(prompt, function_call, suffix)
--> 189 arguments = self.generate_arguments(
190 prompt, function_name, max_new_tokens, max_length
191 )
192 return {"name": function_name, "parameters": arguments}

File /local_llm_function_calling/generator.py:157, in Generator.generate_arguments(self, prompt, function_call, max_length, max_new_tokens)
147 prefix = self.prompter.prompt(prompt, self.functions, function_call)
148 constraint = JsonSchemaConstraint(
149 [
150 function
(...)
155 ] # type: ignore
156 )
--> 157 generated = self.constrainer.generate(
158 prefix,
159 constraint,
160 max_length,
161 max_new_tokens,
162 )
163 validated = constraint.validate(generated)
164 return generated[: validated.end_index] if validated.end_index else generated

File /local_llm_function_calling/constrainer.py:221, in Constrainer.generate(self, prefix, constraint, max_len, max_new_tokens)
219 generation = self.model.start_generation(prefix)
220 for _ in range(max_new_tokens) if max_new_tokens else count():
--> 221 if self.advance_generation(generation, constraint, max_len):
222 break
223 return generation.get_generated()

File /local_llm_function_calling/constrainer.py:191, in Constrainer.advance_generation(self, generation, constraint, max_len)
173 def advance_generation(
174 self,
175 generation: Generation,
176 constraint: Callable[[str], tuple[bool, bool]],
177 max_len: int | None = None,
178 ) -> bool:
179 """Advance the generation by one token
180
181 Args:
(...)
189 bool: Whether the generation is complete
190 """
--> 191 done, length = self.gen_next_token(generation, constraint)
192 if done:
193 return True

File /local_llm_function_calling/constrainer.py:163, in Constrainer.gen_next_token(self, generation, constraint)
161 except SequenceTooLongError:
162 return (True, 0)
--> 163 for token in sorted_tokens:
164 generated = generation.get_generated(token)
165 fit = constraint(generated)

File /local_llm_function_calling/model/huggingface.py:63, in HuggingfaceGeneration.get_sorted_tokens(self)
54 def get_sorted_tokens(self) -> Iterator[int]:
55 """Get the tokens sorted by probability
56
57 Raises:
(...)
61 The next of the most likely tokens
62 """
---> 63 if self.inputs.shape[1] >= self.model.config.n_positions:
64 raise SequenceTooLongError()
65 gen_tokens = self.model.generate(
66 input_ids=self.inputs,
67 output_scores=True,
(...)
70 pad_token_id=self.tokenizer.eos_token_id,
71 )

File /function_calling_env/lib/python3.11/site-packages/transformers/configuration_utils.py:262, in PretrainedConfig.__getattribute__(self, key)
260 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
261 key = super().__getattribute__("attribute_map")[key]
--> 262 return super().__getattribute__(key)

AttributeError: 'LlamaConfig' object has no attribute 'n_positions'

Has anyone been able to run this successfully with Llama-2 models? If so, did you run into this problem, and how did you fix it?

@Nuclear6

The gpt2 model config defines n_positions, but the Llama-2 config does not.
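You can confirm the difference directly with transformers (a quick check; the meta-llama checkpoint is gated, so substitute any Llama-architecture model you have access to):

from transformers import AutoConfig

gpt2_config = AutoConfig.from_pretrained("gpt2")
print(gpt2_config.n_positions)               # 1024: GPT-2 defines n_positions
print(gpt2_config.max_position_embeddings)   # 1024: aliased to n_positions via attribute_map

llama_config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
print(llama_config.max_position_embeddings)  # 4096: Llama's context-length field
print(hasattr(llama_config, "n_positions"))  # False: hence the AttributeError above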

@Nuclear6

To run the Llama-2 model, modify this file: local_llm_function_calling/model/huggingface.py

[The original comment showed the change as before/after screenshots, which are not preserved here.]

I solved the problem this way; I hope it is useful to you.
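Since the screenshots are missing, here is a plausible reconstruction of the edit in HuggingfaceGeneration.get_sorted_tokens, assuming it swaps the GPT-2-specific n_positions attribute for max_position_embeddings (which LlamaConfig defines directly and GPT2Config aliases to n_positions via attribute_map):

# local_llm_function_calling/model/huggingface.py, in HuggingfaceGeneration.get_sorted_tokens

# Before -- fails on Llama-2, whose config has no n_positions:
if self.inputs.shape[1] >= self.model.config.n_positions:
    raise SequenceTooLongError()

# After -- read the generic context-length attribute, falling back to
# n_positions for configs that only define the GPT-2-style name:
max_positions = getattr(
    self.model.config,
    "max_position_embeddings",
    getattr(self.model.config, "n_positions", None),
)
if max_positions is not None and self.inputs.shape[1] >= max_positions:
    raise SequenceTooLongError()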
