
Exception: [500] Internal Server Error #35

Open
dsbyprateekg opened this issue Jan 26, 2024 · 2 comments
@dsbyprateekg

Hi,

After uploading a PDF, I am able to see the screen below:
[screenshot of the app UI omitted]

But it shows the following error while returning a response:
```
Traceback (most recent call last):
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 203, in _try_raise
    response.raise_for_status()
  File "d:\nvidia_learning\llm-env\lib\site-packages\requests\models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions/8f4118ba-60a8-4e6b-8574-e38a4067a4a3
```

Is this coming from the endpoint? Please suggest how to resolve it.
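One way to reason about whether the failure is on the endpoint's side: the HTTP status class already encodes which party failed. A minimal stdlib sketch (the `classify_status` helper is illustrative, not part of langchain or the NVIDIA client):

```python
def classify_status(status_code: int) -> str:
    """Classify an HTTP status code from a failed inference request.

    A 5xx (like the 500 above) means the endpoint/backend failed;
    a 4xx would point at the request itself (payload, API key, function id).
    """
    if 500 <= status_code < 600:
        return "server-side error: endpoint/backend problem; retry or report"
    if 400 <= status_code < 500:
        return "client-side error: check payload, API key, and function id"
    return "not an error status"
```

By this reading, a 500 from `api.nvcf.nvidia.com` is a backend failure rather than something wrong with the local request.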

The complete logs are:
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "d:\nvidia_learning\llm-env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "D:\NVIDIA_Learning\nvidia_streamlit_llm__main.py", line 126, in <module>
    for response in chain.stream({"input": augmented_user_input}):
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 2424, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 2411, in transform
    yield from self._transform_stream_with_config(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 1497, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 2375, in _transform
    for output in final_pipeline:
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\output_parsers\transform.py", line 50, in transform
    yield from self._transform_stream_with_config(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 1473, in _transform_stream_with_config
    final_input: Optional[Input] = next(input_for_tracing, None)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 1045, in transform
    yield from self.stream(final, config, **kwargs)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\language_models\chat_models.py", line 249, in stream
    raise e
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\language_models\chat_models.py", line 233, in stream
    for chunk in self._stream(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\chat_models.py", line 123, in _stream
    for response in self.get_stream(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 484, in get_stream
    return self.client.get_req_stream(self.model, stop=stop, payload=payload)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 371, in get_req_stream
    self._try_raise(response)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 218, in _try_raise
    raise Exception(f"{title}\n{body}") from e
Exception: [500] Internal Server Error
Internal error while making inference request
```
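If the 500 is transient, one client-side workaround is to retry the streaming call with exponential backoff. A minimal stdlib sketch; `stream_with_retry` and its parameters are illustrative names I chose, not a langchain API:

```python
import time


def stream_with_retry(make_chunks, max_retries=3, backoff_s=1.0):
    """Call make_chunks() and return its result, retrying on failure.

    make_chunks should be a zero-argument callable that fully consumes
    the stream, e.g.:
        lambda: list(chain.stream({"input": augmented_user_input}))
    so that a request dying mid-stream also triggers a retry.
    """
    for attempt in range(max_retries):
        try:
            return make_chunks()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
```

Materializing the generator with `list(...)` trades incremental UI updates for the ability to retry a request that fails partway through the stream.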

@shubhadeepd
Collaborator

Hi @dsbyprateekg, thanks for reporting this!
There seem to be some stability issues in the server backend that cause this.
Are you able to reproduce this error consistently, for all queries?

@shubhadeepd shubhadeepd self-assigned this Jan 29, 2024
@shubhadeepd shubhadeepd added the bug Something isn't working label Jan 29, 2024
@dsbyprateekg
Author

> Hi @dsbyprateekg, thanks for reporting this! There seem to be some stability issues in the server backend that cause this. Are you able to reproduce this error consistently, for all queries?

Yes @shubhadeepd. The error is reproducible for all types of input.
