After uploading a PDF, I am able to see the expected screen, but the app throws an error while returning a response:

```
Traceback (most recent call last):
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 203, in _try_raise
    response.raise_for_status()
  File "d:\nvidia_learning\llm-env\lib\site-packages\requests\models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions/8f4118ba-60a8-4e6b-8574-e38a4067a4a3
```

Is this error coming from the endpoint? Please suggest how to resolve it.
Complete logs:
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "d:\nvidia_learning\llm-env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "D:\NVIDIA_Learning\nvidia_streamlit_llm__main.py", line 126, in <module>
    for response in chain.stream({"input": augmented_user_input}):
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 2424, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 2411, in transform
    yield from self._transform_stream_with_config(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 1497, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 2375, in _transform
    for output in final_pipeline:
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\output_parsers\transform.py", line 50, in transform
    yield from self._transform_stream_with_config(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 1473, in _transform_stream_with_config
    final_input: Optional[Input] = next(input_for_tracing, None)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\runnables\base.py", line 1045, in transform
    yield from self.stream(final, config, **kwargs)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\language_models\chat_models.py", line 249, in stream
    raise e
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_core\language_models\chat_models.py", line 233, in stream
    for chunk in self._stream(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\chat_models.py", line 123, in _stream
    for response in self.get_stream(
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 484, in get_stream
    return self.client.get_req_stream(self.model, stop=stop, payload=payload)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 371, in get_req_stream
    self._try_raise(response)
  File "d:\nvidia_learning\llm-env\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 218, in _try_raise
    raise Exception(f"{title}\n{body}") from e
Exception: [500] Internal Server Error
Internal error while making inference request
```
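To check whether the endpoint itself is the problem, would a direct call that bypasses Streamlit and the chain be a valid test? A minimal sketch of what I mean (the model name is a placeholder; substitute whatever the script passes to `ChatNVIDIA`):

```python
import os

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# NVIDIA_API_KEY must be set in the environment for the client to authenticate.
assert os.environ.get("NVIDIA_API_KEY"), "set NVIDIA_API_KEY first"

# "mixtral_8x7b" is a placeholder -- use the same model name the
# Streamlit script passes to ChatNVIDIA.
llm = ChatNVIDIA(model="mixtral_8x7b")

# A single non-streaming call; if this also returns a 500, the problem
# is on the endpoint side rather than in the chain or the PDF ingestion.
print(llm.invoke("Say hello in one sentence.").content)
```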
Hi @dsbyprateekg, thanks for reporting this!
There seem to be some stability issues in the server backend that cause this.
Are you able to reproduce this error consistently, for all queries?
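In the meantime, if the failures turn out to be intermittent rather than consistent, one workaround is to wrap the stream call in a small retry loop. A minimal sketch (the `stream_with_retries` helper is hypothetical, not part of LangChain; `chain` and `augmented_user_input` come from your script):

```python
import time


def stream_with_retries(chain, inputs, max_retries=3, backoff=2.0):
    """Hypothetical helper: restart the stream on transient 5xx errors.

    Note: a failure mid-stream restarts the response from the beginning,
    so chunks yielded before the failure may be emitted twice.
    """
    for attempt in range(max_retries):
        try:
            yield from chain.stream(inputs)
            return
        except Exception as exc:
            # Retry only on server-side errors; re-raise everything else.
            if "500" not in str(exc) or attempt == max_retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))


# In the Streamlit script, the direct loop over chain.stream(...) would become:
# for response in stream_with_retries(chain, {"input": augmented_user_input}):
#     ...
```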
Yes @shubhadeepd, the error is reproducible for all types of input.