Error running run_prewriting.py script with gpt-3.5-turbo #19

Open
NasonZ opened this issue Apr 16, 2024 · 4 comments

NasonZ commented Apr 16, 2024

Description:
I encountered an error when trying to run the run_prewriting.py script with the gpt-3.5-turbo engine.

I followed the setup instructions in the README, including:

  • Creating and activating a conda environment
  • Installing required packages

To reproduce my error:

$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research

Topic: The promise and technical difficulties of SSTO vehicles

Ground truth url (will be excluded from source):  # left blank

root : ERROR    : Error occurs when processing h: Invalid URL 'h': No scheme supplied. Perhaps you meant https://h?
root : ERROR    : Error occurs when processing t: Invalid URL 't': No scheme supplied. Perhaps you meant https://t?
root : ERROR    : Error occurs when processing n: Invalid URL 'n': No scheme supplied. Perhaps you meant https://n?
root : ERROR    : Error occurs when processing n: Invalid URL 'n': No scheme supplied. Perhaps you meant https://n?
root : ERROR    : Error occurs when processing m: Invalid URL 'm': No scheme supplied. Perhaps you meant https://m?
engine : INFO     : _research_topic executed in 85.8730 seconds
openai : INFO     : error_code=None error_message='Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "Z:\Projects\storm\src\scripts\run_prewriting.py", line 96, in <module>
    main(parser.parse_args())
  File "Z:\Projects\storm\src\scripts\run_prewriting.py", line 54, in main
    runner.run(topic=topic,
  File "Z:\Projects\storm\src\engine.py", line 405, in run
    outline = self._generate_outline(topic, conversations, callback_handler)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\Projects\storm\src\engine.py", line 26, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "Z:\Projects\storm\src\engine.py", line 222, in _generate_outline
    result = write_outline(topic=topic, dlg_history=sum(conversations, []), callback_handler=callback_handler)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dspy\primitives\program.py", line 29, in __call__
    return self.forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\Projects\storm\src\modules\write_page.py", line 179, in forward
    old_outline = clean_up_outline(self.draft_page_outline(topic=topic).outline)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dspy\predict\predict.py", line 60, in __call__
    return self.forward(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dspy\predict\predict.py", line 87, in forward
    x, C = dsp.generate(signature, **config)(x, stage=self.stage)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\primitives\predict.py", line 78, in do_generate
    completions: list[dict[str, Any]] = generator(prompt, **kwargs)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\Projects\storm\src\modules\utils.py", line 73, in __call__
    response = self.request(prompt, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\backoff\_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\modules\gpt3.py", line 136, in request
    return self.basic_request(prompt, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\modules\gpt3.py", line 109, in basic_request
    response = chat_request(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\modules\gpt3.py", line 247, in chat_request
    return _cached_gpt3_turbo_request_v2_wrapped(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\modules\cache_utils.py", line 17, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\modules\gpt3.py", line 221, in _cached_gpt3_turbo_request_v2_wrapped
    return _cached_gpt3_turbo_request_v2(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\joblib\memory.py", line 655, in __call__
    return self._cached_call(args, kwargs)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\joblib\memory.py", line 598, in _cached_call
    out, metadata = self.call(*args, **kwargs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\joblib\memory.py", line 856, in call
    output = self.func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\dsp\modules\gpt3.py", line 216, in _cached_gpt3_turbo_request_v2
    return cast(OpenAIObject, openai.ChatCompletion.create(**kwargs))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\openai\api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\me\anaconda3\envs\storm\Lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions)


$ python -m scripts.run_writing --input-source console --engine gpt-35-turbo --do-polish-article --remove-duplicate
Topic: The promise and technical difficulties of SSTO vehicles
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "Z:\Projects\storm\src\scripts\run_writing.py", line 94, in <module>
    main(parser.parse_args())
  File "Z:\Projects\storm\src\scripts\run_writing.py", line 54, in main
    runner.run(topic=topic,
  File "Z:\Projects\storm\src\engine.py", line 410, in run
    url_to_info = load_json(
                  ^^^^^^^^^^
  File "Z:\Projects\storm\src\modules\utils.py", line 326, in load_json
    with open(file_name, 'r', encoding=encoding) as fr:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../results\\The_promise_and_technical_difficulties_of_SSTO_vehicles\\raw_search_results.json'

(storm) Z:\Projects\storm\src>

Environment:

Python version: 3.11
Operating System: Windows

Summary:
I get a few errors when running run_prewriting; despite them, the script does produce a conversation_log.json and raw_search_results.json, which look OK.

I then try to run run_writing, but it fails completely with FileNotFoundError: [Errno 2] No such file or directory: '../results\\The_promise_and_technical_difficulties_of_SSTO_vehicles\\raw_search_results.json'.

Could anyone please advise on how to resolve these errors? Let me know if any additional information or logs would be helpful for troubleshooting.

@shaoyijia (Collaborator)

Could you try to change --output-dir to something like results?

Based on the log, ../results\\The_promise_and_technical_difficulties_of_SSTO_vehicles\\raw_search_results.json looks incorrect. We set the default value of --output-dir to ../results in the scripts, but this relative path is not Windows-friendly. (Thanks for providing the OS information!)
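
For example (the directory name is just a suggestion; --output-dir only needs to point to the same place for both scripts so run_writing can find the files run_prewriting wrote):

$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research --output-dir results
$ python -m scripts.run_writing --input-source console --engine gpt-35-turbo --do-polish-article --remove-duplicate --output-dir results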

NasonZ (Author) commented Apr 16, 2024

I've switched over to my Linux machine, but I'm now getting a slightly different error: the run_prewriting script does not produce storm_gen_outline.txt (direct_gen_outline.txt is also missing).

This leads to FileNotFoundError: [Errno 2] No such file or directory: '../results/The_promise_and_technical_difficulties_of_SSTO_vehicles/storm_gen_outline.txt' when running python -m scripts.run_writing --input-source console --engine gpt-35-turbo --do-polish-article --remove-duplicate.

To reproduce my error:

(storm) me@me-MS:~/Prototypes/graphs/storm/src$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research
Topic: The promise and technical difficulties of SSTO vehicles
Ground truth url (will be excluded from source): 
engine : INFO     : _research_topic executed in 84.4190 seconds
openai : INFO     : error_code=None error_message='Invalid URL (POST /v1/engines/gpt-35-turbo-16k/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 96, in <module>
    main(parser.parse_args())
  File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 54, in main
    runner.run(topic=topic,
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 405, in run
    outline = self._generate_outline(topic, conversations, callback_handler)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 26, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 222, in _generate_outline
    result = write_outline(topic=topic, dlg_history=sum(conversations, []), callback_handler=callback_handler)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/primitives/program.py", line 29, in __call__
    return self.forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/modules/write_page.py", line 179, in forward
    old_outline = clean_up_outline(self.draft_page_outline(topic=topic).outline)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 60, in __call__
    return self.forward(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 87, in forward
    x, C = dsp.generate(signature, **config)(x, stage=self.stage)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/primitives/predict.py", line 78, in do_generate
    completions: list[dict[str, Any]] = generator(prompt, **kwargs)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/modules/utils.py", line 73, in __call__
    response = self.request(prompt, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/backoff/_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 136, in request
    return self.basic_request(prompt, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 109, in basic_request
    response = chat_request(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 247, in chat_request
    return _cached_gpt3_turbo_request_v2_wrapped(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/cache_utils.py", line 17, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 221, in _cached_gpt3_turbo_request_v2_wrapped
    return _cached_gpt3_turbo_request_v2(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 655, in __call__
    return self._cached_call(args, kwargs)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 598, in _cached_call
    out, metadata = self.call(*args, **kwargs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 856, in call
    output = self.func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 216, in _cached_gpt3_turbo_request_v2
    return cast(OpenAIObject, openai.ChatCompletion.create(**kwargs))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-35-turbo-16k/chat/completions)
(storm) me@me-MS-7C56:~/Prototypes/graphs/storm/src$ python -m scripts.run_writing --input-source console --engine gpt-35-turbo --do-polish-article --remove-duplicate
Topic: The promise and technical difficulties of SSTO vehicles
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/me/Prototypes/graphs/storm/src/scripts/run_writing.py", line 94, in <module>
    main(parser.parse_args())
  File "/home/me/Prototypes/graphs/storm/src/scripts/run_writing.py", line 54, in main
    runner.run(topic=topic,
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 413, in run
    outline = load_str(os.path.join(self.args.output_dir, self.article_dir_name, 'storm_gen_outline.txt'))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/modules/utils.py", line 317, in load_str
    with open(path, 'r') as f:
         ^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../results/The_promise_and_technical_difficulties_of_SSTO_vehicles/storm_gen_outline.txt'



(storm) me@me-MS-7C56:~/Prototypes/graphs/storm$ tree -L 2
.
├── assets
│   ├── overview.png
│   └── two_stages.jpg
├── eval
│   ├── citation_quality.py
│   ├── eval_article_quality.py
│   ├── eval_outline_quality.py
│   ├── eval_rubric_5.json
│   ├── evaluation_prometheus.py
│   ├── evaluation_trim_length.py
│   └── metrics.py
├── FreshWiki
│   ├── json
│   ├── topic_list.csv
│   ├── txt
│   └── wikipage_extractor.py
├── LICENSE
├── README.md
├── requirements.txt
├── results
│   └── The_promise_and_technical_difficulties_of_SSTO_vehicles
│       ├── conversation_log.json
│       └── raw_search_results.json
├── secrets.toml
└── src
    ├── assertion.log
    ├── azure_openai_usage.log
    ├── engine.py
    ├── modules
    ├── openai_usage.log
    ├── __pycache__
    └── scripts

Environment:

Python version: 3.11
Operating System: Ubuntu 22.04

@Yucheng-Jiang (Collaborator)

@NasonZ Could you try another API endpoint listed here: https://platform.openai.com/docs/models/gpt-3-5-turbo?

Here's the pointer to change the engine.

if args.engine == 'gpt-35-turbo': # If args.engine == 'gpt4', use the default config.

I suspect the issue comes from the failed call to /v1/engines/gpt-35-turbo-16k/chat/completions at an earlier stage.
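
For reference, here is a minimal sketch of the distinction with the pre-1.0 openai package (illustrative only, not the exact code in dsp/modules/gpt3.py): passing the model via engine= makes the client post to /v1/engines/<name>/chat/completions on api.openai.com, which is the invalid URL in your traceback, while passing model= posts to /v1/chat/completions. engine= is only meaningful for Azure deployments (openai.api_type = "azure").

import openai

openai.api_key = "sk-..."  # your OpenAI key
messages = [{"role": "user", "content": "Say hello"}]

# model= routes to POST /v1/chat/completions and works against api.openai.com
openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)

# engine= routes to POST /v1/engines/gpt-3.5-turbo/chat/completions, which the
# OpenAI API rejects with "Invalid URL" (engine= is meant for Azure deployments)
openai.ChatCompletion.create(engine="gpt-3.5-turbo", messages=messages)

So if the request ends up targeting api.openai.com with engine-style routing, or with an Azure-style deployment name such as gpt-35-turbo-16k, you get exactly the Invalid URL error shown above.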

NasonZ (Author) commented Apr 16, 2024

@Yucheng-Jiang

I have adjusted the endpoint but I'm still getting the error.

storm/src$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research
Topic: The promise and technical difficulties of SSTO vehicles
Ground truth url (will be excluded from source): 

...

engine : INFO     : _research_topic executed in 15.5520 seconds
openai : INFO     : error_code=None error_message='Invalid URL (POST /v1/engines/gpt-3.5-turbo-0125/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 96, in <module>
    main(parser.parse_args())
  File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 54, in main
    runner.run(topic=topic,
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 405, in run
    outline = self._generate_outline(topic, conversations, callback_handler)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 26, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/engine.py", line 222, in _generate_outline
    result = write_outline(topic=topic, dlg_history=sum(conversations, []), callback_handler=callback_handler)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/primitives/program.py", line 29, in __call__
    return self.forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/modules/write_page.py", line 179, in forward
    old_outline = clean_up_outline(self.draft_page_outline(topic=topic).outline)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 60, in __call__
    return self.forward(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 87, in forward
    x, C = dsp.generate(signature, **config)(x, stage=self.stage)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/primitives/predict.py", line 78, in do_generate
    completions: list[dict[str, Any]] = generator(prompt, **kwargs)
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/Prototypes/graphs/storm/src/modules/utils.py", line 73, in __call__
    response = self.request(prompt, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/backoff/_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 136, in request
    return self.basic_request(prompt, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 109, in basic_request
    response = chat_request(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 247, in chat_request
    return _cached_gpt3_turbo_request_v2_wrapped(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/cache_utils.py", line 17, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 221, in _cached_gpt3_turbo_request_v2_wrapped
    return _cached_gpt3_turbo_request_v2(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 655, in __call__
    return self._cached_call(args, kwargs)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 598, in _cached_call
    out, metadata = self.call(*args, **kwargs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 856, in call
    output = self.func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 216, in _cached_gpt3_turbo_request_v2
    return cast(OpenAIObject, openai.ChatCompletion.create(**kwargs))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-3.5-turbo-0125/chat/completions)

Also, storm_gen_outline.txt and direct_gen_outline.txt are not being produced by the run_prewriting script.
Let me know if there's anything else you'd like me to adjust/try.
