
Runs very slowly on Windows, but it does run; notes on the installation process #108

Open
Spr-Peach opened this issue Apr 15, 2024 · 7 comments

@Spr-Peach

First modify requirements.txt following MatthewK78's instructions in #11, then edit CKPT_PTH.py and the SUPIR_v0.yaml file according to the path where you store the models. For example, I want to keep the models in a new models folder under the project root:
Edit CKPT_PTH.py:

LLAVA_CLIP_PATH = './models/clip-vit-large-patch14-336'
LLAVA_MODEL_PATH = './models/llava-v1.5-13b'
SDXL_CLIP1_PATH = './models/clip-vit-large-patch14'
SDXL_CLIP2_CKPT_PTH = './models/open_clip_pytorch_model.bin'

Edit ./option/SUPIR_v0.yaml (starting around line 152):

SDXL_CKPT: './models/sd_xl_base_1.0_0.9vae.safetensors'
SUPIR_CKPT_F: './models/SUPIR-v0F.ckpt'
SUPIR_CKPT_Q: './models/SUPIR-v0Q.ckpt'
SUPIR_CKPT: ~

The models folder looks like this:
[screenshot of the models folder]

If you already have git lfs installed, clip-vit-large-patch14, clip-vit-large-patch14-336, and llava-v1.5-13b can be cloned directly from Hugging Face with git clone. Then download open_clip_pytorch_model.bin and sd_xl_base_1.0_0.9vae.safetensors, and finally download SUPIR-v0F.ckpt and SUPIR-v0Q.ckpt from Google Drive. In total, 3 folders and 4 files go into models.
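For reference, here is a minimal sketch to confirm everything landed in the right place before launching (the folder and file names are taken from the layout above; the script itself is not part of the repo):

# check_models.py - rough sanity check; names assumed from the layout described above
import os

MODELS_DIR = './models'
EXPECTED_DIRS = ['clip-vit-large-patch14', 'clip-vit-large-patch14-336', 'llava-v1.5-13b']
EXPECTED_FILES = ['open_clip_pytorch_model.bin', 'sd_xl_base_1.0_0.9vae.safetensors',
                  'SUPIR-v0F.ckpt', 'SUPIR-v0Q.ckpt']

for name in EXPECTED_DIRS:
    path = os.path.join(MODELS_DIR, name)
    print(('OK      ' if os.path.isdir(path) else 'MISSING ') + path)
for name in EXPECTED_FILES:
    path = os.path.join(MODELS_DIR, name)
    print(('OK      ' if os.path.isfile(path) else 'MISSING ') + path)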

@Spr-Peach Spr-Peach changed the title from "Runs very slowly on Windows, but it does run" to "Runs very slowly on Windows, but it does run; notes on the installation process" Apr 15, 2024
@CuddleSabe

SDXL_CLIP2_CKPT_PTH = './models/open_clip_pytorch_model.bin'
This is basically useless; they never use any of the ViT's parameters when loading the ControlNet.
I don't know why they say in the paper that the ViT is used and add it in the loading function. In practice it has no effect at all.

@ganjunhong

Hello, may I ask how you managed to keep the VRAM usage within 8 GB?

@jiehwa

jiehwa commented Apr 16, 2024

Hi, may I ask what your machine's specs are? Can a 4090 run this? I saw the hardware requirements call for 60 GB of VRAM.

@Trendymen

Trendymen commented Apr 17, 2024

You guys can't use ComfyUI-SUPIR?

@CuddleSabe

You guys can't use ComfyUI-SUPIR?

There are too many implementation errors in it.

@Spr-Peach
Author

Hello, may I ask how you managed to keep the VRAM usage within 8 GB?

It seems the only option is to add --no_llava; see #65.
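For what it's worth, the gradio_demo.py snippet in the traceback below also checks args.loading_half_params and args.use_tile_vae, which presumably correspond to --loading_half_params and --use_tile_vae flags, so a launch line along these lines may be the place to start (whether it actually fits in 8 GB is untested here):

python gradio_demo.py --no_llava --loading_half_params --use_tile_vae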

@juewangdelty

Following your tutorial, I ran into the following error:
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /media/caj/TEN/YJ/SUPIR-master/gradio_demo.py:44 in <module>                 │
│ │
│ 41 │ raise ValueError('Currently support CUDA only.') │
│ 42 │
│ 43 # load SUPIR │
│ ❱ 44 model, default_setting = create_SUPIR_model(args.opt, SUPIR_sign='Q', │
│ 45 if args.loading_half_params: │
│ 46 │ model = model.half() │
│ 47 if args.use_tile_vae: │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/SUPIR/util.py:36 in create_SUPIR_model │
│ │
│ 33 │
│ 34 def create_SUPIR_model(config_path, SUPIR_sign=None, load_default_sett │
│ 35 │ config = OmegaConf.load(config_path) │
│ ❱ 36 │ model = instantiate_from_config(config.model).cpu() │
│ 37 │ print(f'Loaded model config from [{config_path}]') │
│ 38 │ if config.SDXL_CKPT is not None: │
│ 39 │ │ model.load_state_dict(load_state_dict(config.SDXL_CKPT), stric │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/sgm/util.py:175 in instantiate_from_config │
│ │
│ 172 │ │ elif config == "is_unconditional": │
│ 173 │ │ │ return None │
│ 174 │ │ raise KeyError("Expected key target to instantiate.") │
│ ❱ 175 │ return get_obj_from_str(config["target"])(**config.get("params", d │
│ 176 │
│ 177 │
│ 178 def get_obj_from_str(string, reload=False, invalidate_cache=True): │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/SUPIR/models/SUPIR_model.py:14 in __init__    │
│ │
│ 11 │
│ 12 class SUPIRModel(DiffusionEngine): │
│ 13 │ def __init__(self, control_stage_config, ae_dtype='fp32', diff │
│ ❱ 14 │ │ super().__init__(*args, **kwargs) │
│ 15 │ │ control_model = instantiate_from_config(control_stage_config) │
│ 16 │ │ self.model.load_control_model(control_model) │
│ 17 │ │ self.first_stage_model.denoise_encoder = copy.deepcopy(self.fi │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/sgm/models/diffusion.py:61 in __init__        │
│ │
│ 58 │ │ │ if sampler_config is not None │
│ 59 │ │ │ else None │
│ 60 │ │ ) │
│ ❱ 61 │ │ self.conditioner = instantiate_from_config( │
│ 62 │ │ │ default(conditioner_config, UNCONDITIONAL_CONFIG) │
│ 63 │ │ ) │
│ 64 │ │ self.scheduler_config = scheduler_config │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/sgm/util.py:175 in instantiate_from_config │
│ │
│ 172 │ │ elif config == "is_unconditional": │
│ 173 │ │ │ return None │
│ 174 │ │ raise KeyError("Expected key target to instantiate.") │
│ ❱ 175 │ return get_obj_from_str(config["target"])(**config.get("params", d │
│ 176 │
│ 177 │
│ 178 def get_obj_from_str(string, reload=False, invalidate_cache=True): │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/sgm/modules/encoders/modules.py:89 in │
│ __init__ │
│ │
│ 86 │ │ super().__init__() │
│ 87 │ │ embedders = [] │
│ 88 │ │ for n, embconfig in enumerate(emb_models): │
│ ❱ 89 │ │ │ embedder = instantiate_from_config(embconfig) │
│ 90 │ │ │ assert isinstance( │
│ 91 │ │ │ │ embedder, AbstractEmbModel │
│ 92 │ │ │ ), f"embedder model {embedder.__class__.__name__} has │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/sgm/util.py:175 in instantiate_from_config │
│ │
│ 172 │ │ elif config == "is_unconditional": │
│ 173 │ │ │ return None │
│ 174 │ │ raise KeyError("Expected key target to instantiate.") │
│ ❱ 175 │ return get_obj_from_str(config["target"])(**config.get("params", d │
│ 176 │
│ 177 │
│ 178 def get_obj_from_str(string, reload=False, invalidate_cache=True): │
│ │
│ /media/caj/TEN/YJ/SUPIR-master/sgm/modules/encoders/modules.py:462 in │
│ __init__ │
│ │
│ 459 │ ): # clip-vit-base-patch32 │
│ 460 │ │ super().__init__() │
│ 461 │ │ assert layer in self.LAYERS │
│ ❱ 462 │ │ self.tokenizer = CLIPTokenizer.from_pretrained(version if SDX │
│ 463 │ │ self.transformer = CLIPTextModel.from_pretrained(version if S │
│ 464 │ │ self.device = device │
│ 465 │ │ self.max_length = max_length │
│ │
│ /home/caj/anaconda3/envs/Diff4R/lib/python3.8/site-packages/transformers/tok │
│ enization_utils_base.py:1770 in from_pretrained │
│ │
│ 1767 │ │ │ │ elif is_remote_url(file_path): │
│ 1768 │ │ │ │ │ resolved_vocab_files[file_id] = download_url(file │
│ 1769 │ │ │ else: │
│ ❱ 1770 │ │ │ │ resolved_vocab_files[file_id] = cached_file( │
│ 1771 │ │ │ │ │ pretrained_model_name_or_path, │
│ 1772 │ │ │ │ │ file_path, │
│ 1773 │ │ │ │ │ cache_dir=cache_dir, │
│ │
│ /home/caj/anaconda3/envs/Diff4R/lib/python3.8/site-packages/transformers/uti │
│ ls/hub.py:409 in cached_file │
│ │
│ 406 │ user_agent = http_user_agent(user_agent) │
│ 407 │ try: │
│ 408 │ │ # Load from URL or cache if already cached │
│ ❱ 409 │ │ resolved_file = hf_hub_download( │
│ 410 │ │ │ path_or_repo_id, │
│ 411 │ │ │ filename, │
│ 412 │ │ │ subfolder=None if len(subfolder) == 0 else subfolder, │
│ │
│ /home/caj/anaconda3/envs/Diff4R/lib/python3.8/site-packages/huggingface_hub/ │
│ utils/_validators.py:106 in inner_fn │
│ │
│ 103 │ │ │ kwargs.items(), # Kwargs values │
│ 104 │ │ ): │
│ 105 │ │ │ if arg_name in ["repo_id", "from_id", "to_id"]: │
│ ❱ 106 │ │ │ │ validate_repo_id(arg_value) │
│ 107 │ │ │ │
│ 108 │ │ │ elif arg_name == "token" and arg_value is not None: │
│ 109 │ │ │ │ has_token = True │
│ │
│ /home/caj/anaconda3/envs/Diff4R/lib/python3.8/site-packages/huggingface_hub/ │
│ utils/_validators.py:160 in validate_repo_id │
│ │
│ 157 │ │ ) │
│ 158 │ │
│ 159 │ if not REPO_ID_REGEX.match(repo_id): │
│ ❱ 160 │ │ raise HFValidationError( │
│ 161 │ │ │ "Repo id must use alphanumeric chars or '-', '', '.', '-- │
│ 162 │ │ │ " forbidden, '-' and '.' cannot start or end the name, max │
│ 163 │ │ │ f" '{repo_id}'." │
╰──────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--'
and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
96: ''.
Ubuntu system
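Judging from the last frame, CLIPTokenizer.from_pretrained is being handed an empty string (the repo id in the error is ''), which is what happens if one of the paths in CKPT_PTH.py, most likely SDXL_CLIP1_PATH, is left empty instead of pointing at a local folder. A quick check along these lines may help (a sketch, run from the project root so CKPT_PTH.py is importable):

# sanity-check the paths defined in CKPT_PTH.py before launching
import os
import CKPT_PTH

for name in ('LLAVA_CLIP_PATH', 'LLAVA_MODEL_PATH', 'SDXL_CLIP1_PATH', 'SDXL_CLIP2_CKPT_PTH'):
    value = getattr(CKPT_PTH, name, None)
    status = 'exists' if value and os.path.exists(value) else 'MISSING or empty'
    print(f'{name} = {value!r} ({status})')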
