
stable-diffusion install errors on an M2 Mac, and how to fix them

Background: my machine is a MacBook Pro with an M2 chip. Installing software on it has always run into problems, and, predictably, stable-diffusion was no exception. Here is a summary of the issues I hit during installation.
Error 1:
  • Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
  • no module 'xformers'. Processing without...
  • No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.12.1. You might want to consider upgrading.
  • no module 'xformers'. Processing without...
  • No module 'xformers'. Proceeding without it.
  • Style database not found: /Users/wuzhanxi/stable-diffusion-webui/styles.csv
  • Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
  • ==============================================================================
  • You are running torch 1.12.1.
  • The program is tested to work with torch 2.0.0.
  • To reinstall the desired version, run with commandline flag --reinstall-torch.
  • Beware that this will cause a lot of large files to be downloaded, as well as
  • there are reports of issues with training tab on the latest version.
  • Use --skip-version-check commandline argument to disable this check.
  • ==============================================================================
  • Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /Users/wuzhanxi/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
  • WARNING:modules.mac_specific:MPS garbage collection failed
  • Traceback (most recent call last):
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/mac_specific.py", line 38, in torch_mps_gc
  • from torch.mps import empty_cache
  • ModuleNotFoundError: No module named 'torch.mps'

Cause: the wrong torch version is installed (1.12.1, while the WebUI is tested against 2.0.x, as the log itself says).

Fix: edit the webui-macos-env.sh file in the install directory, stable-diffusion-webui.

My original configuration file:

  • export install_dir="$HOME"
  • export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
  • export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"
  • export K_DIFFUSION_REPO="https://github.com/brkirch/k-diffusion.git"
  • export K_DIFFUSION_COMMIT_HASH="51c9778f269cedb55a4d88c79c0246d35bdadb71"
  • export PYTORCH_ENABLE_MPS_FALLBACK=1

The modified configuration file is below. Two changes:

1. Appended --reinstall-torch to the COMMANDLINE_ARGS parameter.

2. Changed TORCH_COMMAND to "pip install torch==2.0.1 torchvision==0.15.2", upgrading torch to a 2.0.x release.

  • export install_dir="$HOME"
  • export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate --reinstall-torch"
  • export TORCH_COMMAND="pip install torch==2.0.1 torchvision==0.15.2"
  • #export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"
  • export K_DIFFUSION_REPO="https://github.com/brkirch/k-diffusion.git"
  • export K_DIFFUSION_COMMIT_HASH="51c9778f269cedb55a4d88c79c0246d35bdadb71"
  • export PYTORCH_ENABLE_MPS_FALLBACK=1
Error 2:
  • Running on local URL: http://127.0.0.1:7860
  • To create a public link, set `share=True` in `launch()`.
  • Startup time: 4.4s (prepare environment: 0.9s, import torch: 1.2s, import gradio: 0.4s, setup paths: 0.5s, other imports: 0.4s, load scripts: 0.3s, create ui: 0.2s, gradio launch: 0.3s).
  • Creating model from config: /Users/wuzhanxi/stable-diffusion-webui/configs/v1-inference.yaml
  • creating model quickly: OSError
  • Traceback (most recent call last):
  • File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
  • self._bootstrap_inner()
  • File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
  • self.run()
  • File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
  • self._target(*self._args, **self._kwargs)
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
  • shared.sd_model # noqa: B018
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/shared_items.py", line 128, in sd_model
  • return modules.sd_models.model_data.get_sd_model()
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/sd_models.py", line 531, in get_sd_model
  • load_model()
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/sd_models.py", line 634, in load_model
  • sd_model = instantiate_from_config(sd_config.model)
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
  • return get_obj_from_str(config["target"])(**config.get("params", dict()))
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
  • self.instantiate_cond_stage(cond_stage_config)
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
  • model = instantiate_from_config(config)
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
  • return get_obj_from_str(config["target"])(**config.get("params", dict()))
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 104, in __init__
  • self.tokenizer = CLIPTokenizer.from_pretrained(version)
  • File "/opt/homebrew/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
  • raise EnvironmentError(
  • OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
  • Failed to create model quickly; will retry using slow method.
  • loading stable diffusion model: OSError
  • Traceback (most recent call last):
  • File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
  • self._bootstrap_inner()
  • File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
  • self.run()
  • File "/opt/homebrew/Cellar/python@3.10/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
  • self._target(*self._args, **self._kwargs)
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
  • shared.sd_model # noqa: B018
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/shared_items.py", line 128, in sd_model
  • return modules.sd_models.model_data.get_sd_model()
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/sd_models.py", line 531, in get_sd_model
  • load_model()
  • File "/Users/wuzhanxi/stable-diffusion-webui/modules/sd_models.py", line 643, in load_model
  • sd_model = instantiate_from_config(sd_config.model)
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
  • return get_obj_from_str(config["target"])(**config.get("params", dict()))
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
  • self.instantiate_cond_stage(cond_stage_config)
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
  • model = instantiate_from_config(config)
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
  • return get_obj_from_str(config["target"])(**config.get("params", dict()))
  • File "/Users/wuzhanxi/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 104, in __init__
  • self.tokenizer = CLIPTokenizer.from_pretrained(version)
  • File "/opt/homebrew/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
  • raise EnvironmentError(
  • OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Cause: openai/clip-vit-large-patch14 can no longer be reached from mainland China, so the tokenizer download fails.

Fix: create an openai directory manually and drag the downloaded, extracted resources into it.

Installing SD creates an install directory under your home directory; mine is /Users/wuzhanxi/stable-diffusion-webui. Create the openai directory inside it.
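Before relaunching, it may help to verify the directory actually contains the tokenizer files. A stdlib-only sketch (the helper name and the REQUIRED file list are my assumptions; check the Hugging Face repo for the authoritative contents):

```python
from pathlib import Path

# Files a CLIPTokenizer typically loads; the exact list is an assumption.
REQUIRED = ["vocab.json", "merges.txt", "tokenizer_config.json",
            "special_tokens_map.json"]


def missing_tokenizer_files(webui_root: str) -> list[str]:
    """Report which tokenizer files are absent from
    <webui_root>/openai/clip-vit-large-patch14."""
    tok_dir = Path(webui_root) / "openai" / "clip-vit-large-patch14"
    return [name for name in REQUIRED if not (tok_dir / name).is_file()]
```

An empty result means the files landed where from_pretrained will look for them.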

A working copy of the model files:

Link: https://pan.baidu.com/s/1EBptJ2v9inq9A5LEYFfBMg Extraction code: dh2b

Error 3: Couldn't install gfpgan.

After running ./webui.sh, it fails with "RuntimeError: Couldn't install gfpgan."

The real cause is that GFPGAN was never downloaded. The download link can be found in the error output (the underlined part in the original screenshot).

Copy that link into a browser to download it, unzip the downloaded zip file, rename the directory to GFPGAN, and drag it into the stable-diffusion-webui directory.

When ./webui.sh is run again, it skips the GFPGAN download and moves on to the remaining dependencies.
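The manual unzip-and-rename steps above can be sketched in Python. `install_gfpgan` is a hypothetical helper, and it assumes the zip contains a single top-level folder (as GitHub archives do, e.g. GFPGAN-master):

```python
import shutil
import zipfile
from pathlib import Path


def install_gfpgan(zip_path: str, webui_root: str) -> Path:
    """Extract a manually downloaded GFPGAN zip into webui_root and
    ensure the resulting directory is named exactly GFPGAN."""
    root = Path(webui_root)
    with zipfile.ZipFile(zip_path) as zf:
        # Assumes one top-level folder inside the archive.
        top = Path(zf.namelist()[0]).parts[0]
        zf.extractall(root)
    target = root / "GFPGAN"
    extracted = root / top
    if extracted != target:
        if target.exists():
            shutil.rmtree(target)  # replace a stale copy
        extracted.rename(target)
    return target
```

Point it at the downloaded zip and your stable-diffusion-webui directory, then rerun ./webui.sh.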
