[Bug]: AttributeError: 'NoneType' object has no attribute 'lowvram' #15793

Open
1 of 6 tasks
mclaughlin111 opened this issue May 14, 2024 · 5 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@mclaughlin111

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I've done a fresh install using this automated setup file/instruction: viking1304/a1111-setup#2

I still get the AttributeError: 'NoneType' object has no attribute 'lowvram' error message in the browser when trying to generate an image or load a model.
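
For context: in the console log below, the checkpoint never finishes loading ("Torch not compiled with CUDA enabled" ... "Stable diffusion model failed to load"), and the later AttributeError is raised in modules/sd_models.py when send_model_to_cpu is handed that never-loaded (None) model. A stripped-down illustration of the failing pattern, in plain Python rather than the actual webui code:

class LoadedModel:
    lowvram = False  # a successfully loaded model carries this flag

def send_model_to_cpu(m):
    # In the webui this is reached via reload_model_weights()
    # -> reuse_model_from_already_loaded() -> send_model_to_cpu(sd_model).
    if m.lowvram:  # raises AttributeError when the load failed and m is None
        print("unload lowvram modules first")
    print("model moved to CPU")

send_model_to_cpu(LoadedModel())  # fine
send_model_to_cpu(None)           # AttributeError: 'NoneType' object has no attribute 'lowvram'

A None guard in send_model_to_cpu would presumably avoid this secondary error, but the underlying problem is that the model fails to load in the first place.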

Steps to reproduce the problem

  1. Install the model from the repo.
  2. Run the terminal commands as shown below.

What should have happened?

Images should have been generated as expected.

What browsers do you use to access the UI ?

Google Chrome

Sysinfo

{
"Platform": "macOS-13.6.3-x86_64-i386-64bit",
"Python": "3.10.14",
"Version": "v1.9.3",
"Commit": "1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0",
"Script path": "/Users/thomasmclaughlin/stable-diffusion-webui",
"Data path": "/Users/thomasmclaughlin/stable-diffusion-webui",
"Extensions dir": "/Users/thomasmclaughlin/stable-diffusion-webui/extensions",
"Checksum": "a71d5230d4b09eda5a5cb0ad0af4a92603e3c20d85b23a32d1aeabf66ffb2d2b",
"Commandline": [
"launch.py",
"--skip-torch-cuda-test",
"--upcast-sampling",
"--no-half-vae",
"--use-cpu",
"interrogate"
],
"Torch env info": {
"torch_version": "2.2.0",
"is_debug_build": "False",
"cuda_compiled_version": null,
"gcc_version": null,
"clang_version": "12.0.0 (clang-1200.0.32.29)",
"cmake_version": "version 3.29.3",
"os": "macOS 13.6.3 (x86_64)",
"libc_version": "N/A",
"python_version": "3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)",
"python_platform": "macOS-13.6.3-x86_64-i386-64bit",
"is_cuda_available": "False",
"cuda_runtime_version": null,
"cuda_module_loading": "N/A",
"nvidia_driver_version": null,
"nvidia_gpu_models": null,
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.2.0.dev20231010",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.0",
"torchsde==0.2.6",
"torchvision==0.17.0.dev20231010"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": "Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz"
},
"Exceptions": [
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 57, f",
"res = list(func(*args, **kwargs))"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 36, f",
"res = func(*args, **kwargs)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py, line 832, process_images",
"sd_models.reload_model_weights()"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
},
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 57, f",
"res = list(func(*args, **kwargs))"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 36, f",
"res = func(*args, **kwargs)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py, line 832, process_images",
"sd_models.reload_model_weights()"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
},
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 57, f",
"res = list(func(*args, **kwargs))"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 36, f",
"res = func(*args, **kwargs)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py, line 832, process_images",
"sd_models.reload_model_weights()"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
},
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 57, f",
"res = list(func(*args, **kwargs))"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 36, f",
"res = func(*args, **kwargs)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py, line 832, process_images",
"sd_models.reload_model_weights()"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
},
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/options.py, line 165, set",
"option.onchange()"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py, line 13, f",
"res = func(*args, **kwargs)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/initialize_util.py, line 181, ",
"shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
}
],
"CPU": {
"model": "i386",
"count logical": 8,
"count physical": 4
},
"RAM": {
"total": "16GB",
"used": "9GB",
"free": "166MB",
"active": "6GB",
"inactive": "5GB"
},
"Extensions": [],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate",
"GIT": "git",
"GRADIO_ANALYTICS_ENABLED": "False",
"TORCH_COMMAND": "pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors [6ce0161689]",
"sd_checkpoint_hash": "6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa"
},
"Startup": {
"total": 1394.5017268657684,
"records": {
"initial startup": 0.0017948150634765625,
"prepare environment/checks": 0.00012803077697753906,
"prepare environment/git version info": 0.03627800941467285,
"prepare environment/install torch": 73.99712896347046,
"prepare environment/torch GPU test": 2.8848648071289062e-05,
"prepare environment/install clip": 6.799776077270508,
"prepare environment/install open_clip": 6.3331379890441895,
"prepare environment/clone repositores": 31.969136953353882,
"prepare environment/install requirements": 51.7264130115509,
"prepare environment/run extensions installers": 0.0006480216979980469,
"prepare environment": 170.86249089241028,
"launcher": 0.007987022399902344,
"import torch": 4.932553052902222,
"import gradio": 0.7838048934936523,
"setup paths": 1.3083140850067139,
"import ldm": 0.032701969146728516,
"import sgm": 7.867813110351562e-06,
"initialize shared": 0.2062370777130127,
"other imports": 1.0896401405334473,
"opts onchange": 0.0005178451538085938,
"setup SD model": 8.702278137207031e-05,
"setup codeformer": 0.003304004669189453,
"setup gfpgan": 0.0067980289459228516,
"set samplers": 4.315376281738281e-05,
"list extensions": 0.0010807514190673828,
"restore config state file": 1.0013580322265625e-05,
"list SD models": 1211.7901101112366,
"list localizations": 0.003938913345336914,
"load scripts/custom_code.py": 0.011410236358642578,
"load scripts/img2imgalt.py": 0.009813785552978516,
"load scripts/loopback.py": 0.0044231414794921875,
"load scripts/outpainting_mk_2.py": 0.012867927551269531,
"load scripts/poor_mans_outpainting.py": 0.00356292724609375,
"load scripts/postprocessing_codeformer.py": 0.0025010108947753906,
"load scripts/postprocessing_gfpgan.py": 0.0018310546875,
"load scripts/postprocessing_upscale.py": 0.00765681266784668,
"load scripts/prompt_matrix.py": 0.0036492347717285156,
"load scripts/prompts_from_file.py": 0.00461888313293457,
"load scripts/sd_upscale.py": 0.00616908073425293,
"load scripts/xyz_grid.py": 0.02607107162475586,
"load scripts/ldsr_model.py": 0.4685349464416504,
"load scripts/lora_script.py": 0.5496799945831299,
"load scripts/scunet_model.py": 0.060070037841796875,
"load scripts/swinir_model.py": 0.06432700157165527,
"load scripts/hotkey_config.py": 0.0028319358825683594,
"load scripts/extra_options_section.py": 0.0037012100219726562,
"load scripts/hypertile_script.py": 0.12613582611083984,
"load scripts/hypertile_xyz.py": 0.00042891502380371094,
"load scripts/postprocessing_autosized_crop.py": 0.0031549930572509766,
"load scripts/postprocessing_caption.py": 0.00144195556640625,
"load scripts/postprocessing_create_flipped_copies.py": 0.001435995101928711,
"load scripts/postprocessing_focal_crop.py": 0.011153221130371094,
"load scripts/postprocessing_split_oversized.py": 0.002106904983520508,
"load scripts/soft_inpainting.py": 0.006776094436645508,
"load scripts/comments.py": 0.055504798889160156,
"load scripts/refiner.py": 0.002582073211669922,
"load scripts/sampler.py": 0.002046823501586914,
"load scripts/seed.py": 0.0026912689208984375,
"load scripts": 1.4592630863189697,
"load upscalers": 0.011837005615234375,
"refresh VAE": 0.0022759437561035156,
"refresh textual inversion templates": 0.0009620189666748047,
"scripts list_optimizers": 0.0008859634399414062,
"scripts list_unets": 4.6253204345703125e-05,
"reload hypernetworks": 0.0015668869018554688,
"initialize extra networks": 0.02116107940673828,
"scripts before_ui_callback": 0.0008487701416015625,
"create ui": 1.0236432552337646,
"gradio launch": 0.9025290012359619,
"add APIs": 0.04110574722290039,
"app_started_callback/lora_script.py": 0.004281044006347656,
"app_started_callback": 0.0043032169342041016
}
},
"Packages": [
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.5",
"aiosignal==1.3.1",
"altair==5.3.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==23.2.0",
"blendmodes==2022",
"certifi==2024.2.2",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"colorama==0.4.6",
"contourpy==1.2.1",
"cycler==0.12.1",
"deprecation==2.1.0",
"diskcache==5.6.3",
"einops==0.4.1",
"exceptiongroup==1.2.1",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.2",
"filelock==3.13.1",
"filterpy==1.4.5",
"fonttools==4.51.0",
"frozenlist==1.4.1",
"fsspec==2024.2.0",
"ftfy==6.2.0",
"gitdb==4.0.11",
"gitpython==3.1.32",
"gradio-client==0.5.0",
"gradio==3.41.2",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.23.0",
"idna==3.7",
"imageio==2.34.1",
"importlib-resources==6.4.0",
"inflection==0.5.1",
"jinja2==3.1.3",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.12.1",
"jsonschema==4.22.0",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.4",
"lightning-utilities==0.11.2",
"llvmlite==0.42.0",
"markupsafe==2.1.5",
"matplotlib==3.8.4",
"mpmath==1.2.1",
"multidict==6.0.5",
"networkx==3.2.1",
"numba==0.59.1",
"numpy==1.26.2",
"omegaconf==2.2.3",
"open-clip-torch==2.20.0",
"opencv-python==4.9.0.80",
"orjson==3.10.3",
"packaging==24.0",
"pandas==2.2.2",
"piexif==1.1.3",
"pillow-avif-plugin==1.4.3",
"pillow==9.5.0",
"pip==24.0",
"pretty-errors==1.2.25",
"protobuf==3.20.0",
"psutil==5.9.5",
"pydantic==1.10.15",
"pydub==0.25.1",
"pyparsing==3.1.2",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytz==2024.1",
"pywavelets==1.6.0",
"pyyaml==6.0.1",
"referencing==0.35.1",
"regex==2024.5.10",
"requests==2.31.0",
"resize-right==0.0.2",
"rpds-py==0.18.1",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.13.0",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.2.0",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"spandrel==0.1.6",
"starlette==0.26.1",
"sympy==1.12",
"tifffile==2024.5.10",
"timm==0.9.16",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"toolz==0.12.1",
"torch==2.2.0.dev20231010",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.0",
"torchsde==0.2.6",
"torchvision==0.17.0.dev20231010",
"tqdm==4.66.4",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing-extensions==4.8.0",
"tzdata==2024.1",
"urllib3==2.2.1",
"uvicorn==0.29.0",
"wcwidth==0.2.13",
"websockets==11.0.3",
"yarl==1.9.4"
]
}

Console logs

Launching webui...

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on thomasmclaughlin user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Installing torch and torchvision
Looking in indexes: https://download.pytorch.org/whl/nightly/cpu
Collecting torch
  Downloading https://download.pytorch.org/whl/nightly/cpu/torch-2.2.0.dev20231010-cp310-none-macosx_10_9_x86_64.whl (147.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.6/147.6 MB 4.1 MB/s eta 0:00:00
Collecting torchvision
  Downloading https://download.pytorch.org/whl/nightly/cpu/torchvision-0.17.0.dev20231010-cp310-cp310-macosx_10_13_x86_64.whl (1.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 4.7 MB/s eta 0:00:00
Collecting filelock (from torch)
  Downloading https://download.pytorch.org/whl/nightly/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting typing-extensions (from torch)
  Downloading https://download.pytorch.org/whl/nightly/typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Collecting sympy (from torch)
  Downloading https://download.pytorch.org/whl/nightly/sympy-1.12-py3-none-any.whl (5.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 MB 4.9 MB/s eta 0:00:00
Collecting networkx (from torch)
  Downloading https://download.pytorch.org/whl/nightly/networkx-3.2.1-py3-none-any.whl (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 4.7 MB/s eta 0:00:00
Collecting jinja2 (from torch)
  Downloading https://download.pytorch.org/whl/nightly/Jinja2-3.1.3-py3-none-any.whl (133 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 2.7 MB/s eta 0:00:00
Collecting fsspec (from torch)
  Downloading https://download.pytorch.org/whl/nightly/fsspec-2024.2.0-py3-none-any.whl (170 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 170.9/170.9 kB 3.0 MB/s eta 0:00:00
Collecting numpy (from torchvision)
  Downloading https://download.pytorch.org/whl/nightly/numpy-1.26.4-cp310-cp310-macosx_10_9_x86_64.whl (20.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.6/20.6 MB 4.9 MB/s eta 0:00:00
Collecting requests (from torchvision)
  Downloading https://download.pytorch.org/whl/nightly/requests-2.31.0-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB 1.8 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Downloading https://download.pytorch.org/whl/nightly/Pillow-9.3.0-cp310-cp310-macosx_10_10_x86_64.whl (3.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 4.9 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Downloading https://download.pytorch.org/whl/nightly/MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_x86_64.whl (14 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision)
  Downloading https://download.pytorch.org/whl/nightly/charset_normalizer-3.3.2-cp310-cp310-macosx_10_9_x86_64.whl (122 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 122.5/122.5 kB 2.4 MB/s eta 0:00:00
Collecting idna<4,>=2.5 (from requests->torchvision)
  Downloading https://download.pytorch.org/whl/nightly/idna-3.7-py3-none-any.whl (66 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.8/66.8 kB 1.8 MB/s eta 0:00:00
Collecting urllib3<3,>=1.21.1 (from requests->torchvision)
  Downloading https://download.pytorch.org/whl/nightly/urllib3-2.2.1-py3-none-any.whl (121 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.1/121.1 kB 984.8 kB/s eta 0:00:00
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Downloading https://download.pytorch.org/whl/nightly/certifi-2024.2.2-py3-none-any.whl (163 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 163.8/163.8 kB 3.0 MB/s eta 0:00:00
Collecting mpmath>=0.19 (from sympy->torch)
  Downloading https://download.pytorch.org/whl/nightly/mpmath-1.2.1-py3-none-any.whl (532 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 532.6/532.6 kB 4.2 MB/s eta 0:00:00
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2024.2.0 idna-3.7 jinja2-3.1.3 mpmath-1.2.1 networkx-3.2.1 numpy-1.26.4 pillow-9.3.0 requests-2.31.0 sympy-1.12 torch-2.2.0.dev20231010 torchvision-0.17.0.dev20231010 typing-extensions-4.8.0 urllib3-2.2.1
Installing clip
Installing open_clip
Cloning assets into /Users/thomasmclaughlin/stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
Cloning into '/Users/thomasmclaughlin/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 8.29 MiB/s, done.
Cloning Stable Diffusion into /Users/thomasmclaughlin/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning into '/Users/thomasmclaughlin/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (306/306), done.
remote: Total 580 (delta 278), reused 446 (delta 247), pack-reused 9
Receiving objects: 100% (580/580), 73.44 MiB | 4.87 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into /Users/thomasmclaughlin/stable-diffusion-webui/repositories/generative-models...
Cloning into '/Users/thomasmclaughlin/stable-diffusion-webui/repositories/generative-models'...
remote: Enumerating objects: 941, done.
remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
Receiving objects: 100% (941/941), 43.85 MiB | 4.87 MiB/s, done.
Resolving deltas: 100% (490/490), done.
Cloning K-diffusion into /Users/thomasmclaughlin/stable-diffusion-webui/repositories/k-diffusion...
Cloning into '/Users/thomasmclaughlin/stable-diffusion-webui/repositories/k-diffusion'...
remote: Enumerating objects: 1345, done.
remote: Counting objects: 100% (1345/1345), done.
remote: Compressing objects: 100% (434/434), done.
remote: Total 1345 (delta 944), reused 1264 (delta 904), pack-reused 0
Receiving objects: 100% (1345/1345), 239.04 KiB | 1.41 MiB/s, done.
Resolving deltas: 100% (944/944), done.
Cloning BLIP into /Users/thomasmclaughlin/stable-diffusion-webui/repositories/BLIP...
Cloning into '/Users/thomasmclaughlin/stable-diffusion-webui/repositories/BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 4.88 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /Users/thomasmclaughlin/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors

100%|██████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [20:11<00:00, 3.52MB/s]
Calculating sha256 for /Users/thomasmclaughlin/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 1394.5s (prepare environment: 170.9s, import torch: 4.9s, import gradio: 0.8s, setup paths: 1.3s, initialize shared: 0.2s, other imports: 1.1s, list SD models: 1211.8s, load scripts: 1.5s, create ui: 1.0s, gradio launch: 0.9s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /Users/thomasmclaughlin/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /Users/thomasmclaughlin/stable-diffusion-webui/configs/v1-inference.yaml
/Users/thomasmclaughlin/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: AssertionError
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 770, in load_model
    with devices.autocast(), torch.no_grad():
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/devices.py", line 218, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled


Stable diffusion model failed to load
changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors: AttributeError
Traceback (most recent call last):
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

*** Error completing request
*** Arguments: ('task(81zrcszrkhvbhsb)', <gradio.routes.Request object at 0x167751db0>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py", line 832, in process_images
        sd_models.reload_model_weights()
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
        if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

---
*** Error completing request
*** Arguments: ('task(830it2y4e4bv7cc)', <gradio.routes.Request object at 0x16787f820>, 'tewst', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py", line 832, in process_images
        sd_models.reload_model_weights()
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
        if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

---
*** Error completing request
*** Arguments: ('task(01eqylyyp5fjkmj)', <gradio.routes.Request object at 0x166d9cb20>, 'test', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py", line 832, in process_images
        sd_models.reload_model_weights()
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
        if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

---
*** Error completing request
*** Arguments: ('task(cezri1uh4ecc63n)', <gradio.routes.Request object at 0x167762020>, 'test', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py", line 832, in process_images
        sd_models.reload_model_weights()
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
        if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

---

Additional information

No response

@mclaughlin111 mclaughlin111 added the bug-report Report of a bug, yet to be confirmed label May 14, 2024
@yaohuiwu

To work around the issue, you need to:

  1. Stop the webui;
  2. Delete the v1-5-pruned-emaonly.safetensors file you downloaded (in the dir /path/to/stable-diffusion-webui/models/Stable-diffusion);
  3. Start the webui again and let it download the v1-5-pruned-emaonly.safetensors file itself.

I don't know why the file downloaded by the webui itself is OK but mine is not. I even checked the SHA256 checksum. It's so weird, and it wasted a lot of my time.
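
For anyone who wants to script steps 2 and 3, here is a minimal sketch in Python (the checkpoint path is an assumption based on the default install layout in this report; adjust it to your setup, and run it only while the webui is stopped):

from pathlib import Path

# Assumed default location of the manually downloaded checkpoint; edit as needed.
ckpt = Path.home() / "stable-diffusion-webui" / "models" / "Stable-diffusion" / "v1-5-pruned-emaonly.safetensors"

if ckpt.exists():
    ckpt.unlink()  # remove the file so the webui re-downloads it on next launch
    print(f"Deleted {ckpt}; restart the webui to let it fetch the checkpoint itself.")
else:
    print(f"No checkpoint found at {ckpt}")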

@mclaughlin111
Author

I followed these steps: exactly the same error.
[Screenshot: 2024-05-15 at 10:37:08 AM]

100%|██████████████████████████████████████| 3.97G/3.97G [15:08<00:00, 4.69MB/s]
Calculating sha256 for /Users/thomasmclaughlin/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 920.8s (prepare environment: 0.2s, import torch: 5.7s, import gradio: 1.2s, setup paths: 1.6s, initialize shared: 0.1s, other imports: 1.2s, list SD models: 909.1s, load scripts: 0.7s, create ui: 0.4s, gradio launch: 0.5s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /Users/thomasmclaughlin/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /Users/thomasmclaughlin/stable-diffusion-webui/configs/v1-inference.yaml
/Users/thomasmclaughlin/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: AssertionError
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 770, in load_model
    with devices.autocast(), torch.no_grad():
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/devices.py", line 218, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled


Stable diffusion model failed to load
changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors: AttributeError
Traceback (most recent call last):
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

*** Error completing request
*** Arguments: ('task(q0r8omuzr5jxt25)', <gradio.routes.Request object at 0x15fa8f8b0>, 'test', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/processing.py", line 832, in process_images
        sd_models.reload_model_weights()
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "/Users/thomasmclaughlin/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
        if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

---

@yaohuiwu

It seems it may not be due to file integrity, then. I have no other ideas. Good luck to you!

@ceeyang

ceeyang commented May 17, 2024

When I completely reinstalled the project, it worked the first time, but once I added other models, the project would throw this exception.

I found that after I deleted the models I had added manually, the project could run normally again.
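
A small helper sketch, in case it is useful for anyone checking the same thing: it lists whatever checkpoints are in the default models folder so manually added files are easy to spot and move out before restarting the webui (the directory path is an assumption based on the standard webui layout):

from pathlib import Path

# Assumed default checkpoints folder; adjust to your install location.
models_dir = Path.home() / "stable-diffusion-webui" / "models" / "Stable-diffusion"

if models_dir.is_dir():
    for f in sorted(models_dir.iterdir()):
        if f.suffix in (".safetensors", ".ckpt"):
            print(f"{f.name}  {f.stat().st_size / 1e9:.2f} GB")
else:
    print(f"No models directory at {models_dir}")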

@100-gusenits

I removed all models, did a fresh install, tried to switch to the default model (after it was pulled again), and see:
changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors: AttributeError
Traceback (most recent call last):
  File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
