Commit Graph

  • 4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI comfyanonymous 2023-07-20 00:34:54 -04:00
  • ccb6b70de1 Move image encoding outside of sampling loop for better preview perf. comfyanonymous 2023-07-19 17:37:27 -04:00
  • 39c58b227f Disable cuda malloc on GTX 750 Ti. comfyanonymous 2023-07-19 15:14:10 -04:00
  • d5c0765f4e Update how to get the prompt in api format in the example. comfyanonymous 2023-07-19 15:07:12 -04:00
  • 799c08a4ce Auto disable cuda malloc on some GPUs on windows. comfyanonymous 2023-07-19 14:43:55 -04:00
  • 0b284f650b Fix typo. comfyanonymous 2023-07-19 10:20:32 -04:00
  • e032ca6138 Fix ddim issue with older torch versions. comfyanonymous 2023-07-19 10:16:00 -04:00
  • 18885f803a Add MX450 and MX550 to list of cards with broken fp16. comfyanonymous 2023-07-19 03:08:30 -04:00
  • 9ba440995a It's actually possible to torch.compile the unet now. comfyanonymous 2023-07-18 21:36:35 -04:00
  • 51d5477579 Add key to indicate checkpoint is v_prediction when saving. comfyanonymous 2023-07-18 00:25:53 -04:00
  • ff6b047a74 Fix device print on old torch version. comfyanonymous 2023-07-17 15:18:58 -04:00
  • 9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up. comfyanonymous 2023-07-17 14:40:29 -04:00
  • 55d0fca9fa --windows-standalone-build now enables --cuda-malloc comfyanonymous 2023-07-17 14:10:36 -04:00
  • 1679abd86d Add a command line argument to enable backend:cudaMallocAsync comfyanonymous 2023-07-17 11:00:14 -04:00
  • 3a150bad15 Only calculate randn in some samplers when it's actually being used. comfyanonymous 2023-07-17 10:11:08 -04:00
  • ee8f8ee07f Fix regression with ddim and uni_pc when batch size > 1. comfyanonymous 2023-07-17 09:35:19 -04:00
  • 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. comfyanonymous 2023-07-17 01:22:12 -04:00
  • ac9c038ac2 Merge branch 'master' of https://github.com/ComfyUI-Community/ComfyUI comfyanonymous 2023-07-16 03:04:45 -04:00
  • 5f57362613 Lower lora ram usage when in normal vram mode. comfyanonymous 2023-07-16 02:48:09 -04:00
  • a8f3bbc35d Patch del self.loaded_lora to prevent error with persistent lora_name swapping ComfyUI-Community 2023-07-15 17:11:12 -07:00
  • 490771b7f4 Speed up lora loading a bit. comfyanonymous 2023-07-15 13:24:05 -04:00
  • 50b1180dde Fix CLIPSetLastLayer not reverting when removed. comfyanonymous 2023-07-15 01:10:33 -04:00
  • 6fb084f39d Reduce floating point rounding errors in loras. comfyanonymous 2023-07-15 00:45:38 -04:00
  • 91ed2815d5 Add a node to merge CLIP models. comfyanonymous 2023-07-14 02:37:30 -04:00
  • 907c9fbf0d Refactor to make it easier to set the api path. comfyanonymous 2023-07-14 00:46:25 -04:00
  • 30ea187160 Merge branch 'use-relative-paths' of https://github.com/mcmonkey4eva/ComfyUI comfyanonymous 2023-07-13 23:56:29 -04:00
  • eed3042830 Move conditioning concat node to conditioning section. comfyanonymous 2023-07-13 21:43:22 -04:00
  • 8a577966c5 Enables a way to save workflows in api format in frontend. comfyanonymous 2023-07-13 21:08:54 -04:00
  • bdba394290 Add a canny preprocessor node. comfyanonymous 2023-07-13 13:26:48 -04:00
  • 6f914fb77d Print prestartup times for custom nodes. comfyanonymous 2023-07-13 13:01:45 -04:00
  • 3bc8be33e4 Don't let custom nodes overwrite base nodes. comfyanonymous 2023-07-13 12:52:42 -04:00
  • 876dadca84 Highlight nodes with errors in red even when workflow works fine. comfyanonymous 2023-07-13 02:25:38 -04:00
  • b2f03164c7 Prevent the clip_g position_ids key from being saved in the checkpoint. comfyanonymous 2023-07-12 20:15:02 -04:00
  • 46dc050c9f Fix potential tensors being on different devices issues. comfyanonymous 2023-07-12 19:28:48 -04:00
  • 90aa597099 Add back roundRect to fix issue on firefox ESR. comfyanonymous 2023-07-12 02:07:48 -04:00
  • 3e2309f149 fix mps miss import KarryCharon 2023-07-12 10:06:34 +08:00
  • f4b9390623 Add a random string to the temp prefix for PreviewImage. comfyanonymous 2023-07-11 17:35:55 -04:00
  • 2b2a1474f7 Move to litegraph. comfyanonymous 2023-07-11 03:12:00 -04:00
  • cef30cc6b6 Merge branch 'hidpi-canvas' of https://github.com/EHfive/ComfyUI comfyanonymous 2023-07-11 03:04:10 -04:00
  • 880c9b928b Update litegraph to latest. comfyanonymous 2023-07-11 02:56:37 -04:00
  • 05e6eac7b3 Scale graph canvas based on DPI factor Huang-Huang Bao 2023-07-08 11:27:56 +08:00
  • 99abcbef41 feat/startup-script: Feature to avoid package installation errors when installing custom nodes. (#856) Dr.Lt.Data 2023-07-11 15:33:21 +09:00
  • 606a537090 Support SDXL embedding format with 2 CLIP. comfyanonymous 2023-07-10 10:28:38 -04:00
  • 5797ff89b0 use relative paths for all web connections Alex "mcmonkey" Goodwin 2023-07-10 02:09:03 -07:00
  • 6ad0a6d7e2 Don't patch weights when multiplier is zero. comfyanonymous 2023-07-09 17:46:56 -04:00
  • af15add967 Fix annoyance with textbox unselecting in chromium. comfyanonymous 2023-07-09 15:41:19 -04:00
  • d5323d16e0 latent2rgb matrix for SDXL. comfyanonymous 2023-07-09 13:59:09 -04:00
  • 0ae81c03bb Empty cache after model unloading for normal vram and lower. comfyanonymous 2023-07-09 09:56:03 -04:00
  • d3f5998218 Support loading clip_g from diffusers in CLIP Loader nodes. comfyanonymous 2023-07-09 09:33:53 -04:00
  • a9a4ba7574 Fix merging not working when model2 of model merge node was a merge. comfyanonymous 2023-07-08 22:16:40 -04:00
  • febea8c101 Merge branch 'bugfix/img-offset' of https://github.com/ltdrdata/ComfyUI comfyanonymous 2023-07-08 03:45:37 -04:00
  • 9caab9380d fix: Image.ANTIALIAS is no longer available. (#847) Dr.Lt.Data 2023-07-08 15:36:48 +09:00
  • d43cff2105 bugfix: image widget's was mis-aligned when node has multiline widget Dr.Lt.Data 2023-07-08 01:42:33 +09:00
  • c2d407b0f7 Merge branch 'Yaruze66-patch-1' of https://github.com/Yaruze66/ComfyUI comfyanonymous 2023-07-07 01:55:10 -04:00
  • bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI comfyanonymous 2023-07-07 01:52:25 -04:00
  • 2c9d98f3e6 CLIPTextEncodeSDXL now works when prompts are of very different sizes. comfyanonymous 2023-07-06 23:21:57 -04:00
  • e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. comfyanonymous 2023-07-06 18:04:28 -04:00
  • f5232c4869 Fix 7z error when extracting package. comfyanonymous 2023-07-06 04:18:36 -04:00
  • 608fcc2591 Fix bug with weights when prompt is long. comfyanonymous 2023-07-06 02:43:40 -04:00
  • ddc6f12ad5 Disable autocast in unet for increased speed. comfyanonymous 2023-07-05 20:58:44 -04:00
  • 603f02d613 Fix loras not working when loading checkpoint with config. comfyanonymous 2023-07-05 19:37:19 -04:00
  • ccb1b25908 Add a conditioning concat node. comfyanonymous 2023-07-05 17:40:22 -04:00
  • af7a49916b Support loading unet files in diffusers format. comfyanonymous 2023-07-05 17:34:45 -04:00
  • e57cba4c61 Add gpu variations of the sde samplers that are less deterministic but faster. comfyanonymous 2023-07-05 01:37:34 -04:00
  • f81b192944 Add logit scale parameter so it's present when saving the checkpoint. comfyanonymous 2023-07-04 23:01:28 -04:00
  • acf95191ff Properly support SDXL diffusers loras for unet. comfyanonymous 2023-07-04 21:10:12 -04:00
  • c61a95f9f7 Fix size check for conditioning mask mara 2023-07-04 16:30:17 +02:00
  • 8d694cc450 Fix issue with OSX. comfyanonymous 2023-07-04 02:09:02 -04:00
  • c02f3baeaf Now the model merge blocks node will use the longest match. comfyanonymous 2023-07-04 00:51:17 -04:00
  • 3a09fac835 ConditioningAverage now also averages the pooled output. comfyanonymous 2023-07-03 21:44:37 -04:00
  • d94ddd8548 Add text encode nodes to control the extra parameters in SDXL. comfyanonymous 2023-07-03 19:10:47 -04:00
  • c3e96e637d Pass device to CLIP model. comfyanonymous 2023-07-03 16:09:02 -04:00
  • 5e6bc824aa Allow passing custom path to clip-g and clip-h. comfyanonymous 2023-07-03 15:45:04 -04:00
  • dc9d1f31c8 Improvements for OSX. comfyanonymous 2023-07-03 00:08:30 -04:00
  • 9ae6ff65bc Update extra_model_paths.yaml.example: add RealESRGAN path Yaruze66 2023-07-02 22:59:55 +05:00
  • 103c487a89 Cleanup. comfyanonymous 2023-07-02 11:57:36 -04:00
  • ae948b42fa Add taesd weights to standalones. comfyanonymous 2023-07-02 11:47:30 -04:00
  • 2c4e0b49b7 Switch to fp16 on some cards when the model is too big. comfyanonymous 2023-07-02 09:37:31 -04:00
  • 6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. comfyanonymous 2023-07-01 22:42:35 -04:00
  • 1c1b0e7299 --gpu-only now keeps the VAE on the device. comfyanonymous 2023-07-01 15:22:40 -04:00
  • ce35d8c659 Lower latency by batching some text encoder inputs. comfyanonymous 2023-07-01 15:07:39 -04:00
  • 3b6fe51c1d Leave text_encoder on the CPU when it can handle it. comfyanonymous 2023-07-01 14:38:51 -04:00
  • b6a60fa696 Try to keep text encoders loaded and patched to increase speed. comfyanonymous 2023-07-01 13:22:51 -04:00
  • 97ee230682 Make highvram and normalvram shift the text encoders to vram and back. comfyanonymous 2023-07-01 12:37:23 -04:00
  • fa1959e3ef Fix nightly packaging. comfyanonymous 2023-07-01 01:31:03 -04:00
  • 9f2986318f Move model merging nodes to advanced and add to readme. comfyanonymous 2023-06-30 14:51:44 -04:00
  • 5a9ddf94eb LoraLoader node now caches the lora file between executions. comfyanonymous 2023-06-29 23:40:02 -04:00
  • 6e9f28401f Persist node instances between executions instead of deleting them. comfyanonymous 2023-06-29 23:38:56 -04:00
  • 9920367d3c Fix embeddings not working with --gpu-only comfyanonymous 2023-06-29 20:42:19 -04:00
  • 62db11683b Move unet to device right after loading on highvram mode. comfyanonymous 2023-06-29 11:19:58 -04:00
  • e7ed507d3d Add link to 7z in README (#809) reaper47 2023-06-29 10:09:59 +02:00
  • 4376b125eb Remove useless code. comfyanonymous 2023-06-29 00:26:33 -04:00
  • 89120f1fbe This is unused but it should be 1280. comfyanonymous 2023-06-28 18:04:23 -04:00
  • 2c7c14de56 Support for SDXL text encoder lora. comfyanonymous 2023-06-28 02:22:49 -04:00
  • fcef47f06e Fix bug. comfyanonymous 2023-06-28 00:38:07 -04:00
  • 2d880fec3a Add a node to zero out the cond to advanced/conditioning comfyanonymous 2023-06-27 23:30:52 -04:00
  • 50abf7c938 Merge branch 'patch-1' of https://github.com/jjangga0214/ComfyUI comfyanonymous 2023-06-27 01:42:16 -04:00
  • 8248babd44 Use pytorch attention by default on nvidia when xformers isn't present. comfyanonymous 2023-06-26 12:55:07 -04:00
  • 9b93b920be Add CheckpointSave node to save checkpoints. comfyanonymous 2023-06-26 12:21:07 -04:00
  • b72a7a835a Support loras based on the stability unet implementation. comfyanonymous 2023-06-26 02:56:11 -04:00
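Several commits above (1679abd86d, 55d0fca9fa, 9871a15cf9) concern enabling PyTorch's asynchronous CUDA allocator via a `--cuda-malloc` flag. As a minimal sketch of the underlying mechanism only — not ComfyUI's actual launch code — PyTorch 2.0+ selects this allocator backend through the standard `PYTORCH_CUDA_ALLOC_CONF` environment variable, which must be set before `torch` is imported:

```python
import os

# Select the cudaMallocAsync allocator backend. PyTorch reads this
# setting once at CUDA initialization, so it must be in the
# environment before `import torch` runs.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

# import torch  # torch would pick up the setting on first CUDA use
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

How ComfyUI decides when to enable this (e.g. the per-GPU blocklist in commits 799c08a4ce and 39c58b227f) is specific to its own startup code and is not reproduced here.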