Commit Graph

  • 6f81cd8973 Change defaults in WanImageToVideo node. comfyanonymous 2025-03-01 19:26:48 -05:00
  • 4dc6709307 Rename argument in last commit and document the options. comfyanonymous 2025-03-01 02:43:49 -05:00
  • 4d55f16ae8 Use enum list for --fast options (#7024) Chenlei Hu 2025-03-01 02:37:35 -05:00
  • cf0b549d48 --fast now takes a number as an argument to indicate how fast you want it. comfyanonymous 2025-02-28 02:48:20 -05:00
  • eb4543474b Use fp16 for intermediate for fp8 weights with --fast if supported. comfyanonymous 2025-02-28 02:17:50 -05:00
  • 1804397952 Use fp16 if checkpoint weights are fp16 and the model supports it. comfyanonymous 2025-02-27 16:39:57 -05:00
  • f4dac8ab6f Wan code small cleanup. comfyanonymous 2025-02-27 07:22:42 -05:00
  • b07f116dea Bump ComfyUI version to v0.3.18 comfyanonymous 2025-02-26 21:19:14 -05:00
  • 714f728820 Add to README that the Wan model is supported. comfyanonymous 2025-02-26 20:48:50 -05:00
  • 92d8d15300 Readme changes. comfyanonymous 2025-02-26 20:47:08 -05:00
  • 89253e9fe5 Support Cambricon MLU (#6964) BiologicalExplosion 2025-02-27 09:45:13 +08:00
  • 3ea3bc8546 Fix wan issues when prompt length is long. comfyanonymous 2025-02-26 20:34:02 -05:00
  • 8e69e2ddfd Bump ComfyUI version to v0.3.17 comfyanonymous 2025-02-26 17:59:10 -05:00
  • 0270a0b41c Reduce artifacts on Wan by doing the patch embedding in fp32. comfyanonymous 2025-02-26 16:59:26 -05:00
  • 26c7baf789 Bump ComfyUI version to v0.3.16 comfyanonymous 2025-02-26 14:30:32 -05:00
  • c37f15f98e Add fast preview support for Wan models. comfyanonymous 2025-02-26 08:56:23 -05:00
  • 4bca7367f3 Don't try to use clip_fea on t2v model. comfyanonymous 2025-02-26 08:38:09 -05:00
  • b6fefe686b Better wan memory estimation. comfyanonymous 2025-02-26 07:51:22 -05:00
  • fa62287f1f More code reuse in wan. comfyanonymous 2025-02-26 05:22:29 -05:00
  • 0844998db3 Slightly better wan i2v mask implementation. comfyanonymous 2025-02-26 03:49:50 -05:00
  • 4ced06b879 WIP support for Wan I2V model. comfyanonymous 2025-02-26 01:49:43 -05:00
  • cb06e9669b Wan seems to work with fp16. comfyanonymous 2025-02-25 21:37:12 -05:00
  • 0c32f82298 Fix missing frames in SaveWEBM node. comfyanonymous 2025-02-25 20:21:03 -05:00
  • 189da3726d Update README.md (#6960) Yoland Yan 2025-02-25 17:17:18 -08:00
  • 9a66bb972d Make wan work with all latent resolutions. comfyanonymous 2025-02-25 19:56:04 -05:00
  • ea0f939df3 Fix issue with wan and other attention implementations. comfyanonymous 2025-02-25 19:13:39 -05:00
  • f37551c1d2 Change wan rope implementation to the flux one. comfyanonymous 2025-02-25 19:11:14 -05:00
  • 63023011b9 WIP support for Wan t2v model. comfyanonymous 2025-02-25 17:20:35 -05:00
  • f40076096e Cleanup some lumina te code. comfyanonymous 2025-02-25 04:10:26 -05:00
  • 96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. comfyanonymous 2025-02-24 05:41:07 -05:00
  • 4553891bbd Update installation documentation to include desktop + cli. (#6899) Robin Huang 2025-02-23 16:13:39 -08:00
  • ace899e71a Prioritize fp16 compute when using allow_fp16_accumulation comfyanonymous 2025-02-23 04:45:54 -05:00
  • aff16532d4 Remove some useless code. comfyanonymous 2025-02-22 04:45:14 -05:00
  • b50ab153f9 Bump ComfyUI version to v0.3.15 comfyanonymous 2025-02-21 20:28:28 -05:00
  • 072db3bea6 Assume the mac black image bug won't be fixed before v16. comfyanonymous 2025-02-21 20:24:07 -05:00
  • a6deca6d9a Latest mac still has the black image bug. comfyanonymous 2025-02-21 20:14:30 -05:00
  • 41c30e92e7 Let all model memory be offloaded on nvidia. comfyanonymous 2025-02-21 06:32:11 -05:00
  • f579a740dd Update frontend release schedule in README. (#6908) filtered 2025-02-21 21:58:12 +11:00
  • d37272532c Add discord channel to support section. (#6900) Robin Huang 2025-02-20 15:26:16 -08:00
  • 12da6ef581 Apparently directml supports fp16. comfyanonymous 2025-02-20 09:29:59 -05:00
  • 29d4384a75 Normalize extra_model_config.yaml paths to prevent duplicates. (#6885) Robin Huang 2025-02-20 04:09:45 -08:00
  • c5be423d6b Fix link pointing to non-existing docs (#6891) Silver 2025-02-20 13:07:07 +01:00
  • b4d3652d88 fixed: crash caused by outdated incompatible aiohttp dependency (#6841) Dr.Lt.Data 2025-02-19 21:15:36 +09:00
  • 5715be2ca9 Fix Hunyuan unet config detection for some models. (#6877) maedtb 2025-02-19 07:14:45 -05:00
  • 0d4d9222c6 Add early experimental SaveWEBM node to save .webm files. comfyanonymous 2025-02-19 07:11:49 -05:00
  • afc85cdeb6 Add Load Image Output node (#6790) bymyself 2025-02-18 15:53:01 -07:00
  • acc152b674 Support loading and using SkyReels-V1-Hunyuan-I2V (#6862) Jukka Seppänen 2025-02-19 00:06:54 +02:00
  • b07258cef2 Fix typo. comfyanonymous 2025-02-18 07:28:33 -05:00
  • 31e54b7052 Improve AMD arch detection. comfyanonymous 2025-02-17 04:53:40 -05:00
  • 8c0bae50c3 bf16 manual cast works on old AMD. comfyanonymous 2025-02-17 04:42:40 -05:00
  • 530412cb9d Refactor torch version checks to be more future proof. comfyanonymous 2025-02-17 04:36:45 -05:00
  • 61c8c70c6e support system prompt and cfg renorm in Lumina2 (#6795) Zhong-Yu Li 2025-02-17 07:15:43 +08:00
  • d0399f4343 Update frontend to v1.9.18 (#6828) Comfy Org PR Bot 2025-02-17 01:45:47 +09:00
  • e2919d38b4 Disable bf16 on AMD GPUs that don't support it. comfyanonymous 2025-02-16 05:45:08 -05:00
  • 93c8607d51 remove light_intensity and fov from load3d (#6742) Terry Jia 2025-02-15 15:34:36 -05:00
  • b3d6ae15b3 Update frontend to v1.9.17 (#6814) Comfy Org PR Bot 2025-02-15 18:32:47 +09:00
  • 2e21122aab Add a node to set the model compute dtype for debugging. comfyanonymous 2025-02-15 04:15:37 -05:00
  • 1cd6cd6080 Disable pytorch attention in VAE for AMD. comfyanonymous 2025-02-14 05:42:14 -05:00
  • d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7 comfyanonymous 2025-02-14 04:17:56 -05:00
  • 042a905c37 Open yaml files with utf-8 encoding for extra_model_paths.yaml (#6807) Robin Huang 2025-02-13 17:39:04 -08:00
  • 019c7029ea Add a way to set a different compute dtype for the model at runtime. comfyanonymous 2025-02-13 20:34:03 -05:00
  • 8773ccf74d Better memory estimation for ROCm devices that support mem efficient attention. comfyanonymous 2025-02-13 08:32:36 -05:00
  • 1d5d6586f3 Fix ruff. comfyanonymous 2025-02-12 06:49:16 -05:00
  • 35740259de Fix Ascend NPU bf16 inference error (mix_ascend_bf16_infer_err) (#6794) zhoufan2956 2025-02-12 19:48:11 +08:00
  • ab888e1e0b Add add_weight_wrapper function to model patcher. comfyanonymous 2025-02-12 05:49:00 -05:00
  • d9f0fcdb0c Cleanup. comfyanonymous 2025-02-11 17:17:03 -05:00
  • b124256817 Fix for running via DirectML (#6542) HishamC 2025-02-11 14:11:32 -08:00
  • af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. comfyanonymous 2025-02-11 08:31:46 -05:00
  • e57d2282d1 Fix incorrect Content-Type for WebP images (#6752) bananasss00 2025-02-11 12:48:35 +03:00
  • 4027466c80 Make lumina model work with any latent resolution. comfyanonymous 2025-02-10 00:24:20 -05:00
  • 095d867147 Remove useless function. comfyanonymous 2025-02-09 07:01:38 -05:00
  • caeb27c3a5 res_multistep: Fix cfgpp and add ancestral samplers (#6731) Pam 2025-02-09 05:39:58 +05:00
  • 3d06e1c555 Make error clearer to the user. comfyanonymous 2025-02-08 18:57:24 -05:00
  • 43a74c0de1 Allow FP16 accumulation with --fast (#6453) catboxanon 2025-02-08 17:00:56 -05:00
  • af93c8d1ee Document which text encoder to use for lumina 2. comfyanonymous 2025-02-08 06:54:03 -05:00
  • 832e3f5ca3 Fix another small bug in attention_bias redux (#6737) Raphael Walker 2025-02-07 20:44:43 +01:00
  • 079eccc92a Don't compress http response by default. comfyanonymous 2025-02-07 03:29:12 -05:00
  • b6951768c4 fix a bug in the attn_masked redux code when using weight=1.0 (#6721) Raphael Walker 2025-02-06 22:51:16 +01:00
  • fca304debf Update frontend to v1.8.14 (#6724) Comfy Org PR Bot 2025-02-07 00:43:10 +09:00
  • 14880e6dba Remove some useless code. comfyanonymous 2025-02-06 05:00:19 -05:00
  • f1059b0b82 Remove unused GET /files API endpoint (#6714) Chenlei Hu 2025-02-05 18:48:36 -05:00
  • debabccb84 Bump ComfyUI version to v0.3.14 comfyanonymous 2025-02-05 15:47:46 -05:00
  • 37cd448529 Set the shift for Lumina back to 6. comfyanonymous 2025-02-05 14:49:52 -05:00
  • 94f21f9301 Upcasting rope to fp32 seems to make no difference in this model. comfyanonymous 2025-02-05 04:32:47 -05:00
  • 60653004e5 Use regular numbers for rope in lumina model. comfyanonymous 2025-02-05 04:16:59 -05:00
  • a57d635c5f Fix lumina 2 batches. comfyanonymous 2025-02-04 21:48:11 -05:00
  • 016b219dcc Add Lumina Image 2.0 to Readme. comfyanonymous 2025-02-04 08:08:36 -05:00
  • 8ac2dddeed Lower the default shift of lumina to reduce artifacts. comfyanonymous 2025-02-04 06:50:37 -05:00
  • 3e880ac709 Fix on python 3.9 comfyanonymous 2025-02-04 04:20:56 -05:00
  • e5ea112a90 Support Lumina 2 model. comfyanonymous 2025-02-04 03:56:00 -05:00
  • 8d88bfaff9 allow searching for new .pt2 extension, which can contain AOTI compiled modules (#6689) Raphael Walker 2025-02-03 23:07:35 +01:00
  • ed4d92b721 Model merging nodes for cosmos. comfyanonymous 2025-02-03 03:31:39 -05:00
  • 932ae8d9ca Update frontend to v1.8.13 (#6682) Comfy Org PR Bot 2025-02-03 07:54:44 +09:00
  • 44e19a28d3 Use maximum negative value instead of -inf for masks in text encoders. comfyanonymous 2025-02-02 09:45:07 -05:00
  • 0a0df5f136 better guide message for sageattention (#6634) Dr.Lt.Data 2025-02-02 23:26:47 +09:00
  • 24d6871e47 add disable-compres-response-body cli args; add compress middleware; (#6672) KarryCharon 2025-02-02 22:24:55 +08:00
  • 9e1d301129 Only use stable cascade lora format with cascade model. comfyanonymous 2025-02-01 06:35:22 -05:00
  • 768e035868 Add node for preview 3d animation (#6594) Terry Jia 2025-01-31 13:09:07 -05:00
  • 669e0497ea Update frontend to v1.8.12 (#6662) Comfy Org PR Bot 2025-02-01 03:07:37 +09:00
  • 541dc08547 Update Readme. comfyanonymous 2025-01-31 08:35:48 -05:00
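
The --fast rework (cf0b549d48, 4d55f16ae8, 4dc6709307) turns a bare on/off flag into one that optionally takes a list of named optimizations. A minimal argparse sketch of that pattern, with hypothetical feature names; the real enum lives in ComfyUI's cli_args.py:

    import argparse
    import enum

    # Hypothetical feature names, for illustration only.
    class PerformanceFeature(enum.Enum):
        FP16_ACCUMULATION = "fp16_accumulation"
        FP8_MATRIX_MULT = "fp8_matrix_mult"

    parser = argparse.ArgumentParser()
    # nargs="*": a bare "--fast" yields [] (treat as "enable everything"),
    # while "--fast fp16_accumulation" enables just that feature.
    parser.add_argument("--fast", nargs="*", type=PerformanceFeature,
                        default=None, help="Enable untested speed optimizations.")

    args = parser.parse_args(["--fast", "fp16_accumulation"])
    print(args.fast)  # [<PerformanceFeature.FP16_ACCUMULATION: 'fp16_accumulation'>]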
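
Commits eb4543474b, 019c7029ea and 2e21122aab all separate the storage dtype from the compute dtype: weights can stay in fp8 while the arithmetic runs in fp16. A rough sketch of the idea, not ComfyUI's actual code path (assumes a PyTorch build with fp8 dtypes):

    import torch

    storage_dtype = torch.float8_e4m3fn  # how the weight sits in memory
    compute_dtype = torch.float16        # the "intermediate" dtype matmuls run in

    weight = torch.randn(64, 64).to(storage_dtype)
    x = torch.randn(1, 64, dtype=compute_dtype)

    # Cast at the point of use: memory stays fp8-sized, math happens in fp16.
    out = x @ weight.to(compute_dtype).t()
    print(out.dtype)  # torch.float16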
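
The fp16-accumulation commits (43a74c0de1, ace899e71a) rely on a matmul knob that only newer PyTorch builds expose, so it has to be probed defensively. A sketch, assuming the attribute may simply not exist on older builds:

    import torch

    def try_enable_fp16_accumulation() -> bool:
        try:
            # Only present on recent PyTorch builds: accumulate fp16 matmuls
            # in fp16 instead of fp32, trading a little precision for speed.
            torch.backends.cuda.matmul.allow_fp16_accumulation = True
            return True
        except Exception:
            return False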
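
44e19a28d3 swaps float("-inf") for the dtype's most negative finite value in text encoder attention masks; with -inf, a row that masks every position softmaxes to NaN. A small sketch of the pattern, not the actual ComfyUI helper:

    import torch

    def attn_bias(keep: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
        # keep: bool tensor, True where a position may be attended to.
        bias = torch.zeros(keep.shape, dtype=dtype)
        # finfo(dtype).min keeps fully-masked rows finite: softmax degrades
        # to a uniform distribution instead of producing NaNs.
        bias[~keep] = torch.finfo(dtype).min
        return bias

    keep = torch.tensor([[True, True, False],
                         [False, False, False]])
    print(torch.softmax(attn_bias(keep, torch.float32), dim=-1))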
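
Two of the extra_model_paths.yaml fixes (042a905c37, 29d4384a75) boil down to "read the file as utf-8 and normalize every path before registering it". A sketch assuming the usual layout (sections holding a base_path plus folder entries); load_extra_paths is a hypothetical stand-in, not ComfyUI's loader:

    import os
    import yaml  # PyYAML

    def load_extra_paths(config_file: str) -> dict:
        # Explicit utf-8 avoids decode errors on non-utf-8 system locales.
        with open(config_file, "r", encoding="utf-8") as f:
            config = yaml.safe_load(f)
        found = {}
        for section in config.values():
            base = section.pop("base_path", "")
            for kind, rel in section.items():
                # normpath collapses "a//b", "a/./b" and trailing slashes so
                # the same directory spelled two ways isn't registered twice.
                full = os.path.normpath(os.path.join(base, str(rel)))
                found.setdefault(kind, []).append(full)
        return found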