Commit Graph

  • dea899f221 Unload weights if vram usage goes up between runs. (#10690) comfyanonymous 2025-11-09 15:51:33 -08:00
  • e632e5de28 Add logging for model unloading. (#10692) comfyanonymous 2025-11-09 15:06:39 -08:00
  • 2abd2b5c20 Make ScaleROPE node work on Flux. (#10686) comfyanonymous 2025-11-08 12:52:02 -08:00
  • a1a70362ca Only unpin tensor if it was pinned by ComfyUI (#10677) comfyanonymous 2025-11-07 08:15:05 -08:00
  • cf97b033ee mm: guard against double pin and unpin explicitly (#10672) rattus 2025-11-07 12:20:48 +10:00
  • eb1c42f649 Tell users they need to upload their logs in bug reports. (#10671) comfyanonymous 2025-11-06 17:24:28 -08:00
  • e05c907126 Clarify release cycle. (#10667) comfyanonymous 2025-11-06 01:11:30 -08:00
  • 09dc24c8a9 Pinned mem also seems to work on AMD. (#10658) comfyanonymous 2025-11-05 16:11:15 -08:00
  • 1d69245981 Enable pinned memory by default on Nvidia. (#10656) comfyanonymous 2025-11-05 15:08:13 -08:00
  • 97f198e421 Fix qwen controlnet regression. (#10657) comfyanonymous 2025-11-05 15:07:35 -08:00
  • bda0eb2448 feat(API-nodes): move Rodin3D nodes to new client; removed old api client.py (#10645) Alexander Piskun 2025-11-05 12:16:00 +02:00
  • c4a6b389de Lower ltxv mem usage to what it was before previous pr. (#10643) comfyanonymous 2025-11-04 19:47:35 -08:00
  • 4cd881866b Use single apply_rope function across models (#10547) contentis 2025-11-05 02:10:11 +01:00
  • 265adad858 ComfyUI version v0.3.68 comfyanonymous 2025-11-04 19:42:23 -05:00
  • 7f3e4d486c Limit amount of pinned memory on windows to prevent issues. (#10638) comfyanonymous 2025-11-04 14:37:50 -08:00
  • a389ee01bb caching: Handle None outputs tuple case (#10637) rattus 2025-11-05 08:14:10 +10:00
  • 9c71a66790 chore: update workflow templates to v0.2.11 (#10634) ComfyUI Wiki 2025-11-05 02:51:53 +08:00
  • af4b7b5edb More fp8 torch.compile regressions fixed. (#10625) comfyanonymous 2025-11-03 19:14:20 -08:00
  • 0f4ef3afa0 This seems to slow things down slightly on Linux. (#10624) comfyanonymous 2025-11-03 18:47:14 -08:00
  • 6b88478f9f Bring back fp8 torch compile performance to what it should be. (#10622) comfyanonymous 2025-11-03 16:22:10 -08:00
  • e199c8cc67 Fixes (#10621) comfyanonymous 2025-11-03 14:58:24 -08:00
  • 0652cb8e2d Speed up torch.compile (#10620) comfyanonymous 2025-11-03 14:37:12 -08:00
  • 958a17199a People should update their pytorch versions. (#10618) comfyanonymous 2025-11-03 14:08:30 -08:00
  • e974e554ca chore: update embedded docs to v0.3.1 (#10614) ComfyUI Wiki 2025-11-04 02:59:44 +08:00
  • 4e2110c794 feat(Pika-API-nodes): use new API client (#10608) Alexander Piskun 2025-11-03 10:29:08 +02:00
  • e617cddf24 convert nodes_openai.py to V3 schema (#10604) Alexander Piskun 2025-11-03 10:28:13 +02:00
  • 1f3f7a2823 convert nodes_hypernetwork.py to V3 schema (#10583) Alexander Piskun 2025-11-03 10:21:47 +02:00
  • 88df172790 fix(caching): treat bytes as hashable (#10567) EverNebula 2025-11-03 16:16:40 +08:00
  • 6d6a18b0b7 fix(api-nodes-cloud): stop using sub-folder and absolute path for output of Rodin3D nodes (#10556) Alexander Piskun 2025-11-03 10:04:56 +02:00
  • 97ff9fae7e Clarify help text for --fast argument (#10609) comfyanonymous 2025-11-02 10:14:04 -08:00
  • 135fa49ec2 Small speed improvements to --async-offload (#10593) rattus 2025-11-02 08:48:53 +10:00
  • 44869ff786 Fix issue with pinned memory. (#10597) comfyanonymous 2025-11-01 14:25:59 -07:00
  • 20182a393f convert StabilityAI to use new API client (#10582) Alexander Piskun 2025-11-01 21:14:06 +02:00
  • 5f109fe6a0 added 12s-20s as available output durations for the LTXV API nodes (#10570) Alexander Piskun 2025-11-01 21:13:39 +02:00
  • c58c13b2ba Fix torch compile regression on fp8 ops. (#10580) comfyanonymous 2025-10-31 21:25:17 -07:00
  • 7f374e42c8 ScaleROPE now works on Lumina models. (#10578) comfyanonymous 2025-10-31 12:41:40 -07:00
  • 27d1bd8829 Fix rope scaling. (#10560) comfyanonymous 2025-10-30 19:51:58 -07:00
  • 614cf9805e Add a ScaleROPE node. Currently only works on WAN models. (#10559) comfyanonymous 2025-10-30 19:11:38 -07:00
  • 513b0c46fb Add RAM Pressure cache mode (#10454) rattus 2025-10-31 07:39:02 +10:00
  • dfac94695b fix img2img operation in Dall2 node (#10552) Alexander Piskun 2025-10-30 19:22:35 +02:00
  • 163b629c70 use new API client in Pixverse and Ideogram nodes (#10543) Alexander Piskun 2025-10-30 08:49:03 +02:00
  • 998bf60beb Add units/info for the numbers displayed on 'load completely' and 'load partially' log messages (#10538) Jedrzej Kosinski 2025-10-29 16:37:06 -07:00
  • 906c089957 Fix small performance regression with fp8 fast and scaled fp8. (#10537) comfyanonymous 2025-10-29 16:29:01 -07:00
  • 25de7b1bfa Try to fix slow load issue on low ram hardware with pinned mem. (#10536) comfyanonymous 2025-10-29 14:20:27 -07:00
  • ab7ab5be23 Fix Race condition in --async-offload that can cause corruption (#10501) rattus 2025-10-30 07:17:46 +10:00
  • ec4fc2a09a Fix case of weights not being unpinned. (#10533) comfyanonymous 2025-10-29 12:48:06 -07:00
  • 1a58087ac2 Reduce memory usage for fp8 scaled op. (#10531) comfyanonymous 2025-10-29 12:43:51 -07:00
  • 6c14f3afac use new API client in Luma and Minimax nodes (#10528) Alexander Piskun 2025-10-29 20:14:56 +02:00
  • e525673f72 Fix issue. (#10527) comfyanonymous 2025-10-28 21:37:00 -07:00
  • 3fa7a5c04a Speed up offloading using pinned memory. (#10526) comfyanonymous 2025-10-28 21:21:01 -07:00
  • 210f7a1ba5 convert nodes_recraft.py to V3 schema (#10507) Alexander Piskun 2025-10-28 23:38:05 +02:00
  • d202c2ba74 execution: Allow subgraph nodes to execute multiple times (#10499) rattus 2025-10-29 06:22:08 +10:00
  • 8817f8fc14 Mixed Precision Quantization System (#10498) contentis 2025-10-28 21:20:53 +01:00
  • 22e40d2ace Tell users to update their nvidia drivers if portable doesn't start. (#10518) comfyanonymous 2025-10-28 12:08:08 -07:00
  • 3bea4efc6b Tell users to update nvidia drivers if problem with portable. (#10510) comfyanonymous 2025-10-28 01:45:45 -07:00
  • 8cf2ba4ba6 Remove comfy api key from queue api. (#10502) comfyanonymous 2025-10-28 00:23:52 -07:00
  • b61a40cbc9 Bump stable portable to cu130 python 3.13.9 (#10508) comfyanonymous 2025-10-28 00:21:45 -07:00
  • f2bb3230b7 ComfyUI version v0.3.67 comfyanonymous 2025-10-28 03:03:59 -04:00
  • 614b8d3345 frontend bump to 1.28.8 (#10506) Jedrzej Kosinski 2025-10-28 00:01:13 -07:00
  • 6abc30aae9 Update template to 0.2.4 (#10505) ComfyUI Wiki 2025-10-28 13:56:30 +08:00
  • 55bad30375 feat(api-nodes): add LTXV API nodes (#10496) Alexander Piskun 2025-10-28 07:25:29 +02:00
  • c305deed56 Update template to 0.2.3 (#10503) ComfyUI Wiki 2025-10-28 13:24:16 +08:00
  • 601ee1775a Add a bat to run comfyui portable without api nodes. (#10504) comfyanonymous 2025-10-27 20:54:00 -07:00
  • c170fd2db5 Bump portable deps workflow to torch cu130 python 3.13.9 (#10493) comfyanonymous 2025-10-26 17:23:01 -07:00
  • 9d529e5308 fix(api-nodes): random issues on Windows by capturing general OSError for retries (#10486) Alexander Piskun 2025-10-26 08:51:06 +02:00
  • f6bbc1ac84 Fix mistake. (#10484) comfyanonymous 2025-10-25 20:07:29 -07:00
  • 098a352f13 Add warning for torch-directml usage (#10482) comfyanonymous 2025-10-25 17:05:22 -07:00
  • e86b79ab9e convert Gemini API nodes to V3 schema (#10476) Alexander Piskun 2025-10-26 00:35:30 +03:00
  • 426cde37f1 Remove useless function (#10472) comfyanonymous 2025-10-24 16:56:51 -07:00
  • dd5af0c587 convert Tripo API nodes to V3 schema (#10469) Alexander Piskun 2025-10-25 01:48:34 +03:00
  • 388b306a2b feat(api-nodes): network client v2: async ops, cancellation, downloads, refactor (#10390) Alexander Piskun 2025-10-24 08:37:16 +03:00
  • 24188b3141 Update template to 0.2.2 (#10461) ComfyUI Wiki 2025-10-24 13:36:30 +08:00
  • 1bcda6df98 WIP way to support multi multi dimensional latents. (#10456) comfyanonymous 2025-10-23 18:21:14 -07:00
  • a1864c01f2 Small readme improvement. (#10442) comfyanonymous 2025-10-22 14:26:22 -07:00
  • 4739d7717f execution: fold in dependency aware caching / Fix --cache-none with loops/lazy etc (Resubmit) (#10440) rattus 2025-10-23 05:49:05 +10:00
  • f13cff0be6 Add custom node published subgraphs endpoint (#10438) Jedrzej Kosinski 2025-10-21 20:16:16 -07:00
  • 9cdc64998f Only disable cudnn on newer AMD GPUs. (#10437) comfyanonymous 2025-10-21 16:15:23 -07:00
  • 560b1bdfca ComfyUI version v0.3.66 comfyanonymous 2025-10-20 15:44:38 -04:00
  • b7992f871a Revert "execution: fold in dependency aware caching / Fix --cache-none with l…" (#10422) comfyanonymous 2025-10-20 16:03:06 -07:00
  • 2c2aa409b0 Log message for cudnn disable on AMD. (#10418) comfyanonymous 2025-10-20 12:43:24 -07:00
  • a4787ac83b Update template to 0.2.1 (#10413) ComfyUI Wiki 2025-10-21 03:28:36 +08:00
  • b5c59b763c Deprecation warning on unused files (#10387) Christian Byrne 2025-10-19 13:05:46 -07:00
  • b4f30bd408 Pytorch is stupid. (#10398) comfyanonymous 2025-10-18 22:25:35 -07:00
  • dad076aee6 Speed up chroma radiance. (#10395) comfyanonymous 2025-10-18 20:19:52 -07:00
  • 0cf33953a7 Fix batch size above 1 giving bad output in chroma radiance. (#10394) comfyanonymous 2025-10-18 20:15:34 -07:00
  • 5b80addafd Turn off cuda malloc by default when --fast autotune is turned on. (#10393) comfyanonymous 2025-10-18 19:35:46 -07:00
  • 9da397ea2f Disable torch compiler for cast_bias_weight function (#10384) comfyanonymous 2025-10-17 17:03:28 -07:00
  • 92d97380bd Update Python 3.14 installation instructions (#10385) comfyanonymous 2025-10-17 15:22:59 -07:00
  • 99ce2a1f66 convert nodes_controlnet.py to V3 schema (#10202) Alexander Piskun 2025-10-18 00:13:05 +03:00
  • b1467da480 execution: fold in dependency aware caching / Fix --cache-none with loops/lazy etc (#10368) rattus128 2025-10-18 06:55:15 +10:00
  • d8d60b5609 Do batch_slice in EasyCache's apply_cache_diff (#10376) Jedrzej Kosinski 2025-10-16 21:39:37 -07:00
  • b1293d50ef workaround also works on cudnn 91200 (#10375) comfyanonymous 2025-10-16 16:59:56 -07:00
  • 19b466160c Workaround for nvidia issue where VAE uses 3x more memory on torch 2.9 (#10373) comfyanonymous 2025-10-16 15:16:03 -07:00
  • bc0ad9bb49 fix(api-nodes): remove "veo2" model from Veo3 node (#10372) Alexander Piskun 2025-10-16 20:12:50 +03:00
  • 4054b4bf38 feat: deprecated API alert (#10366) Rizumu Ayaka 2025-10-16 16:13:31 +08:00
  • 55ac7d333c Bump frontend to 1.28.7 (#10364) Arjan Singh 2025-10-15 20:30:39 -07:00
  • afa8a24fe1 refactor: Replace manual patches merging with merge_nested_dicts (#10360) Faych 2025-10-16 01:16:09 +01:00
  • 493b81e48f Fix order of inputs nested merge_nested_dicts (#10362) Jedrzej Kosinski 2025-10-15 16:47:26 -07:00
  • 6b035bfce2 Latest pytorch stable is cu130 (#10361) comfyanonymous 2025-10-15 15:48:12 -07:00
  • 74b7f0b04b feat(api-nodes): add Veo3.1 model (#10357) Alexander Piskun 2025-10-16 01:41:45 +03:00