Mirror of https://github.com/NousResearch/hermes-agent.git (synced 2026-05-01 08:21:50 +08:00)

**optional-skills/creative/comfyui/SKILL.md** (new file, +634 lines)

---
name: comfyui
description: "Generate images, video, and audio with ComfyUI — install, launch, manage nodes/models, run workflows with parameter injection. Uses the official comfy-cli for lifecycle and direct REST API for execution."
version: 4.0.0
requires: ComfyUI (local or Comfy Cloud); comfy-cli (pip install comfy-cli)
author: [kshitijk4poor, alt-glitch]
license: MIT
platforms: [macos, linux, windows]
prerequisites:
  commands: ["python3"]
  setup:
    help: "pip install comfy-cli && comfy install. Cloud: get API key at platform.comfy.org"
metadata:
  hermes:
    tags:
      - comfyui
      - image-generation
      - stable-diffusion
      - flux
      - creative
      - generative-ai
      - video-generation
    related_skills: [stable-diffusion-image-generation, image_gen]
    category: creative
---

# ComfyUI

Generate images, video, and audio through ComfyUI, using the official `comfy-cli` for setup/management and direct REST API calls for workflow execution.

**Reference files in this skill:**

- `references/official-cli.md` — comfy-cli command reference (install, launch, nodes, models)
- `references/rest-api.md` — ComfyUI REST API endpoints (local + cloud)
- `references/workflow-format.md` — workflow JSON format, common node types, parameter mapping

**Scripts in this skill:**

- `scripts/comfyui_setup.sh` — full setup automation (install + launch + verify)
- `scripts/extract_schema.py` — reads workflow JSON, outputs which parameters are controllable
- `scripts/run_workflow.py` — injects user args, submits the workflow, monitors progress, downloads outputs
- `scripts/check_deps.py` — checks whether required custom nodes and models are installed
## When to Use

- User asks to generate images with Stable Diffusion, SDXL, Flux, or other diffusion models
- User wants to run a specific ComfyUI workflow
- User wants to chain generative steps (txt2img → upscale → face restore)
- User needs ControlNet, inpainting, img2img, or other advanced pipelines
- User asks to manage the ComfyUI queue, check models, or install custom nodes
- User wants video/audio generation via AnimateDiff, Hunyuan, AudioCraft, etc.
## Architecture: Two Layers

```
┌─────────────────────────────────────────────────────┐
│  Layer 1: comfy-cli (official)                      │
│  Setup, lifecycle, nodes, models                    │
│  comfy install / launch / stop / node / model       │
└─────────────────────────┬───────────────────────────┘
                          │
┌─────────────────────────▼───────────────────────────┐
│  Layer 2: REST API + skill scripts                  │
│  Workflow execution, param injection, monitoring    │
│  POST /api/prompt, GET /api/view, WebSocket         │
│  scripts/run_workflow.py, extract_schema.py         │
└─────────────────────────────────────────────────────┘
```

**Why two layers?** The official CLI handles installation and server management excellently but has minimal workflow execution support (just raw file submission: no param injection, no structured output). The REST API fills that gap — the scripts in this skill handle the param injection, execution monitoring, and output download that the CLI doesn't do.
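The layer-2 idea can be sketched in a few lines. This is an illustrative sketch, not the skill's actual `run_workflow.py`: `inject_params` and `submit` are hypothetical names, and the schema shape mirrors the `extract_schema.py` output shown later; `POST /prompt` returning a `prompt_id` is the standard ComfyUI API.

```python
import json
import urllib.request

def inject_params(workflow: dict, schema: dict, args: dict) -> dict:
    """Patch user args into an API-format workflow using an
    extract_schema-style parameter map (sketch)."""
    patched = json.loads(json.dumps(workflow))  # cheap deep copy
    for name, value in args.items():
        spec = schema[name]  # e.g. {"node_id": "3", "field": "seed"}
        patched[spec["node_id"]]["inputs"][spec["field"]] = value
    return patched

def submit(workflow: dict, host: str = "http://127.0.0.1:8188") -> str:
    """POST the patched workflow to /prompt; returns the prompt_id."""
    req = urllib.request.Request(
        f"{host}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]
```

The skill's scripts add monitoring and output download on top of this same submit step.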
## Quick Start

### Detect Environment

```bash
# What's available?
command -v comfy >/dev/null 2>&1 && echo "comfy-cli: installed"
curl -s http://127.0.0.1:8188/system_stats 2>/dev/null && echo "server: running"
```

If nothing is installed, go to **Setup & Onboarding** below.
If the server is already running, skip to **Core Workflow**.
## Core Workflow

### Step 1: Get a Workflow

Users provide workflow JSON files. These come from:

- the ComfyUI web editor → "Save (API Format)" button
- community downloads (Civitai, Reddit, Discord)
- the `scripts/` directory of this skill (example workflows)

**The workflow must be in API format** (node IDs as keys, each with a `class_type`). If the user has editor format (top-level `nodes[]` and `links[]`), they need to re-export using "Save (API Format)" in the ComfyUI web editor.
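A quick programmatic version of this format check (an illustrative helper, not one of the skill's scripts):

```python
def is_api_format(workflow: dict) -> bool:
    """Heuristic: API format maps node-id keys to objects that carry a
    class_type; editor format has top-level nodes/links arrays."""
    if "nodes" in workflow and "links" in workflow:
        return False  # editor export
    return bool(workflow) and all(
        isinstance(node, dict) and "class_type" in node
        for node in workflow.values()
    )
```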
### Step 2: Understand What's Controllable

```bash
python3 scripts/extract_schema.py workflow_api.json
```

Output (JSON):

```json
{
  "parameters": {
    "prompt": {"node_id": "6", "field": "text", "type": "string", "value": "a cat"},
    "negative_prompt": {"node_id": "7", "field": "text", "type": "string", "value": "bad quality"},
    "seed": {"node_id": "3", "field": "seed", "type": "int", "value": 42},
    "steps": {"node_id": "3", "field": "steps", "type": "int", "value": 20},
    "width": {"node_id": "5", "field": "width", "type": "int", "value": 512},
    "height": {"node_id": "5", "field": "height", "type": "int", "value": 512}
  }
}
```
### Step 3: Run with Parameters

**Local:**

```bash
python3 scripts/run_workflow.py \
  --workflow workflow_api.json \
  --args '{"prompt": "a beautiful sunset over mountains", "seed": 123, "steps": 30}' \
  --output-dir ./outputs
```

**Cloud:**

```bash
python3 scripts/run_workflow.py \
  --workflow workflow_api.json \
  --args '{"prompt": "a beautiful sunset", "seed": 123}' \
  --host https://cloud.comfy.org \
  --api-key "$COMFY_CLOUD_API_KEY" \
  --output-dir ./outputs
```
### Step 4: Present Results

The script outputs JSON with file paths:

```json
{
  "status": "success",
  "outputs": [
    {"file": "./outputs/ComfyUI_00001_.png", "node_id": "9", "type": "image"}
  ]
}
```

Show images to the user via `vision_analyze`, or return the file path directly.
## Decision Tree

| User says | Tool | Command |
|-----------|------|---------|
| "install ComfyUI" | comfy-cli | `comfy install` |
| "start ComfyUI" | comfy-cli | `comfy launch --background` |
| "stop ComfyUI" | comfy-cli | `comfy stop` |
| "install X node" | comfy-cli | `comfy node install <name>` |
| "download X model" | comfy-cli | `comfy model download --url <url>` |
| "list installed models" | comfy-cli | `comfy model list` |
| "list installed nodes" | comfy-cli | `comfy node show installed` |
| "generate an image" | script | `run_workflow.py --args '{"prompt": "..."}'` |
| "use this image" (img2img) | REST | upload image, then run_workflow.py |
| "what can I change in this workflow?" | script | `extract_schema.py workflow.json` |
| "check if workflow deps are met" | script | `check_deps.py workflow.json` |
| "what's in the queue?" | REST | `curl http://HOST:8188/queue` |
| "cancel that" | REST | `curl -X POST http://HOST:8188/interrupt` |
| "free GPU memory" | REST | `curl -X POST http://HOST:8188/free` |
## Setup & Onboarding

When a user asks to set up ComfyUI, **detect their system first** before asking questions. Run the checks below, then recommend the best path based on what you find.

**Official docs:** https://docs.comfy.org/installation
**CLI docs:** https://docs.comfy.org/comfy-cli/getting-started
**Cloud docs:** https://docs.comfy.org/get_started/cloud
### Step 1: Detect System & Capabilities

Run these checks to understand what the user has:

```bash
# OS detection
uname -s   # Darwin = macOS, Linux = Linux
# (on Windows: check via platform in Python)

# GPU detection
# macOS (Apple Silicon):
sysctl -n machdep.cpu.brand_string 2>/dev/null
system_profiler SPDisplaysDataType 2>/dev/null | grep "Chip\|Metal\|VRAM\|Model"

# Linux (NVIDIA):
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader 2>/dev/null

# Linux (AMD):
rocm-smi --showproductname 2>/dev/null || lspci | grep -i "VGA\|3D" 2>/dev/null

# RAM
# macOS:
sysctl -n hw.memsize 2>/dev/null | awk '{print $0/1024/1024/1024 " GB"}'
# Linux:
free -h | grep Mem | awk '{print $2}'

# Disk space (need ~15GB minimum, more for models)
df -h . | tail -1 | awk '{print $4 " available"}'

# Python version
python3 --version 2>/dev/null

# Is comfy-cli already installed?
command -v comfy >/dev/null 2>&1 && comfy which 2>/dev/null

# Is a server already running?
curl -s http://127.0.0.1:8188/system_stats 2>/dev/null
```
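If Python is more convenient than shell, the same facts can be gathered in one call. `detect()` is a hypothetical helper shown for illustration; it uses only the standard library:

```python
import platform
import shutil
import subprocess

def detect() -> dict:
    """Collect the same environment facts as the shell checks above."""
    info = {
        "os": platform.system(),  # 'Darwin', 'Linux', or 'Windows'
        "python": platform.python_version(),
        "comfy_cli": shutil.which("comfy") is not None,
        "gpu": None,
    }
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        info["gpu"] = out.stdout.strip() or None
    return info
```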
### Step 2: Recommend Installation Path

Based on detection results, recommend ONE path:

| System detected | Recommendation |
|-----------------|----------------|
| **macOS + Apple Silicon (M1/M2/M3/M4)** | ComfyUI Desktop for Mac — one-click install, Metal acceleration. Or `comfy-cli` if they prefer the terminal. |
| **Windows + NVIDIA GPU (≥8GB VRAM)** | ComfyUI Desktop for Windows — easiest path. Or the Portable build for power users. |
| **Windows + NVIDIA GPU (<8GB VRAM)** | Portable build with the `--lowvram` flag. Or Cloud if <4GB. |
| **Linux + NVIDIA GPU** | `comfy-cli` — best for headless/server setups. `comfy install --nvidia` |
| **Linux + AMD GPU (ROCm)** | `comfy-cli` with `comfy install --amd`. Only Linux is supported for AMD. |
| **Any system + no GPU / weak GPU** | **Comfy Cloud** — no local install, runs on RTX 6000 Pro. Just needs an API key. |
| **Intel Arc GPU** | Manual install with `torch.xpu`. Not supported by comfy-cli. |
| **Already has ComfyUI running** | Skip install — go straight to workflow execution. |
### Hardware Requirements

| Tier | VRAM | What it can run |
|------|------|-----------------|
| Minimum | 4GB | SD 1.5 with `--lowvram`, slow |
| Recommended | 8GB | SD 1.5, some SDXL with optimizations |
| Comfortable | 12GB+ | SDXL, Flux, most workflows |
| Ideal | 16-24GB+ | Everything, including video generation |
| Cloud | N/A | RTX 6000 Pro (48GB) — runs anything |

**RAM:** 16GB minimum, 32GB recommended (models load into system RAM first).
**Disk:** ~15GB for ComfyUI + Python. Each model is 2-7GB. Plan for 50-100GB total.
### Choosing an Installation Path

| Situation | Recommended Path |
|-----------|------------------|
| No GPU / just want to try it | **Comfy Cloud** (zero setup) |
| Windows + NVIDIA GPU + non-technical | **ComfyUI Desktop** (one-click installer) |
| Windows + NVIDIA GPU + technical | **Portable build** or **comfy-cli** |
| Linux + any GPU | **comfy-cli** (easiest) or manual install |
| macOS + Apple Silicon | **ComfyUI Desktop** (macOS) or **comfy-cli** |
| Headless / server / CI | **comfy-cli** |
---

### Path A: Comfy Cloud (No Local Install)

For users without a capable GPU, or who want zero setup. Powered by RTX 6000 Pro GPUs, with all models pre-installed.

**Docs:** https://docs.comfy.org/get_started/cloud

1. Go to https://comfy.org/cloud and sign up
2. Get an API key at https://platform.comfy.org/login
   - Click `+ New` in the API Keys section → Generate
   - Save it immediately (it is only visible once)
3. Set the key:
   ```bash
   export COMFY_CLOUD_API_KEY="comfyui-xxxxxxxxxxxx"
   ```
4. Run workflows via the script or web UI:
   ```bash
   python3 scripts/run_workflow.py \
     --workflow workflow_api.json \
     --args '{"prompt": "a cat"}' \
     --host https://cloud.comfy.org \
     --api-key "$COMFY_CLOUD_API_KEY" \
     --output-dir ./outputs
   ```

**Pricing:** https://www.comfy.org/cloud/pricing
Subscription required. Concurrent job limits: Free/Standard: 1, Creator: 3, Pro: 5.
---

### Path B: ComfyUI Desktop (Windows/macOS)

One-click installer for non-technical users. Currently in beta.

**Docs:** https://docs.comfy.org/installation/desktop

- **Windows (NVIDIA):** https://download.comfy.org/windows/nsis/x64
- **macOS (Apple Silicon):** available from https://comfy.org (download page)

Steps:

1. Download and run the installer
2. Select GPU type (NVIDIA recommended, or CPU mode)
3. Choose the install location (SSD recommended, ~15GB needed)
4. Optionally migrate from an existing ComfyUI Portable install
5. Desktop launches automatically — the web UI opens in the browser

Desktop manages its own Python environment. For CLI access to the bundled env:

```bash
cd <install_dir>/ComfyUI
.venv/Scripts/activate       # Windows
source .venv/bin/activate    # macOS
# or use the built-in terminal in the Desktop UI
```

**Limitations:** Desktop uses stable releases (it may lag behind the latest).
Linux is not supported for Desktop — use comfy-cli or a manual install.
---

### Path C: ComfyUI Portable (Windows Only)

Standalone package with embedded Python. Extract and run — no install.

**Docs:** https://docs.comfy.org/installation/comfyui_portable_windows

1. Download from https://github.com/comfyanonymous/ComfyUI/releases
   - Standard: Python 3.13 + CUDA 13.0 (modern NVIDIA GPUs)
   - Alt: PyTorch CUDA 12.6 + Python 3.12 (NVIDIA 10-series and older)
   - AMD (experimental)
2. Extract with 7-Zip
3. Run `run_nvidia_gpu.bat` (or `run_cpu.bat`)
4. Wait for "To see the GUI go to: http://127.0.0.1:8188"

To update, run `update/update_comfyui.bat` (latest commit) or `update/update_comfyui_stable.bat` (latest stable release).
---

### Path D: comfy-cli (All Platforms — Recommended for Agents)

The official CLI is the best path for headless/automated setups.

**Docs:** https://docs.comfy.org/comfy-cli/getting-started
**Repo:** https://github.com/Comfy-Org/comfy-cli

#### Prerequisites

- Python 3.10+ (3.13 recommended)
- pip (or conda/uv)
- GPU drivers installed (CUDA for NVIDIA, ROCm for AMD)

#### Install comfy-cli

```bash
pip install comfy-cli
# or
uvx --from comfy-cli comfy --help
```

Disable analytics (avoids an interactive prompt):

```bash
comfy --skip-prompt tracking disable
```

#### Install ComfyUI

```bash
# Interactive (prompts for GPU type)
comfy install

# Non-interactive variants:
comfy --skip-prompt install --nvidia    # NVIDIA (CUDA)
comfy --skip-prompt install --amd       # AMD (ROCm, Linux)
comfy --skip-prompt install --m-series  # Apple Silicon (MPS)
comfy --skip-prompt install --cpu       # CPU only (slow)

# With faster dependency resolution:
comfy --skip-prompt install --nvidia --fast-deps
```

Default location: `~/comfy/ComfyUI` (Linux), `~/Documents/comfy/ComfyUI` (macOS/Windows).
Override with: `comfy --workspace /custom/path install`

#### Launch Server

```bash
comfy launch --background          # background daemon on :8188
comfy launch                       # foreground (see logs)
comfy launch -- --listen 0.0.0.0   # accessible on LAN
comfy launch -- --port 8190        # custom port
comfy launch -- --lowvram          # low-VRAM mode (6GB cards)
```

Verify the server is running:

```bash
curl -s http://127.0.0.1:8188/system_stats | python3 -m json.tool
```

Stop a background server:

```bash
comfy stop
```
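After `comfy launch --background`, automation usually needs to wait until the API answers before submitting work. A minimal polling helper (illustrative; the injectable `probe` argument exists only so the loop can be exercised without a live server):

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url: str = "http://127.0.0.1:8188/system_stats",
                    attempts: int = 30, delay: float = 2.0,
                    probe=None) -> bool:
    """Return True once the ComfyUI API responds, False after `attempts` tries."""
    def default_probe() -> bool:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except (urllib.error.URLError, OSError):
            return False

    check = probe or default_probe
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```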
---

### Path E: Manual Install (Advanced / All Hardware)

For full control, or for unsupported hardware (Ascend NPU, Cambricon MLU, Intel Arc).

**Docs:** https://docs.comfy.org/installation/manual_install
**GitHub:** https://github.com/comfyanonymous/ComfyUI

```bash
# 1. Create environment
conda create -n comfyenv python=3.13
conda activate comfyenv

# 2. Clone
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# 3. Install PyTorch (pick your hardware)
# NVIDIA:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
# AMD (ROCm 6.4):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4
# Apple Silicon:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# Intel Arc:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/xpu
# CPU only:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu

# 4. Install ComfyUI deps
pip install -r requirements.txt

# 5. Run
python main.py
# With options: python main.py --listen 0.0.0.0 --port 8188
```
---

### Post-Install: Download Models

ComfyUI needs at least one checkpoint model to generate images.

**Using comfy-cli:**

```bash
# SDXL (general purpose, ~6.5GB)
comfy model download \
  --url "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors" \
  --relative-path models/checkpoints

# SD 1.5 (lighter, ~4GB, good for low VRAM)
comfy model download \
  --url "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" \
  --relative-path models/checkpoints

# From CivitAI (may need an API token):
comfy model download \
  --url "https://civitai.com/api/download/models/128713" \
  --relative-path models/checkpoints \
  --set-civitai-api-token "YOUR_TOKEN"

# LoRA adapters:
comfy model download --url "<URL>" --relative-path models/loras
```

**Manual download:** place `.safetensors` / `.ckpt` files directly into the `ComfyUI/models/checkpoints/` directory (or `loras/`, `vae/`, etc.).

List installed models:

```bash
comfy model list
```
---

### Post-Install: Install Custom Nodes

Custom nodes extend ComfyUI's capabilities (upscaling, video, ControlNet, etc.).

```bash
comfy node install comfyui-impact-pack          # popular utility pack
comfy node install comfyui-animatediff-evolved  # video generation
comfy node install comfyui-controlnet-aux       # ControlNet preprocessors
comfy node install comfyui-essentials           # common helpers
comfy node update all                           # update all nodes
```

Check what's installed:

```bash
comfy node show installed
```

Install deps for a specific workflow:

```bash
comfy node install-deps --workflow=workflow_api.json
```
---

### Post-Install: Verify Setup

```bash
# Check the server is responsive
curl -s http://127.0.0.1:8188/system_stats | python3 -m json.tool

# Check a workflow's dependencies
python3 scripts/check_deps.py workflow_api.json --host 127.0.0.1 --port 8188

# Test a generation
python3 scripts/run_workflow.py \
  --workflow workflow_api.json \
  --args '{"prompt": "test image, high quality"}' \
  --output-dir ./test-outputs
```
## Image Upload (img2img / Inpainting)

Upload files directly via REST:

```bash
# Upload input image
curl -X POST "http://127.0.0.1:8188/upload/image" \
  -F "image=@photo.png" -F "type=input" -F "overwrite=true"
# Returns: {"name": "photo.png", "subfolder": "", "type": "input"}

# Upload mask for inpainting
curl -X POST "http://127.0.0.1:8188/upload/mask" \
  -F "image=@mask.png" -F "type=input" \
  -F 'original_ref={"filename":"photo.png","subfolder":"","type":"input"}'
```

Then reference the uploaded filename in workflow args:

```bash
python3 scripts/run_workflow.py --workflow inpaint.json \
  --args '{"image": "photo.png", "mask": "mask.png", "prompt": "fill with flowers"}'
```
## Cloud Execution

Base URL: `https://cloud.comfy.org`
Auth: `X-API-Key` header

```bash
# Submit workflow
python3 scripts/run_workflow.py \
  --workflow workflow_api.json \
  --args '{"prompt": "cyberpunk city"}' \
  --host https://cloud.comfy.org \
  --api-key "$COMFY_CLOUD_API_KEY" \
  --output-dir ./outputs \
  --timeout 300

# Upload image for cloud workflows
curl -X POST "https://cloud.comfy.org/api/upload/image" \
  -H "X-API-Key: $COMFY_CLOUD_API_KEY" \
  -F "image=@input.png" -F "type=input" -F "overwrite=true"
```

Concurrent job limits:

| Tier | Concurrent Jobs |
|------|-----------------|
| Free/Standard | 1 |
| Creator | 3 |
| Pro | 5 |

Extra submissions queue automatically.
## Queue & System Management

```bash
# Check queue
curl -s http://127.0.0.1:8188/queue | python3 -m json.tool

# Clear pending queue
curl -X POST http://127.0.0.1:8188/queue -d '{"clear": true}'

# Cancel running job
curl -X POST http://127.0.0.1:8188/interrupt

# Free GPU memory (unload all models)
curl -X POST http://127.0.0.1:8188/free -H "Content-Type: application/json" \
  -d '{"unload_models": true, "free_memory": true}'

# System stats (VRAM, RAM, GPU info)
curl -s http://127.0.0.1:8188/system_stats | python3 -m json.tool
```
## Pitfalls

1. **API format required** — `comfy run` and the scripts only accept API-format workflow JSON. If the user has editor format (from "Save", not "Save (API Format)"), they need to re-export. Check: API format has `class_type` in each node object; editor format has top-level `nodes` and `links` arrays.

2. **Server must be running** — all execution requires a live server. `comfy launch --background` starts one. Check with `curl http://127.0.0.1:8188/system_stats`.

3. **Model names are exact** — case-sensitive, including the file extension. Use `comfy model list` to discover what's installed.

4. **Missing custom nodes** — "class_type not found" means a required node isn't installed. Run `check_deps.py` to find what's missing, then `comfy node install <name>`.

5. **Working directory** — `comfy-cli` auto-detects the ComfyUI workspace. If commands fail with "no workspace found", use `comfy --workspace /path/to/ComfyUI <command>` or `comfy set-default /path/to/ComfyUI`.

6. **Cloud vs local output download** — Cloud `/api/view` returns a 302 redirect to a signed URL. Always follow redirects (`curl -L`). The `run_workflow.py` script handles this automatically.

7. **Timeout for video/audio** — long generations (video, high step counts) can take minutes. Pass `--timeout 600` to `run_workflow.py`; the default is 120 seconds.

8. **Tracking prompt** — the first run of `comfy` may prompt for analytics tracking consent. Use `comfy --skip-prompt tracking disable` to skip it non-interactively.

9. **comfy-cli invocation via uvx** — if comfy-cli is not installed globally, invoke it with `uvx --from comfy-cli comfy <command>`. All examples in this skill use bare `comfy`; prepend `uvx --from comfy-cli` if needed.
## Verification Checklist

- [ ] `comfy` is available on PATH (or `uvx --from comfy-cli comfy --help` works)
- [ ] `curl http://127.0.0.1:8188/system_stats` returns JSON
- [ ] `comfy model list` shows at least one checkpoint
- [ ] Workflow JSON is in API format (has `class_type` keys)
- [ ] `check_deps.py` reports no missing nodes/models
- [ ] A test run completes and outputs are saved

---

**optional-skills/creative/comfyui/references/official-cli.md** (new file, +268 lines)

# comfy-cli Command Reference

Official CLI from [Comfy-Org/comfy-cli](https://github.com/Comfy-Org/comfy-cli).
Docs: https://docs.comfy.org/comfy-cli/getting-started

## Installation

```bash
pip install comfy-cli
# or
uvx --from comfy-cli comfy --help
```

The first run may prompt for analytics. Disable non-interactively:

```bash
comfy --skip-prompt tracking disable
```
## Global Options

| Option | Description |
|--------|-------------|
| `--workspace <path>` | Target a specific ComfyUI workspace |
| `--recent` | Use the most recently used workspace |
| `--here` | Use the current directory as the workspace |
| `--skip-prompt` | No interactive prompts (use defaults) |
| `-v` / `--version` | Print version |

Workspace resolution priority:

1. `--workspace` (explicit path)
2. `--recent` (from config)
3. `--here` (cwd)
4. `comfy set-default` path
5. Most recently used
6. `~/comfy/ComfyUI` (Linux) or `~/Documents/comfy/ComfyUI` (macOS)
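The priority list can be modeled as a small function (an illustrative model of the behavior, not comfy-cli's actual implementation):

```python
import os

def resolve_workspace(cli_workspace=None, use_recent=False, use_here=False,
                      recent=None, default=None):
    """Walk the resolution priority list from top to bottom."""
    if cli_workspace:
        return cli_workspace     # 1. --workspace (explicit path)
    if use_recent and recent:
        return recent            # 2. --recent (from config)
    if use_here:
        return os.getcwd()       # 3. --here (cwd)
    if default:
        return default           # 4. comfy set-default path
    if recent:
        return recent            # 5. most recently used
    # 6. platform default
    return os.path.join(os.path.expanduser("~"), "comfy", "ComfyUI")
```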
## Commands

### `comfy install`

Download and install ComfyUI + ComfyUI-Manager.

```bash
comfy install                 # interactive GPU selection
comfy install --nvidia        # NVIDIA (CUDA)
comfy install --amd           # AMD (ROCm)
comfy install --m-series      # Apple Silicon (MPS)
comfy install --cpu           # CPU only
comfy install --fast-deps     # use uv for faster deps
comfy install --skip-manager  # skip ComfyUI-Manager
```

| Option | Description |
|--------|-------------|
| `--nvidia` | NVIDIA GPU |
| `--amd` | AMD GPU (ROCm) |
| `--m-series` | Apple Silicon |
| `--cpu` | CPU only |
| `--cuda-version` | 11.8, 12.1, 12.4, 12.6, 12.8, 12.9, 13.0 |
| `--rocm-version` | 6.1, 6.2, 6.3, 7.0, 7.1 |
| `--fast-deps` | Use uv for dependency resolution |
| `--skip-manager` | Don't install ComfyUI-Manager |
| `--skip-torch-or-directml` | Skip the PyTorch install |
| `--version <ver>` | Specific ComfyUI version (e.g. `0.2.0`, `latest`, `nightly`) |
| `--commit <hash>` | Install a specific commit |
| `--pr "#1234"` | Install from a PR |
| `--restore` | Restore deps for an existing install |

Default location: `~/comfy/ComfyUI` (Linux), `~/Documents/comfy/ComfyUI` (macOS/Windows).
### `comfy launch`

Start the ComfyUI server.

```bash
comfy launch                       # foreground on :8188
comfy launch --background          # background daemon
comfy launch -- --listen 0.0.0.0   # listen on all interfaces
comfy launch -- --port 8190        # custom port
comfy launch -- --cpu              # force CPU mode
comfy launch --background -- --listen 0.0.0.0 --port 8190
```

| Option | Description |
|--------|-------------|
| `--background` | Run as a background daemon |
| `--frontend-pr "#456"` | Test a frontend PR |
| Extra args after `--` | Passed directly to ComfyUI's `main.py` |

Common extra args: `--listen`, `--port`, `--cpu`, `--lowvram`, `--novram`, `--fp16-vae`, `--force-fp32`.

### `comfy stop`

Stop a background ComfyUI instance.

```bash
comfy stop
```
### `comfy run`

Execute a raw workflow JSON file against a running server.

```bash
comfy run --workflow workflow_api.json
comfy run --workflow workflow_api.json --host 10.0.0.5 --port 8188
comfy run --workflow workflow_api.json --timeout 300 --wait
```

| Option | Description |
|--------|-------------|
| `--workflow` | Path to API-format workflow JSON (required) |
| `--host` | Server hostname (default: 127.0.0.1) |
| `--port` | Server port (default: 8188) |
| `--timeout` | Seconds to wait (default: 30) |
| `--wait/--no-wait` | Wait for completion (default: wait) |
| `--verbose` | Show per-node execution details |

**Limitations:** no parameter injection, no structured output, no image download. For agent use, prefer `scripts/run_workflow.py`, which adds those capabilities.
### `comfy which`

Show which ComfyUI workspace is currently targeted.

```bash
comfy which
comfy --recent which
```

### `comfy set-default`

Set the default workspace path.

```bash
comfy set-default /path/to/ComfyUI
comfy set-default /path/to/ComfyUI --launch-extras="--listen 0.0.0.0"
```

### `comfy update`

Update ComfyUI or custom nodes.

```bash
comfy update           # update ComfyUI core
comfy node update all  # update all custom nodes
```

---
## `comfy node` — Custom Node Management

All node operations use ComfyUI-Manager (cm-cli) under the hood.

```bash
comfy node show installed         # list installed nodes
comfy node show enabled           # list enabled nodes
comfy node show all               # all available nodes
comfy node simple-show installed  # compact list

comfy node install comfyui-impact-pack  # install by name
comfy node install <name> --uv-compile  # with unified dep resolution (Manager v4.1+)
comfy node uninstall <name>             # remove
comfy node update <name>                # update one
comfy node update all                   # update all
comfy node enable <name>                # enable a disabled node
comfy node disable <name>               # disable without uninstalling
comfy node fix <name>                   # fix broken dependencies

comfy node install-deps --workflow=workflow.json                  # install all deps a workflow needs
comfy node deps-in-workflow --workflow=w.json --output=deps.json  # extract the dep list

comfy node save-snapshot            # save current state
comfy node restore-snapshot <file>  # restore from snapshot

comfy node bisect start  # find a culprit node (binary search)
comfy node bisect good   # current set is fine
comfy node bisect bad    # problem is in the current set
comfy node bisect reset  # abort bisect
```

### Dependency Resolution Options

| Flag | Description |
|------|-------------|
| `--fast-deps` | comfy-cli built-in uv resolver |
| `--uv-compile` | ComfyUI-Manager v4.1+ unified resolver (recommended) |
| `--no-deps` | Skip dep installation |

Set uv-compile as the default: `comfy manager uv-compile-default true`

---
## `comfy model` — Model Management
|
||||
|
||||
```bash
|
||||
comfy model list # list all downloaded models
|
||||
comfy model list --relative-path models/checkpoints # specific folder
|
||||
|
||||
comfy model download --url <URL> # download model
|
||||
comfy model download --url <URL> --relative-path models/loras
|
||||
comfy model download --url <URL> --filename custom_name.safetensors
|
||||
|
||||
comfy model remove # interactive removal
|
||||
comfy model remove --relative-path models/checkpoints --model-names "model.safetensors"
|
||||
```
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--url` | Download URL (CivitAI, HuggingFace, direct) |
|
||||
| `--relative-path` | Subdirectory under workspace (e.g. `models/checkpoints`) |
|
||||
| `--filename` | Custom filename to save as |
|
||||
| `--set-civitai-api-token` | Set CivitAI API token |
|
||||
| `--set-hf-api-token` | Set HuggingFace API token |
|
||||
| `--downloader` | `httpx` (default) or `aria2` |
|
||||
|
||||
Model directory structure:
|
||||
```
|
||||
ComfyUI/models/
|
||||
├── checkpoints/ # Full model files (.safetensors, .ckpt)
|
||||
├── loras/ # LoRA adapters
|
||||
├── vae/ # VAE models
|
||||
├── controlnet/ # ControlNet models
|
||||
├── clip/ # CLIP text encoders
|
||||
├── clip_vision/ # CLIP vision encoders
|
||||
├── upscale_models/ # Upscaler models (ESRGAN, etc.)
|
||||
├── embeddings/ # Textual inversion embeddings
|
||||
├── unet/ # UNet models
|
||||
└── diffusion_models/ # Diffusion model files
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## `comfy manager` — ComfyUI-Manager Settings
|
||||
|
||||
```bash
|
||||
comfy manager disable # disable Manager completely
|
||||
comfy manager enable-gui # enable new GUI
|
||||
comfy manager disable-gui # disable GUI (API-only)
|
||||
comfy manager enable-legacy-gui # legacy GUI
|
||||
comfy manager uv-compile-default true # make --uv-compile the default
|
||||
comfy manager clear # clear startup action
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## `comfy pr-cache` — Frontend PR Cache
|
||||
|
||||
```bash
|
||||
comfy pr-cache list # list cached PR builds
|
||||
comfy pr-cache clean # clean all
|
||||
comfy pr-cache clean 456 # clean specific PR
|
||||
```
|
||||
|
||||
Cache expires after 7 days; max 10 builds kept.
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
Config file location:
|
||||
- Linux: `~/.config/comfy-cli/config.ini`
|
||||
- macOS: `~/Library/Application Support/comfy-cli/config.ini`
|
||||
- Windows: `~/AppData/Local/comfy-cli/config.ini`
|
||||
|
||||
Stores: default workspace, recent workspace, background server info, API tokens,
|
||||
manager GUI mode, launch extras.
|
||||
256
optional-skills/creative/comfyui/references/rest-api.md
Normal file
@@ -0,0 +1,256 @@
# ComfyUI REST API Reference

ComfyUI exposes a REST API plus a WebSocket for workflow execution and management.
The API surface is the same for local servers and Comfy Cloud (with auth differences).

## Connection

| | Local | Cloud |
|---|---|---|
| Base URL | `http://127.0.0.1:8188` | `https://cloud.comfy.org` |
| Auth | None (or bearer token) | `X-API-Key` header |
| WebSocket | `ws://host:port/ws?clientId={uuid}` | `wss://cloud.comfy.org/ws?clientId={uuid}&token={API_KEY}` |
| Output download | Direct bytes from `/view` | 302 redirect → signed URL (use `curl -L`) |

## Workflow Execution

### Submit Workflow

```bash
# Local
curl -X POST "http://127.0.0.1:8188/prompt" \
  -H "Content-Type: application/json" \
  -d '{"prompt": '"$(cat workflow_api.json)"', "client_id": "'"$(uuidgen)"'"}'

# Cloud
curl -X POST "https://cloud.comfy.org/api/prompt" \
  -H "X-API-Key: $COMFY_CLOUD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": '"$(cat workflow_api.json)"'}'
```

**Response:**
```json
{"prompt_id": "abc-123-def", "number": 1, "node_errors": {}}
```

If `node_errors` is non-empty, the workflow has validation errors (missing nodes, bad inputs).

### Check Job Status (Cloud)

```bash
curl -X GET "https://cloud.comfy.org/api/job/{prompt_id}/status" \
  -H "X-API-Key: $COMFY_CLOUD_API_KEY"
```

| Status | Description |
|--------|-------------|
| `pending` | Queued, waiting to start |
| `in_progress` | Currently executing |
| `completed` | Finished successfully |
| `failed` | Encountered an error |
| `cancelled` | Cancelled by user |

### Get History (Local)

```bash
# All history
curl -s "http://127.0.0.1:8188/history"

# Specific prompt
curl -s "http://127.0.0.1:8188/history/{prompt_id}"
```

The response contains `outputs` keyed by node ID, with file references.
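The polling pattern this implies can be sketched in Python. The snippet below is a hedged sketch (stdlib only, local server, no auth assumed), not part of this skill's scripts: a prompt is treated as finished once its ID appears in the `/history` response, at which point that entry's `outputs` can be read.

```python
import json
import time
import urllib.request


def finished_outputs(history: dict, prompt_id: str):
    """Return the outputs dict once the prompt appears in history, else None."""
    entry = history.get(prompt_id)
    return None if entry is None else entry.get("outputs", {})


def wait_for_outputs(base: str, prompt_id: str, interval: float = 1.0):
    """Poll /history/{prompt_id} on a local server until the job finishes."""
    while True:
        with urllib.request.urlopen(f"{base}/history/{prompt_id}", timeout=30) as resp:
            history = json.loads(resp.read())
        outputs = finished_outputs(history, prompt_id)
        if outputs is not None:
            return outputs
        time.sleep(interval)
```

For long jobs, the WebSocket (below) is a better fit than polling, since it pushes progress instead of requiring repeated requests.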
### Download Output

```bash
# Local
curl -s "http://127.0.0.1:8188/view?filename=ComfyUI_00001_.png&subfolder=&type=output" \
  -o output.png

# Cloud (follow redirect)
curl -L "https://cloud.comfy.org/api/view?filename=ComfyUI_00001_.png&subfolder=&type=output" \
  -H "X-API-Key: $COMFY_CLOUD_API_KEY" \
  -o output.png
```

---

## WebSocket Monitoring

Connect to the WebSocket for real-time execution progress.

### Connection

```bash
# Local
wscat -c "ws://127.0.0.1:8188/ws?clientId=MY-UUID"

# Cloud
wscat -c "wss://cloud.comfy.org/ws?clientId=MY-UUID&token=API_KEY"
```

### Message Types (JSON)

| Type | When | Key Fields |
|------|------|------------|
| `status` | Queue change | `queue_remaining` |
| `execution_start` | Workflow begins | `prompt_id` |
| `executing` | Node running | `node` (ID), `prompt_id` |
| `progress` | Sampling steps | `node`, `value`, `max` |
| `executed` | Node output ready | `node`, `output` |
| `execution_cached` | Nodes skipped | `nodes` (list of IDs) |
| `execution_success` | All done | `prompt_id` |
| `execution_error` | Failure | `exception_type`, `exception_message`, `traceback` |
| `execution_interrupted` | Cancelled | `prompt_id` |

When an `executing` message has `node: null`, the workflow is complete.
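That completion rule, plus the error and progress rows in the table, condense into a small classifier. This is a sketch of the message handling only; it assumes no particular WebSocket client library and works on the raw JSON text of each frame:

```python
import json


def handle_ws_message(raw: str, prompt_id: str) -> str:
    """Classify one ComfyUI WebSocket JSON message for a given prompt.

    Returns "done", "error", "progress", or "other".
    """
    msg = json.loads(raw)
    data = msg.get("data", {})
    # Messages carrying a different prompt_id belong to another job
    if data.get("prompt_id") not in (None, prompt_id):
        return "other"
    if msg["type"] == "execution_error":
        return "error"
    # execution_success is explicit; executing with node == null also marks completion
    if msg["type"] == "execution_success":
        return "done"
    if msg["type"] == "executing" and data.get("node") is None:
        return "done"
    if msg["type"] == "progress":
        return "progress"
    return "other"
```

A receive loop would call this on every text frame and stop on `"done"` or `"error"`, then fetch results from `/history/{prompt_id}`.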
### Binary Messages (Preview Images)

Format: `[4B type][4B image_type: 1=JPEG, 2=PNG][image_data...]`
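That layout can be unpacked with `struct`. A sketch, assuming the two 4-byte prefixes are big-endian unsigned ints (which is how the server packs them):

```python
import struct


def parse_preview_frame(frame: bytes):
    """Split a binary preview frame into (event_type, image_format, image_bytes)."""
    event_type, image_type = struct.unpack(">II", frame[:8])
    fmt = {1: "jpeg", 2: "png"}.get(image_type, "unknown")
    return event_type, fmt, frame[8:]
```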
---

## File Upload

### Upload Image

```bash
curl -X POST "http://127.0.0.1:8188/upload/image" \
  -F "image=@photo.png" \
  -F "type=input" \
  -F "overwrite=true"
```

Response: `{"name": "photo.png", "subfolder": "", "type": "input"}`

### Upload Mask

```bash
curl -X POST "http://127.0.0.1:8188/upload/mask" \
  -F "image=@mask.png" \
  -F "type=input" \
  -F 'original_ref={"filename":"photo.png","subfolder":"","type":"input"}'
```

---

## Node & Model Discovery

### Object Info (All Nodes)

```bash
curl -s "http://127.0.0.1:8188/object_info" | python3 -m json.tool
# Returns all node types with input/output definitions

curl -s "http://127.0.0.1:8188/object_info/KSampler"
# Returns info for one specific node type
```

### Models by Folder

```bash
curl -s "http://127.0.0.1:8188/models/checkpoints"
curl -s "http://127.0.0.1:8188/models/loras"
curl -s "http://127.0.0.1:8188/models/vae"
curl -s "http://127.0.0.1:8188/models/controlnet"
curl -s "http://127.0.0.1:8188/models/clip"
curl -s "http://127.0.0.1:8188/models/upscale_models"
curl -s "http://127.0.0.1:8188/models/embeddings"
```

Returns arrays of filenames (relative to the model folder).

---

## Queue Management

```bash
# View queue (running + pending)
curl -s "http://127.0.0.1:8188/queue"

# Clear all pending
curl -X POST "http://127.0.0.1:8188/queue" \
  -H "Content-Type: application/json" \
  -d '{"clear": true}'

# Delete specific items from the queue
curl -X POST "http://127.0.0.1:8188/queue" \
  -H "Content-Type: application/json" \
  -d '{"delete": ["prompt_id_1", "prompt_id_2"]}'

# Cancel the currently running job
curl -X POST "http://127.0.0.1:8188/interrupt"
```

---

## System Management

```bash
# System stats (VRAM, RAM, GPU, versions)
curl -s "http://127.0.0.1:8188/system_stats"

# Free GPU memory
curl -X POST "http://127.0.0.1:8188/free" \
  -H "Content-Type: application/json" \
  -d '{"unload_models": true, "free_memory": true}'
```

---

## ComfyUI Manager Endpoints (Optional)

These require ComfyUI-Manager to be installed.

```bash
# Install a custom node from a git repo
curl -X POST "http://127.0.0.1:8188/manager/queue/install" \
  -H "Content-Type: application/json" \
  -d '{"git_url": "https://github.com/user/comfyui-node.git"}'

# Check install queue status
curl -s "http://127.0.0.1:8188/manager/queue/status"

# Install a model
curl -X POST "http://127.0.0.1:8188/manager/queue/install_model" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://...", "path": "models/checkpoints", "filename": "model.safetensors"}'
```

---

## POST /prompt Payload Format

```json
{
  "prompt": {
    "3": {
      "class_type": "KSampler",
      "inputs": {
        "seed": 42,
        "steps": 20,
        "cfg": 7.5,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 1.0,
        "model": ["4", 0],
        "positive": ["6", 0],
        "negative": ["7", 0],
        "latent_image": ["5", 0]
      }
    }
  },
  "client_id": "unique-uuid-for-ws-filtering",
  "extra_data": {
    "api_key_comfy_org": "optional-partner-node-key"
  }
}
```

- `prompt`: The workflow graph (API format)
- `client_id`: UUID for WebSocket event filtering
- `extra_data.api_key_comfy_org`: Required for paid partner nodes (Flux Pro, Ideogram, etc.)
218
optional-skills/creative/comfyui/references/workflow-format.md
Normal file
@@ -0,0 +1,218 @@
# ComfyUI Workflow JSON Format

## Two Formats

ComfyUI uses two workflow formats. **Only the API format works for programmatic execution.**

### API Format (what we use)

Top-level keys are string node IDs. Each node has `class_type` and `inputs`:

```json
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 156680208700286,
      "steps": 20,
      "cfg": 8,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    },
    "_meta": {"title": "KSampler"}
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "v1-5-pruned-emaonly.safetensors"
    }
  },
  "5": {
    "class_type": "EmptyLatentImage",
    "inputs": {"width": 512, "height": 512, "batch_size": 1}
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a beautiful cat",
      "clip": ["4", 1]
    }
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "bad quality, ugly",
      "clip": ["4", 1]
    }
  },
  "9": {
    "class_type": "SaveImage",
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    }
  }
}
```

**How to detect:** top-level keys are numeric strings, and each value has a `class_type`.

### Editor Format (not directly executable)

Has `nodes[]` and `links[]` arrays — the visual graph data from the ComfyUI web editor.
This is what "Save" produces. For API use, export with "Save (API Format)" instead.

**How to detect:** the top level has `"nodes"` and `"links"` keys.
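The two detection rules above condense into one helper (a sketch):

```python
def workflow_format(wf: dict) -> str:
    """Return "editor", "api", or "unknown" for a loaded workflow dict."""
    if "nodes" in wf and "links" in wf:
        return "editor"  # visual graph export; not executable via /prompt
    if wf and all(
        isinstance(node, dict) and "class_type" in node
        for node in wf.values()
    ):
        return "api"
    return "unknown"
```

Running this check before submission gives a clearer error than the server's `node_errors` response would.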
---

## Input Connections

Inputs can be:

- **Literal values**: `"text": "a cat"`, `"seed": 42`, `"width": 512`
- **Links to other nodes**: `["node_id", output_index]` — e.g., `["4", 0]` means
  output slot 0 of node "4"

Only literal values can be modified by parameter injection; linked inputs are wiring.
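One way to separate the two (a sketch): a link is exactly a two-element list of `[node_id, output_index]`; everything else is a literal.

```python
def split_inputs(node: dict):
    """Split a node's inputs into (literals, links)."""
    literals, links = {}, {}
    for name, value in node.get("inputs", {}).items():
        is_link = (
            isinstance(value, list)
            and len(value) == 2
            and isinstance(value[0], str)
            and isinstance(value[1], int)
        )
        (links if is_link else literals)[name] = value
    return literals, links
```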
---

## Common Node Types and Their Controllable Parameters

### Text Prompts

| Node Class | Key Fields |
|------------|-----------|
| `CLIPTextEncode` | `text` (the prompt string) |
| `CLIPTextEncodeSDXL` | `text_g`, `text_l`, `width`, `height` |

Usually: positive prompt → one `CLIPTextEncode`, negative prompt → another.
Distinguish them by checking the `_meta.title` field or by tracing which one feeds
the `positive` vs `negative` input of the sampler.

### Sampling

| Node Class | Key Fields |
|------------|-----------|
| `KSampler` | `seed`, `steps`, `cfg`, `sampler_name`, `scheduler`, `denoise` |
| `KSamplerAdvanced` | `noise_seed`, `steps`, `cfg`, `sampler_name`, `scheduler`, `start_at_step`, `end_at_step` |
| `SamplerCustom` | `cfg`, `sampler`, `sigmas` |

### Image Dimensions

| Node Class | Key Fields |
|------------|-----------|
| `EmptyLatentImage` | `width`, `height`, `batch_size` |
| `LatentUpscale` | `width`, `height`, `upscale_method` |

### Model Loading

| Node Class | Key Fields | Model Folder |
|------------|-----------|-------------|
| `CheckpointLoaderSimple` | `ckpt_name` | `checkpoints` |
| `LoraLoader` | `lora_name`, `strength_model`, `strength_clip` | `loras` |
| `VAELoader` | `vae_name` | `vae` |
| `ControlNetLoader` | `control_net_name` | `controlnet` |
| `CLIPLoader` | `clip_name` | `clip` |
| `UNETLoader` | `unet_name` | `unet` |
| `DiffusionModelLoader` | `model_name` | `diffusion_models` |
| `UpscaleModelLoader` | `model_name` | `upscale_models` |

### Image Input/Output

| Node Class | Key Fields |
|------------|-----------|
| `LoadImage` | `image` (filename on the server, after upload) |
| `LoadImageMask` | `image`, `channel` |
| `SaveImage` | `filename_prefix` |
| `PreviewImage` | (no controllable fields, just previews) |

### ControlNet

| Node Class | Key Fields |
|------------|-----------|
| `ControlNetApply` | `strength` |
| `ControlNetApplyAdvanced` | `strength`, `start_percent`, `end_percent` |

### Video (AnimateDiff)

| Node Class | Key Fields |
|------------|-----------|
| `ADE_AnimateDiffLoaderWithContext` | `model_name`, `motion_scale` |
| `VHS_VideoCombine` | `frame_rate`, `format`, `filename_prefix` |

---

## Parameter Injection Pattern

To modify a workflow programmatically:

```python
import json, copy

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Deep copy to avoid mutating the original
wf = copy.deepcopy(workflow)

# Inject parameters by node ID + field name
wf["6"]["inputs"]["text"] = "a beautiful sunset"   # positive prompt
wf["7"]["inputs"]["text"] = "ugly, blurry"         # negative prompt
wf["3"]["inputs"]["seed"] = 42                     # seed
wf["3"]["inputs"]["steps"] = 30                    # steps
wf["5"]["inputs"]["width"] = 1024                  # width
wf["5"]["inputs"]["height"] = 1024                 # height
```

The `scripts/extract_schema.py` in this skill automates discovering which
node IDs and fields correspond to which user-facing parameters.

---

## Identifying Controllable Parameters (Heuristics)

When analyzing an unknown workflow, these patterns identify user-facing params:

1. **Prompt text**: any `CLIPTextEncode` → `text` field. The title/meta usually
   indicates positive vs negative.

2. **Seed**: any `KSampler` / `KSamplerAdvanced` → `seed` / `noise_seed`.
   Randomizable — set to different values for variations.

3. **Dimensions**: `EmptyLatentImage` → `width`, `height`. Common: 512, 768,
   1024 (must be multiples of 8).

4. **Steps**: `KSampler` → `steps`. More = higher quality but slower; 20-50 is typical.

5. **CFG scale**: `KSampler` → `cfg`. How closely to follow the prompt; 5-15 is typical.

6. **Model/checkpoint**: `CheckpointLoaderSimple` → `ckpt_name`. Must exactly match
   an installed model filename.

7. **LoRA**: `LoraLoader` → `lora_name`, `strength_model`. Adapter name + weight.

8. **Images for img2img**: `LoadImage` → `image`. Filename on the server after upload.

9. **Denoise strength**: `KSampler` → `denoise`. 0.0-1.0; lower = closer to the input
   image. Only relevant for img2img.
---

## Output Nodes

Output is produced by these node types:

| Node | Output Key | Content |
|------|-----------|---------|
| `SaveImage` | `images` | List of `{filename, subfolder, type}` |
| `VHS_VideoCombine` | `gifs` or `videos` | Video file references |
| `SaveAudio` | `audio` | Audio file references |
| `PreviewImage` | `images` | Temporary preview (not saved) |

After execution, fetch outputs from `/history/{prompt_id}` → `outputs` → `{node_id}`.
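Walking that structure can be sketched as follows; the output-key names are the ones from the table above, and each file reference maps directly onto a `/view` query string:

```python
from urllib.parse import urlencode

FILE_KEYS = ("images", "gifs", "videos", "audio")


def collect_outputs(outputs: dict) -> list:
    """Flatten a /history outputs dict into downloadable /view query strings."""
    urls = []
    for node_id, node_out in outputs.items():
        for key in FILE_KEYS:
            for ref in node_out.get(key, []):
                # PreviewImage refs carry type "temp"; SaveImage refs carry "output"
                query = urlencode({
                    "filename": ref["filename"],
                    "subfolder": ref.get("subfolder", ""),
                    "type": ref.get("type", "output"),
                })
                urls.append(f"/view?{query}")
    return urls
```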
182
optional-skills/creative/comfyui/scripts/check_deps.py
Normal file
@@ -0,0 +1,182 @@
#!/usr/bin/env python3
"""
check_deps.py — Check if a ComfyUI workflow's dependencies (custom nodes and models) are installed.

Queries the running ComfyUI server for installed nodes (via /object_info) and models
(via /models/{folder}), then diffs against what the workflow requires.

Usage:
    python3 check_deps.py workflow_api.json
    python3 check_deps.py workflow_api.json --host 127.0.0.1 --port 8188
    python3 check_deps.py workflow_api.json --host https://cloud.comfy.org --api-key KEY

Output format:
    {
      "is_ready": true/false,
      "missing_nodes": ["NodeClassName", ...],
      "missing_models": [{"class_type": "...", "field": "...", "value": "...", "folder": "..."}],
      "installed_nodes_count": 123,
      "required_nodes": ["KSampler", "CLIPTextEncode", ...]
    }

Requires: Python 3.10+, requests (or urllib as fallback)
"""

import json
import sys
import argparse
from urllib.parse import urlparse

try:
    import requests
    HAS_REQUESTS = True
except ImportError:
    HAS_REQUESTS = False
    import urllib.request
    import urllib.error

# Known model loader node types and which folder they reference
MODEL_LOADERS = {
    "CheckpointLoaderSimple": ("ckpt_name", "checkpoints"),
    "CheckpointLoader": ("ckpt_name", "checkpoints"),
    "unCLIPCheckpointLoader": ("ckpt_name", "checkpoints"),
    "LoraLoader": ("lora_name", "loras"),
    "LoraLoaderModelOnly": ("lora_name", "loras"),
    "VAELoader": ("vae_name", "vae"),
    "ControlNetLoader": ("control_net_name", "controlnet"),
    "DiffControlNetLoader": ("control_net_name", "controlnet"),
    "CLIPLoader": ("clip_name", "clip"),
    "DualCLIPLoader": ("clip_name1", "clip"),
    "UNETLoader": ("unet_name", "unet"),
    "DiffusionModelLoader": ("model_name", "diffusion_models"),
    "UpscaleModelLoader": ("model_name", "upscale_models"),
    "CLIPVisionLoader": ("clip_name", "clip_vision"),
    "StyleModelLoader": ("style_model_name", "style_models"),
    "GLIGENLoader": ("gligen_name", "gligen"),
    "HypernetworkLoader": ("hypernetwork_name", "hypernetworks"),
}


def http_get(url: str, headers: dict = None) -> tuple:
    """GET request; returns (status_code, body_text)."""
    if HAS_REQUESTS:
        r = requests.get(url, headers=headers or {}, timeout=30)
        return r.status_code, r.text
    req = urllib.request.Request(url, headers=headers or {})
    try:
        resp = urllib.request.urlopen(req, timeout=30)
        return resp.status, resp.read().decode()
    except urllib.error.HTTPError as e:
        return e.code, e.read().decode()


def check_deps(workflow_path: str, host: str = "http://127.0.0.1:8188", api_key: str = None):
    """Check workflow dependencies against a running server."""
    # Load workflow
    with open(workflow_path) as f:
        workflow = json.load(f)

    # Validate format
    if "nodes" in workflow and "links" in workflow:
        return {"error": "Workflow is in editor format, not API format."}

    headers = {}
    if api_key:
        headers["X-API-Key"] = api_key

    parsed_host = urlparse(host)
    hostname = (parsed_host.hostname or "").lower()
    is_cloud_host = hostname == "cloud.comfy.org" or hostname.endswith(".cloud.comfy.org")
    is_cloud = is_cloud_host or api_key is not None
    base = host.rstrip("/")

    # Get installed node types
    object_info_url = f"{base}/api/object_info" if is_cloud else f"{base}/object_info"
    status, body = http_get(object_info_url, headers)
    if status != 200:
        return {"error": f"Cannot reach server at {host}. Is ComfyUI running? HTTP {status}"}

    installed_nodes = set(json.loads(body).keys())

    # Find required node types from the workflow
    required_nodes = set()
    for node_id, node in workflow.items():
        if isinstance(node, dict) and "class_type" in node:
            required_nodes.add(node["class_type"])

    missing_nodes = sorted(required_nodes - installed_nodes)

    # Check model dependencies
    missing_models = []
    model_cache = {}  # folder → set of installed model filenames

    for node_id, node in workflow.items():
        if not isinstance(node, dict) or "class_type" not in node:
            continue
        class_type = node["class_type"]
        if class_type not in MODEL_LOADERS:
            continue

        field, folder = MODEL_LOADERS[class_type]
        inputs = node.get("inputs", {})
        model_name = inputs.get(field)

        if not model_name or not isinstance(model_name, str):
            continue

        # Fetch installed models for this folder (cached)
        if folder not in model_cache:
            models_url = f"{base}/api/models/{folder}" if is_cloud else f"{base}/models/{folder}"
            s, b = http_get(models_url, headers)
            if s == 200:
                model_cache[folder] = set(json.loads(b))
            else:
                model_cache[folder] = set()

        if model_name not in model_cache[folder]:
            missing_models.append({
                "node_id": node_id,
                "class_type": class_type,
                "field": field,
                "value": model_name,
                "folder": folder,
            })

    is_ready = len(missing_nodes) == 0 and len(missing_models) == 0

    return {
        "is_ready": is_ready,
        "missing_nodes": missing_nodes,
        "missing_models": missing_models,
        "installed_nodes_count": len(installed_nodes),
        "required_nodes": sorted(required_nodes),
    }


def main():
    parser = argparse.ArgumentParser(description="Check ComfyUI workflow dependencies")
    parser.add_argument("workflow", help="Path to workflow API JSON file")
    parser.add_argument("--host", default="http://127.0.0.1:8188", help="ComfyUI server URL")
    parser.add_argument("--port", type=int, help="Server port (overrides --host port)")
    parser.add_argument("--api-key", help="API key for cloud")
    args = parser.parse_args()

    # Handle --port override
    host = args.host
    if args.port and ":" not in host.split("//")[-1]:
        host = f"{host}:{args.port}"

    result = check_deps(args.workflow, host=host, api_key=args.api_key)
    print(json.dumps(result, indent=2))

    if result.get("error"):
        sys.exit(1)
    if not result.get("is_ready", False):
        sys.exit(1)
    sys.exit(0)


if __name__ == "__main__":
    main()
77
optional-skills/creative/comfyui/scripts/comfyui_setup.sh
Executable file
@@ -0,0 +1,77 @@
#!/usr/bin/env bash
# ComfyUI Setup — Install, launch, and verify using the official comfy-cli.
# Usage: bash scripts/comfyui_setup.sh [--nvidia|--amd|--m-series|--cpu]
#
# Prerequisites: Python 3.10+, pip
# What it does:
#   1. Installs comfy-cli (if not present)
#   2. Disables analytics tracking
#   3. Installs ComfyUI + ComfyUI-Manager
#   4. Launches the server in the background
#   5. Verifies the server is reachable

set -euo pipefail

GPU_FLAG="${1:---nvidia}"   # Default to NVIDIA

echo "==> ComfyUI Setup"
echo "    GPU flag: $GPU_FLAG"
echo ""

# Step 1: Install comfy-cli
if command -v comfy >/dev/null 2>&1; then
    echo "==> comfy-cli already installed: $(comfy -v 2>/dev/null || echo 'unknown version')"
else
    echo "==> Installing comfy-cli..."
    pip install comfy-cli
fi

# Step 2: Disable tracking (avoid interactive prompt)
echo "==> Disabling analytics tracking..."
comfy --skip-prompt tracking disable 2>/dev/null || true

# Step 3: Install ComfyUI
if comfy which 2>/dev/null | grep -q "ComfyUI"; then
    echo "==> ComfyUI already installed at: $(comfy which 2>/dev/null)"
else
    echo "==> Installing ComfyUI ($GPU_FLAG)..."
    comfy --skip-prompt install "$GPU_FLAG"
fi

# Step 4: Launch in the background
echo "==> Launching ComfyUI in background..."
comfy launch --background 2>/dev/null || {
    echo "==> Background launch failed. Trying foreground check..."
    echo "    You may need to run: comfy launch"
    exit 1
}

# Step 5: Wait for the server to be ready
echo "==> Waiting for server..."
MAX_WAIT=30
ELAPSED=0
while [ $ELAPSED -lt $MAX_WAIT ]; do
    if curl -s http://127.0.0.1:8188/system_stats >/dev/null 2>&1; then
        echo "==> Server is running!"
        curl -s http://127.0.0.1:8188/system_stats | python3 -m json.tool 2>/dev/null || true
        break
    fi
    sleep 2
    ELAPSED=$((ELAPSED + 2))
done

if [ $ELAPSED -ge $MAX_WAIT ]; then
    echo "==> Server did not start within ${MAX_WAIT}s."
    echo "    Run 'comfy launch' in the foreground to see the error logs."
    exit 1
fi

echo ""
echo "==> Setup complete!"
echo "    Server: http://127.0.0.1:8188"
echo "    Web UI: http://127.0.0.1:8188 (open in browser)"
echo "    Stop:   comfy stop"
echo ""
echo "    Next steps:"
echo "    - Download a model: comfy model download --url <URL> --relative-path models/checkpoints"
echo "    - Run a workflow:   python3 scripts/run_workflow.py --workflow <file.json> --args '{...}'"
212
optional-skills/creative/comfyui/scripts/extract_schema.py
Normal file
@@ -0,0 +1,212 @@
#!/usr/bin/env python3
"""
extract_schema.py — Analyze a ComfyUI API-format workflow and extract controllable parameters.

Reads a workflow JSON, identifies user-facing parameters (prompts, seed, dimensions, etc.)
by scanning node types and field names, and outputs a schema mapping.

Usage:
    python3 extract_schema.py workflow_api.json
    python3 extract_schema.py workflow_api.json --output schema.json

Output format:
    {
      "parameters": {
        "prompt": {"node_id": "6", "field": "text", "type": "string", "value": "..."},
        "seed": {"node_id": "3", "field": "seed", "type": "int", "value": 42},
        ...
      },
      "output_nodes": ["9"],
      "model_dependencies": [
        {"node_id": "4", "class_type": "CheckpointLoaderSimple", "field": "ckpt_name", "value": "..."}
      ]
    }

Requires: Python 3.10+ (stdlib only)
"""

import json
import sys
import argparse
from pathlib import Path

# Known parameter patterns: (class_type, field_name) → friendly_name
PARAM_PATTERNS = [
    # Prompts
    ("CLIPTextEncode", "text", "prompt"),
    ("CLIPTextEncodeSDXL", "text_g", "prompt"),
    ("CLIPTextEncodeSDXL", "text_l", "prompt_l"),
    # Sampling
    ("KSampler", "seed", "seed"),
    ("KSampler", "steps", "steps"),
    ("KSampler", "cfg", "cfg"),
    ("KSampler", "sampler_name", "sampler_name"),
    ("KSampler", "scheduler", "scheduler"),
    ("KSampler", "denoise", "denoise"),
    ("KSamplerAdvanced", "noise_seed", "seed"),
    ("KSamplerAdvanced", "steps", "steps"),
    ("KSamplerAdvanced", "cfg", "cfg"),
    ("KSamplerAdvanced", "sampler_name", "sampler_name"),
    ("KSamplerAdvanced", "scheduler", "scheduler"),
    # Dimensions
    ("EmptyLatentImage", "width", "width"),
    ("EmptyLatentImage", "height", "height"),
    ("EmptyLatentImage", "batch_size", "batch_size"),
    # Image input
    ("LoadImage", "image", "image"),
    ("LoadImageMask", "image", "mask_image"),
    # LoRA
    ("LoraLoader", "lora_name", "lora_name"),
    ("LoraLoader", "strength_model", "lora_strength"),
    # Output
    ("SaveImage", "filename_prefix", "filename_prefix"),
]

# Node types that produce output files
OUTPUT_NODES = {"SaveImage", "PreviewImage", "VHS_VideoCombine", "SaveAudio", "SaveAnimatedWEBP", "SaveAnimatedPNG"}

# Node types that load models (for dependency checking)
MODEL_LOADERS = {
    "CheckpointLoaderSimple": ("ckpt_name", "checkpoints"),
    "CheckpointLoader": ("ckpt_name", "checkpoints"),
    "LoraLoader": ("lora_name", "loras"),
    "LoraLoaderModelOnly": ("lora_name", "loras"),
    "VAELoader": ("vae_name", "vae"),
    "ControlNetLoader": ("control_net_name", "controlnet"),
    "CLIPLoader": ("clip_name", "clip"),
    "DualCLIPLoader": ("clip_name1", "clip"),
    "UNETLoader": ("unet_name", "unet"),
    "DiffusionModelLoader": ("model_name", "diffusion_models"),
    "UpscaleModelLoader": ("model_name", "upscale_models"),
    "CLIPVisionLoader": ("clip_name", "clip_vision"),
}


def validate_api_format(workflow: dict) -> bool:
    """Check whether the workflow is in API format (not editor format)."""
    if "nodes" in workflow and "links" in workflow:
        return False
    # API format: top-level keys are node IDs, each with a class_type
    for node_id, node in workflow.items():
        if isinstance(node, dict) and "class_type" in node:
            return True
    return False


def infer_type(value) -> str:
    """Infer a JSON schema type from a Python value."""
    if isinstance(value, bool):
        return "bool"
    if isinstance(value, int):
        return "int"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        return "string"
    if isinstance(value, list):
        return "link"  # connection to another node
    return "unknown"


def extract_schema(workflow: dict) -> dict:
    """Extract controllable parameters from a workflow."""
    parameters = {}
    output_nodes = []
    model_deps = []
    name_counts = {}  # track duplicate friendly names

    for node_id, node in workflow.items():
        if not isinstance(node, dict) or "class_type" not in node:
            continue
|
||||
class_type = node["class_type"]
|
||||
inputs = node.get("inputs", {})
|
||||
meta_title = node.get("_meta", {}).get("title", "")
|
||||
|
||||
# Check if this is an output node
|
||||
if class_type in OUTPUT_NODES:
|
||||
output_nodes.append(node_id)
|
||||
|
||||
# Check if this is a model loader
|
||||
if class_type in MODEL_LOADERS:
|
||||
field, folder = MODEL_LOADERS[class_type]
|
||||
if field in inputs and isinstance(inputs[field], str):
|
||||
model_deps.append({
|
||||
"node_id": node_id,
|
||||
"class_type": class_type,
|
||||
"field": field,
|
||||
"value": inputs[field],
|
||||
"folder": folder,
|
||||
})
|
||||
|
||||
# Extract controllable parameters
|
||||
for pattern_class, pattern_field, friendly_name in PARAM_PATTERNS:
|
||||
if class_type != pattern_class:
|
||||
continue
|
||||
if pattern_field not in inputs:
|
||||
continue
|
||||
value = inputs[pattern_field]
|
||||
val_type = infer_type(value)
|
||||
if val_type == "link":
|
||||
continue # skip linked inputs — not directly controllable
|
||||
|
||||
# Disambiguate duplicate friendly names
|
||||
# Use title hint for prompt fields
|
||||
actual_name = friendly_name
|
||||
if friendly_name == "prompt" and meta_title:
|
||||
title_lower = meta_title.lower()
|
||||
if "negative" in title_lower or "neg" in title_lower:
|
||||
actual_name = "negative_prompt"
|
||||
|
||||
# Handle remaining duplicates by appending node_id
|
||||
if actual_name in name_counts:
|
||||
name_counts[actual_name] += 1
|
||||
actual_name = f"{actual_name}_{node_id}"
|
||||
else:
|
||||
name_counts[actual_name] = 1
|
||||
|
||||
parameters[actual_name] = {
|
||||
"node_id": node_id,
|
||||
"field": pattern_field,
|
||||
"type": val_type,
|
||||
"value": value,
|
||||
}
|
||||
|
||||
return {
|
||||
"parameters": parameters,
|
||||
"output_nodes": output_nodes,
|
||||
"model_dependencies": model_deps,
|
||||
}
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Extract controllable parameters from a ComfyUI workflow")
|
||||
parser.add_argument("workflow", help="Path to workflow API JSON file")
|
||||
parser.add_argument("--output", "-o", help="Output file (default: stdout)")
|
||||
args = parser.parse_args()
|
||||
|
||||
workflow_path = Path(args.workflow)
|
||||
if not workflow_path.exists():
|
||||
print(f"Error: {workflow_path} not found", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
with open(workflow_path) as f:
|
||||
workflow = json.load(f)
|
||||
|
||||
if not validate_api_format(workflow):
|
||||
print("Error: Workflow is in editor format, not API format.", file=sys.stderr)
|
||||
print("Re-export from ComfyUI using 'Save (API Format)' button.", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
schema = extract_schema(workflow)
|
||||
|
||||
output_json = json.dumps(schema, indent=2)
|
||||
if args.output:
|
||||
Path(args.output).write_text(output_json)
|
||||
print(f"Schema written to {args.output}", file=sys.stderr)
|
||||
else:
|
||||
print(output_json)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
353
optional-skills/creative/comfyui/scripts/run_workflow.py
Normal file
@@ -0,0 +1,353 @@
#!/usr/bin/env python3
"""
run_workflow.py — Inject parameters into a ComfyUI workflow, submit it, monitor execution,
and download outputs.

Usage:
    # Local server
    python3 run_workflow.py --workflow workflow_api.json \
        --args '{"prompt": "a cat", "seed": 42}' \
        --output-dir ./outputs

    # Cloud server
    python3 run_workflow.py --workflow workflow_api.json \
        --args '{"prompt": "a cat"}' \
        --host https://cloud.comfy.org \
        --api-key comfyui-xxxxxxx \
        --output-dir ./outputs

    # With schema file (pre-extracted)
    python3 run_workflow.py --workflow workflow_api.json \
        --schema schema.json \
        --args '{"prompt": "a cat"}' \
        --output-dir ./outputs

Requires: Python 3.10+, requests (or urllib as fallback)
"""

import json
import sys
import time
import uuid
import copy
import argparse
from pathlib import Path
from urllib.parse import urlencode, urlparse

try:
    import requests
    HAS_REQUESTS = True
except ImportError:
    HAS_REQUESTS = False
    import urllib.request
    import urllib.error


def http_get(url: str, headers: dict = None, follow_redirects: bool = True) -> tuple:
    """GET request, returns (status_code, body_bytes, response_headers)."""
    if HAS_REQUESTS:
        r = requests.get(url, headers=headers or {}, allow_redirects=follow_redirects, timeout=30)
        return r.status_code, r.content, dict(r.headers)
    else:
        req = urllib.request.Request(url, headers=headers or {})
        try:
            resp = urllib.request.urlopen(req, timeout=30)
            return resp.status, resp.read(), dict(resp.headers)
        except urllib.error.HTTPError as e:
            return e.code, e.read(), dict(e.headers)


def http_post(url: str, data: dict, headers: dict = None) -> tuple:
    """POST JSON request, returns (status_code, response_dict)."""
    payload = json.dumps(data).encode()
    hdrs = {"Content-Type": "application/json"}
    if headers:
        hdrs.update(headers)
    if HAS_REQUESTS:
        r = requests.post(url, json=data, headers=hdrs, timeout=30)
        try:
            return r.status_code, r.json()
        except Exception:
            return r.status_code, {"raw": r.text}
    else:
        req = urllib.request.Request(url, data=payload, headers=hdrs, method="POST")
        try:
            resp = urllib.request.urlopen(req, timeout=30)
            return resp.status, json.loads(resp.read())
        except urllib.error.HTTPError as e:
            return e.code, json.loads(e.read())


class ComfyRunner:
    def __init__(self, host: str = "http://127.0.0.1:8188", api_key: str = None):
        self.host = host.rstrip("/")
        self.api_key = api_key
        parsed_host = urlparse(self.host).hostname or ""
        self.is_cloud = parsed_host.lower() == "cloud.comfy.org" or api_key is not None
        self.client_id = str(uuid.uuid4())

    @property
    def headers(self) -> dict:
        h = {}
        if self.api_key:
            h["X-API-Key"] = self.api_key
        return h

    def api_url(self, path: str) -> str:
        """Build URL. Cloud uses /api prefix for some endpoints."""
        if self.is_cloud and not path.startswith("/api"):
            # Cloud endpoints: /api/prompt, /api/view, /api/job, /api/queue
            return f"{self.host}/api{path}"
        return f"{self.host}{path}"

    def check_server(self) -> bool:
        """Check if server is reachable."""
        try:
            url = self.api_url("/system_stats") if not self.is_cloud else f"{self.host}/api/system_stats"
            status, _, _ = http_get(url, self.headers)
            return status == 200
        except Exception:
            return False

    def submit(self, workflow: dict) -> dict:
        """Submit workflow for execution. Returns {prompt_id, node_errors}."""
        payload = {"prompt": workflow, "client_id": self.client_id}
        if self.api_key and self.is_cloud:
            payload.setdefault("extra_data", {})["api_key_comfy_org"] = self.api_key
        url = self.api_url("/prompt")
        status, resp = http_post(url, payload, self.headers)
        if status != 200:
            return {"error": f"HTTP {status}", "details": resp}
        return resp

    def poll_status(self, prompt_id: str, timeout: int = 120) -> dict:
        """Poll until job completes. Returns final status dict."""
        start = time.time()
        poll_interval = 2.0

        while time.time() - start < timeout:
            if self.is_cloud:
                # Cloud has a dedicated status endpoint
                url = f"{self.host}/api/job/{prompt_id}/status"
                status, body, _ = http_get(url, self.headers)
                if status == 200:
                    data = json.loads(body) if isinstance(body, bytes) else body
                    job_status = data.get("status", "unknown")
                    if job_status == "completed":
                        return {"status": "success", "data": data}
                    elif job_status == "failed":
                        return {"status": "error", "data": data}
                    elif job_status == "cancelled":
                        return {"status": "cancelled", "data": data}
                    # still running, continue polling
            else:
                # Local: check /history/{prompt_id}
                url = f"{self.host}/history/{prompt_id}"
                status, body, _ = http_get(url, self.headers)
                if status == 200:
                    data = json.loads(body) if isinstance(body, bytes) else body
                    if prompt_id in data:
                        entry = data[prompt_id]
                        if entry.get("status", {}).get("completed", False):
                            return {"status": "success", "outputs": entry.get("outputs", {})}
                        if entry.get("status", {}).get("status_str") == "error":
                            return {"status": "error", "data": entry}

            time.sleep(poll_interval)
            poll_interval = min(poll_interval * 1.2, 10.0)

        return {"status": "timeout", "elapsed": time.time() - start}

    def get_outputs(self, prompt_id: str) -> dict:
        """Get output file info from history."""
        if self.is_cloud:
            url = f"{self.host}/api/job/{prompt_id}/status"
        else:
            url = f"{self.host}/history/{prompt_id}"
        status, body, _ = http_get(url, self.headers)
        if status != 200:
            return {}
        data = json.loads(body) if isinstance(body, bytes) else body
        if self.is_cloud:
            return data.get("outputs", {})
        if prompt_id in data:
            return data[prompt_id].get("outputs", {})
        return {}

    def download_output(self, filename: str, subfolder: str, file_type: str, output_dir: Path) -> Path:
        """Download a single output file."""
        params = urlencode({"filename": filename, "subfolder": subfolder, "type": file_type})
        url = self.api_url(f"/view?{params}")
        status, body, _ = http_get(url, self.headers, follow_redirects=True)
        if status != 200:
            raise RuntimeError(f"Failed to download {filename}: HTTP {status}")
        out_path = output_dir / filename
        out_path.write_bytes(body)
        return out_path


def load_schema(schema_path: str = None, workflow: dict = None) -> dict:
    """Load or generate parameter schema."""
    if schema_path:
        with open(schema_path) as f:
            return json.load(f)
    # Inline extraction (same logic as extract_schema.py but simplified)
    if workflow is None:
        return {"parameters": {}}
    # Import from sibling script
    script_dir = Path(__file__).parent
    sys.path.insert(0, str(script_dir))
    from extract_schema import extract_schema
    return extract_schema(workflow)


def inject_params(workflow: dict, schema: dict, args: dict) -> dict:
    """Inject user parameters into workflow based on schema mapping."""
    wf = copy.deepcopy(workflow)
    params = schema.get("parameters", {})

    for param_name, value in args.items():
        if param_name not in params:
            print(f"Warning: unknown parameter '{param_name}', skipping", file=sys.stderr)
            continue
        mapping = params[param_name]
        node_id = mapping["node_id"]
        field = mapping["field"]
        if node_id in wf and "inputs" in wf[node_id]:
            wf[node_id]["inputs"][field] = value
        else:
            print(f"Warning: node {node_id} not found in workflow", file=sys.stderr)

    return wf


def main():
    parser = argparse.ArgumentParser(description="Run a ComfyUI workflow with parameter injection")
    parser.add_argument("--workflow", required=True, help="Path to workflow API JSON file")
    parser.add_argument("--args", default="{}", help="JSON parameters to inject")
    parser.add_argument("--schema", help="Path to schema JSON (from extract_schema.py). Auto-generated if omitted.")
    parser.add_argument("--host", default="http://127.0.0.1:8188", help="ComfyUI server URL")
    parser.add_argument("--api-key", help="API key for cloud (X-API-Key)")
    parser.add_argument("--output-dir", default="./outputs", help="Directory to save outputs")
    parser.add_argument("--timeout", type=int, default=120, help="Max seconds to wait for completion")
    parser.add_argument("--no-download", action="store_true", help="Skip downloading outputs")
    parser.add_argument("--submit-only", action="store_true", help="Submit and return prompt_id without waiting")
    args = parser.parse_args()

    # Load workflow
    workflow_path = Path(args.workflow)
    if not workflow_path.exists():
        print(json.dumps({"error": f"Workflow file not found: {args.workflow}"}))
        sys.exit(1)
    with open(workflow_path) as f:
        workflow = json.load(f)

    # Validate format
    if "nodes" in workflow and "links" in workflow:
        print(json.dumps({"error": "Workflow is in editor format, not API format. Re-export with 'Save (API Format)'."}))
        sys.exit(1)

    # Parse user args
    try:
        user_args = json.loads(args.args)
    except json.JSONDecodeError as e:
        print(json.dumps({"error": f"Invalid --args JSON: {e}"}))
        sys.exit(1)

    # Load/generate schema and inject params
    schema = load_schema(args.schema, workflow)
    if user_args:
        workflow = inject_params(workflow, schema, user_args)

    # Connect to server
    runner = ComfyRunner(host=args.host, api_key=args.api_key)

    # Check server
    if not runner.check_server():
        print(json.dumps({"error": f"Cannot reach server at {args.host}. Is ComfyUI running?"}))
        sys.exit(1)

    # Submit
    result = runner.submit(workflow)
    if "error" in result:
        print(json.dumps({"error": "Submission failed", "details": result}))
        sys.exit(1)

    prompt_id = result.get("prompt_id")
    if not prompt_id:
        print(json.dumps({"error": "No prompt_id in response", "response": result}))
        sys.exit(1)

    # Check for node errors
    node_errors = result.get("node_errors", {})
    if node_errors:
        print(json.dumps({"error": "Workflow validation failed", "node_errors": node_errors}))
        sys.exit(1)

    if args.submit_only:
        print(json.dumps({"status": "submitted", "prompt_id": prompt_id}))
        sys.exit(0)

    # Poll for completion
    print(f"Submitted: {prompt_id}. Waiting...", file=sys.stderr)
    poll_result = runner.poll_status(prompt_id, timeout=args.timeout)

    if poll_result["status"] == "timeout":
        print(json.dumps({"status": "timeout", "prompt_id": prompt_id, "elapsed": poll_result["elapsed"]}))
        sys.exit(1)
    elif poll_result["status"] == "error":
        print(json.dumps({"status": "error", "prompt_id": prompt_id, "details": poll_result.get("data")}))
        sys.exit(1)
    elif poll_result["status"] == "cancelled":
        print(json.dumps({"status": "cancelled", "prompt_id": prompt_id}))
        sys.exit(1)

    # Download outputs
    outputs = poll_result.get("outputs") or runner.get_outputs(prompt_id)
    if args.no_download:
        print(json.dumps({"status": "success", "prompt_id": prompt_id, "outputs": outputs}))
        sys.exit(0)

    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    downloaded = []
    for node_id, node_output in outputs.items():
        # ComfyUI puts images/videos under "images" key (even for video)
        for key in ("images", "gifs", "videos", "audio"):
            if key not in node_output:
                continue
            for file_info in node_output[key]:
                filename = file_info.get("filename", "")
                subfolder = file_info.get("subfolder", "")
                file_type = file_info.get("type", "output")
                if not filename:
                    continue
                try:
                    out_path = runner.download_output(filename, subfolder, file_type, output_dir)
                    # Detect media type from extension
                    ext = Path(filename).suffix.lower()
                    if ext in (".mp4", ".webm", ".avi", ".mov", ".gif"):
                        media_type = "video"
                    elif ext in (".wav", ".mp3", ".flac", ".ogg"):
                        media_type = "audio"
                    else:
                        media_type = "image"
                    downloaded.append({
                        "file": str(out_path),
                        "node_id": node_id,
                        "type": media_type,
                        "filename": filename,
                    })
                except Exception as e:
                    print(f"Warning: failed to download {filename}: {e}", file=sys.stderr)

    print(json.dumps({
        "status": "success",
        "prompt_id": prompt_id,
        "outputs": downloaded,
    }, indent=2))


if __name__ == "__main__":
    main()